Before co-founding Mutual Mobile, Mickey passed time…
During an interview back in 2013, Mickey offered:
“There’s really two things that you should focus on: density and diversity of experience. Put yourself in every situation you can have a new experience because that’s the most valuable thing. More valuable than capital…and anything else, is your own experience. And it’s often free, if you’re willing to go get it.”
Six years later, as Mutual Mobile celebrates its 10-year anniversary, it seems fitting to dive into the digital movements that initially inspired, and continue to evolve, Mutual Mobile’s commitment to building truly mobile experiences.
Tech trends have always been intertwined. You get into things like TensorFlow and machine learning, robotics, autonomous cars, pizza-making robots, etc. They all have roots in academia, and these were the types of things that were hot academic research topics in the early 2000s. As a result, they culminated in things like the DARPA challenge–or other competitions to build cutting-edge things. But there were limitations.
They were significant enough to essentially make a lot of this tech research go quiet for almost 10 years. One of the things holding back self-driving cars at that time was poor computer vision. And in order for machine learning to work, you needed huge data sets that we weren’t yet generating. The hardware to collect the data so the software could figure out what to do with it–it just wasn’t there yet.
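That data-hunger is easy to see even in miniature. The sketch below is purely illustrative (not from the interview): a toy one-nearest-neighbor classifier on synthetic 2D points, where accuracy tends to climb as the training set grows–the “huge data sets” problem in the smallest possible form.

```python
import random
import math

random.seed(42)

def sample_point():
    """Draw a labeled point from one of two overlapping clusters."""
    label = random.choice([0, 1])
    center = (0.0, 0.0) if label == 0 else (2.0, 2.0)
    x = center[0] + random.gauss(0, 1.2)
    y = center[1] + random.gauss(0, 1.2)
    return (x, y), label

def nearest_neighbor_predict(train, point):
    """Classify a point by the label of its closest training example."""
    _, label = min(train, key=lambda item: math.dist(item[0], point))
    return label

def accuracy(n_train, n_test=500):
    """Measure test accuracy of 1-NN trained on n_train samples."""
    train = [sample_point() for _ in range(n_train)]
    test = [sample_point() for _ in range(n_test)]
    correct = sum(
        nearest_neighbor_predict(train, p) == lbl for p, lbl in test
    )
    return correct / n_test

for n in (5, 50, 500):
    print(f"train size {n:4d}: accuracy {accuracy(n):.2f}")
```

Nothing here is sophisticated–the point is that the same algorithm goes from near-coin-flip to respectable purely by being fed more examples, which is why the field stalled until large data sets (and the hardware to collect them) existed.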
The bigger story is around the research from the academic world moving to industry application. I had a professor at UT, Peter Stone, who said, “There’s a time when things should go private and drive profits.” Actual application of the research is more exciting, and it was time to get started.
This transition is core to what emerging technology companies do. Just like people, you can’t learn from technology or teach it to do things without real life experience. Turning concepts and research into products and services takes time. You have to figure out how to make it scale. Seeing devices and connected things in hands and homes, for example, that’s when you know an emerging technology has been applied.
There’s going to be a whole host of new applications of technologies now that they’ve matured and been refined. Commercialization of emerging technology is cyclical. It’s interesting how we always overestimate what will happen in the next year, and underestimate progress in the next 10.
Considering we can’t evaluate what intelligence really is for people–physical, intellectual, social, etc.–how are we supposed to consistently define artificial intelligence?
AI is a tricky topic because it has so many historical diversions and false starts–probably more than any other topic in tech. It’s the easiest thing to be wrong about. Part of the reason is because AI predates computers. Take automotive–automatic transmissions, power steering, anti-lock brakes–technically these things do what we used to do manually.
With AI there is a sliding autonomy where you can make the change little by little. People have imagined AI in a wholesale way, and that can be problematic. Overall, the complexity of ownership makes it harder to forecast.
I think the question now is what more do we do with it? It seems like a solved problem. I remember when Amazon got letters criticizing their sale of facial recognition tech to the government. Jeff Bezos said, “well, we’re a tech company–we’ll sell it, but we don’t get into the whole privacy issue.”
It’s interesting to think about facial recognition because while people are amazing at it, computers are even better. Academia takes an objective stance because it can: the tech works, and it’s functional. But the real-world application of this tech has a more human, subjective stance because it has to. Human consequences raise the stakes. Decisions aren’t yes or no, can or cannot–they must consider the what and the who.
I tend to think of it, especially being a new parent, as how we’re raising our digital child. 90% or more of tech and software development since the 50s has been teaching them nouns, adjectives, how to count–the basics. So from an intelligence perspective, our digital child is 2-3 years old.
We have hardware with a lot of sensors to gather information and experiences, but when it comes to fine motor skills and tactile development, we’re just getting started. It’s still the “look, but don’t touch!” mentality. Because we don’t have that level of trust yet. The digital child is still saying goofy things and asking a ton of questions.
The next step is more about action and decisions, so the stakes get higher. Driving a car, riding a bike, interpreting a situation more subtextually. The potential for it to be better than we are is where we get hung up because there’s fear around it.
As opposed to being afraid of machines taking over, what if we just make a commitment to raising responsible digital children? Teach empathy, kindness, and values that will make them better than us. Let’s be optimistic and look forward to the day when they show they have it better than we do.
Instilling creations with intelligence is instinctively human – goodness should always be the endgame.