Much has been and will be written about Tesla's AI Day 2022. Nearly all of it will emphasize Tesla's significant lead in autonomous driving capability, FSD; its accelerating efforts in robotics, with particular emphasis on its bipedal autonomous robot, Optimus; and, to a lesser extent, its development of a supercomputer, Dojo, that in many ways may prove more important than either of its sexier FSD and Optimus efforts.
Millions of words will be used to analyze every technical bullet point and schematic. My intent in this post isn’t to reproduce this reporting, but rather, to provide a perspective from someone who has inhabited the Tesla ecosystem from the very early days.
Above: Tesla's Elon Musk (Flickr: Just Click's with a Camera)
As some readers may know, I’ve been a software and hardware engineer throughout my long career. In fact, my graduate research and work advocated and used early AI techniques (the precursors to artificial neural nets) for manufacturing (metal cutting, to be precise). This occurred way, way back in the 1970s—long before AI became chic and decades before the young engineers on the AI Day stage were even born!
The Tesla AI Day presentation is unique in many respects. Never, as far as I know, has any major tech company (I agree with Elon Musk that Tesla is NOT a car company) presented three hours of technical detail describing its ongoing strategic projects. Sure, a tech giant might provide a view from 10,000 feet with a distinct marketing/PR feel, but never have I seen a group of young engineers, fronted by the company's CEO, provide a view from ground level that discusses technical architecture and approach. That in and of itself is impressive.
And never have I seen a CEO with Elon Musk’s grasp of technical detail. Most CEOs are one tech question deep, two at most. Elon, it seems, can get into the technical weeds and might even prefer to stay right there. That says a lot about him and what he values within the company.
But what of the AI Day presentation itself—what are the takeaways that many summaries miss?
In my view, Tesla buried the lede in its AI Day presentation: it discussed its supercomputer, Dojo, last when it should have come first. Why? Because in many ways, Dojo will be pivotal in Tesla's efforts to create Level 4 autonomous driving in the next few years, then Level 5 autonomous vehicles over the longer term, along with its dream of autonomous fleets.
Petabytes of driving information are being gleaned from early users of Tesla's Full Self-Driving capability (FSD), but all of that information (e.g., data, video, exceptions) must be properly labeled and interpreted to enhance machine learning. To date, the task of labeling and interpreting this information has been daunting—too time-consuming for rapid progress.
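To get a feel for how daunting that bottleneck is, here is a back-of-the-envelope sketch in Python. Every number in it (clip counts, seconds per label, team size) is an illustrative assumption of mine, not a Tesla figure; the point is only the ratio between hand labeling and machine auto-labeling.

```python
# Back-of-the-envelope: why hand labeling can't keep pace with fleet data.
# All numbers below are illustrative assumptions, not Tesla figures.

def labeling_years(num_clips: int, frames_per_clip: int,
                   seconds_per_frame: float, workers: int) -> float:
    """Calendar years of labeling effort, spread evenly across a team."""
    total_seconds = num_clips * frames_per_clip * seconds_per_frame
    work_year_seconds = 2000 * 3600  # roughly 2000 working hours per year
    return total_seconds / work_year_seconds / workers

# 1 million clips, 300 frames each, 30 s per hand-drawn label, 1000 labelers
manual = labeling_years(1_000_000, 300, 30.0, 1000)

# Same data, 0.03 s per frame of machine auto-labeling on 1000 compute workers
auto = labeling_years(1_000_000, 300, 0.03, 1000)

print(f"manual: {manual:.2f} years vs. auto: {auto:.5f} years")
```

Under these toy assumptions, manual labeling of a single million-clip batch ties up a thousand-person team for over a year, while an auto-labeling pipeline of the kind Dojo is built to feed finishes the same batch roughly a thousand times faster. The exact numbers are invented; the three-orders-of-magnitude gap is the argument.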
Dojo is a unique, powerful and extensible supercomputer architecture that will change that. It is, in my view, the enabling factor that will move FSD into prime time (and no, as a beta tester for FSD, I do NOT believe the current FSD system is ready for prime time, even though it's an awesome engineering achievement).
Dojo is no less important for the development of Optimus. In fact, the same rules apply. For an autonomous robot to intelligently navigate the world, it must recognize a mind-bogglingly large collection of situations and objects, and it must adapt as it is asked to operate within more and more use cases. To do that, the machine learning approach for Optimus must be iterative and fast, responding to additional petabytes of information that must be labeled and integrated into the learning architecture. Dojo will enable that.
Finally, there was a single comment about Dojo, made by Elon, that may have escaped your notice, especially if you don't follow AI closely. Elon suggested that Dojo may ultimately enable "AGI." That's a mind-blowing assertion and a somewhat ominous statement. For the uninitiated, AGI stands for "artificial general intelligence," the holy grail of the AI community. A quick Google search defines AGI this way:
Artificial general intelligence (AGI) is the representation of generalized human cognitive abilities in software so that, faced with an unfamiliar task, the AGI system could find a solution. The intention of an AGI system is to perform any task that a human being is capable of.
That would include the design and implementation of even more powerful AGI systems, ultimately leading to a ‘superintelligence.’ It’s ironic that Elon is among many technology leaders who have argued that the unconstrained development of an AGI leading to superintelligence could be dangerous to humans in many ways that we can predict, as well as in some ways that are unforeseen.
In his comments about privacy and safety, Elon alluded to this, suggesting that a "regulatory agency" (like the NHTSA or FDA) is needed to control the technologies that Dojo could unleash. Yet others, like Nick Bostrom in his book, Superintelligence, argue that "We would want the solution to the [AGI] safety problem before somebody figures out the solution to the AI problem."
Tesla builds the safest cars the world has ever seen. Let’s hope that Elon and the young engineers behind him have the wisdom to build the safest AI as well.
Guest Post: Roger Pressman, Founder of EVANNEX