Tesla held its “AI Day” event, at which CEO Elon Musk presented key developments from the Tesla AI team and showcased some interesting projects. The event was live-streamed on Tesla’s official YouTube channel.
In this blog post, we will cover the four most important highlights from the entire stream:
Unveiling of the Dojo Training Chip
Tesla’s chief executive engineer, Ganesh Venkataramanan, unveiled the computer chip that Tesla designed and built in-house to power its supercomputer, Dojo. Much of Tesla’s AI architecture relies on Dojo, a neural-network training computer that Musk said will be able to process vast amounts of camera imaging data four times faster than any other computing system. The idea is that the AI software produced with Dojo’s help will be delivered to Tesla customers via over-the-air updates.
The chip Tesla unveiled is called the “D1”, and it is built on 7 nm technology. Venkataramanan proudly noted that it offers GPU-level compute with CPU connectivity and twice the I/O bandwidth of today’s state-of-the-art network switching chips. He also walked through some of the chip’s technical details, explaining that Tesla wants to own as much of its technology stack as possible to avoid bottlenecks. Tesla revealed a next-generation computer chip built by Samsung in 2020, but it has not managed to escape the global chip shortage that has plagued the auto industry for months. To cope with the shortage, Musk revealed on an earnings call this summer that the company had been forced to rewrite some vehicle software after substituting alternative chips.
Solving Computer Vision Problems
On AI Day, Tesla again defended its computer-vision-based approach to autonomy. This approach uses machine learning and neural networks to allow the car to operate anywhere on Earth with the help of its “Autopilot” system. Andrej Karpathy, Tesla’s head of AI, described Tesla’s architecture as “building an animal from scratch” that moves, senses its surroundings, and acts intelligently and autonomously based on what it perceives.
He also illustrated how Tesla’s neural networks have developed over time, and how the car’s visual cortex, the key part of the car’s “brain” that processes visual information, is now designed as part of a larger neural network architecture so that information flows through the system more intelligently.
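The idea of a shared “visual cortex” feeding several downstream tasks can be sketched in a few lines. This is purely an illustration of the pattern, not Tesla’s actual architecture; every layer size, weight, and task name below is an assumption:

```python
import random

random.seed(0)

def relu(v):
    return [max(x, 0.0) for x in v]

def linear(v, w):
    # w is a list of rows; each row holds one output unit's weights.
    return [sum(x * wi for x, wi in zip(v, row)) for row in w]

def rand_matrix(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

# Shared "visual cortex": encode the camera input once...
W_backbone = rand_matrix(16, 8)  # 16 shared features from 8 input features

def backbone(image_features):
    return relu(linear(image_features, W_backbone))

# ...then several task heads reuse that single shared representation,
# instead of each task re-processing the raw image from scratch.
W_objects = rand_matrix(5, 16)   # hypothetical object-class head
W_lanes = rand_matrix(3, 16)     # hypothetical lane-geometry head

image = [random.uniform(-1, 1) for _ in range(8)]
shared = backbone(image)             # computed once
objects = linear(shared, W_objects)  # 5 object scores
lanes = linear(shared, W_lanes)      # 3 lane parameters
print(len(shared), len(objects), len(lanes))  # 16 5 3
```

The design point is the reuse: the expensive shared encoding is computed once per frame, and each lightweight head taps into it, which is what lets information flow through one system rather than many isolated ones.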
Dojo will be the technology behind Tesla’s FSD (Full Self-Driving) system. The supercomputer comprises several components, such as a simulation architecture, that the company hopes to generalize and even open up to other tech companies and automakers.
Tesla also revealed that it is working on the core algorithms that drive the car by “creating a high fidelity representation of the world and planning trajectories in that space”. The company also plans to fuse information from the car’s sensors across space and time to create ground-truth data. This, in turn, allows the neural network to make predictions while driving.
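The intuition behind fusing measurements across time can be shown with a toy example. The scenario and numbers below are invented for illustration: offline, with a whole clip of noisy per-frame estimates available, aggregating across frames yields a far more stable label than any single frame provides, and that cleaner label can then supervise a network that must predict from single frames in real time:

```python
import statistics

# Noisy per-frame estimates of an obstacle's distance in metres, as one
# sensor frame at a time might produce them (true distance here is 20.0).
frames = [20.4, 19.7, 20.1, 19.9, 21.0, 19.6, 20.2, 19.8]

# Offline fusion across time: the median of the clip is robust to
# single-frame noise and serves as a "ground truth" training label.
fused_label = statistics.median(frames)

fused_error = abs(fused_label - 20.0)
worst_single_frame_error = max(abs(f - 20.0) for f in frames)

print(fused_label, fused_error, worst_single_frame_error)  # 20.0 0.0 1.0
```

Real systems fuse far richer quantities (positions, velocities, full scene geometry) across both multiple cameras and time, but the principle is the same: hindsight over a whole sequence produces better labels than any instantaneous measurement.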
The Tesla Bot
Turning to the next generation of automation, Musk used the conference to reveal that his company is working on a humanoid robot, introduced on stage by an alien-like dancer in a shiny black mask and white bodysuit.
The robot will use Tesla’s existing technology for its automated machines and its self-driving software. The bot’s code name is ‘Optimus’.
Musk also stressed that Optimus is “intended to be friendly” and is being built so that, on a physical and mechanical level, a person can overpower its five-foot-eight-inch frame. Optimus will weigh around 125 pounds and will have a screen for a face.