Martin Viecha (00:00):

My name is Martin Viecha, VP of Investor Relations, and I’m joined today by Elon Musk, Vaibhav Taneja, and a number of other executives. Our Q1 results were announced at about 3:00 PM Central Time in the update deck we published at the same link as this webcast. During this call, we will discuss our business outlook and make forward-looking statements. These comments are based on our predictions and expectations as of today. Actual events and results could differ materially due to a number of risks and uncertainties, including those mentioned in our most recent filings with the SEC. During the question and answer portion of today’s call, please limit yourself to one question and one follow up. Please use the raise hand button to join the question queue. But before we jump into Q&A, Elon has some opening remarks. Elon.

Elon Musk (00:49):

Thanks, Martin. So to recap, in Q1 we navigated several unforeseen challenges as well as the ramp of the updated Model 3 in Fremont. As we have all seen, the EV adoption rate globally is under pressure, and a lot of other auto manufacturers are pulling back on EVs and pursuing plug-in hybrids instead. We believe this is not the right strategy and that electric vehicles will ultimately dominate the market.
Despite these challenges, the Tesla team did a great job executing in a tough environment and energy storage deployments, the Megapack in particular, reached an all-time high in Q1 leading to record profitability for the energy business. And that looks likely to continue to increase in the quarters and years ahead. It will increase. We actually know that it will, so, significantly faster than the car business, as we expected. We also continue to expand our AI training capacity in Q1 more than doubling our training compute sequentially.

(01:58)
In terms of the new product roadmap, there’s been a lot of talk about our upcoming vehicle line in the past several weeks. We’ve updated our future vehicle lineup to accelerate the launch of new models ahead of the previously mentioned start of production in the second half of 2025, so we expect it to be more like early 2025, if not late this year. These new vehicles, including more affordable models, will use aspects of the next-generation platform as well as aspects of our current platforms, and will be able to be produced on the same manufacturing lines as our current vehicle lineup. So it’s not contingent on any new factory or massive new production line. It’ll be made on our current production lines much more efficiently. And we think this should allow us to get to over 3 million vehicles of capacity when realized to the full extent.
Regarding FSD Version 12, which is the pure AI-based self-driving: if you haven’t experienced this, I strongly urge you to try it out. It’s profound. And the rate of improvement is rapid. We’ve now turned that on for all cars with the cameras and inference computer, everything from Hardware 3 on, in North America. So it’s been pushed out to, I think, around 1.8 million vehicles, and we’re seeing about half of people use it so far. And that percentage is increasing with each passing week.

(03:43)
So we now have over 300 million miles that have been driven with FSD V12. Since the launch of full self-driving, supervised full self-driving, it’s become very clear that the vision-based approach with end-to-end neural networks is the right solution for scalable autonomy. And it’s really how humans drive. Our entire road network is designed for biological neural nets and eyes, so naturally cameras and digital neural nets are the obvious solution for our current road system.
To make it more accessible, we’ve reduced the subscription price to $99 a month, so it’s easy to try out. And as we’ve announced, we’ll be showcasing our purpose-built robotaxi, or Cybercab, in August. Regarding AI compute: over the past few months, we’ve been actively working on expanding Tesla’s core AI infrastructure. For a while there, we were training-constrained in our progress. We are, at this point, no longer training-constrained, and so we’re making rapid progress. We’ve installed and commissioned, meaning they’re actually working, 35,000 H100 computers, or GPUs. GPU is the wrong word. They need a new word. I always wince a little when I say GPU, because the G stands for graphics and it doesn’t do graphics. But anyway, roughly 35,000 H100s are active, and we expect that to be probably 85,000 or thereabouts by the end of this year, just for training. We are making sure that we’re being as efficient as possible in our training. It’s not just about the number of H100s, but how efficiently they’re used. So in conclusion, we’re super excited about our autonomy roadmap and [inaudible 00:05:45] should be obvious to anyone who’s driving Version 12 in a Tesla that it is only a matter of time before we exceed the reliability of humans, and not much time at that. And we’re really headed for an electric vehicle, autonomous future. I go back to something I said several years ago: that in the future, gasoline cars that are not autonomous will be like riding a horse and using a flip phone. And that will become very obvious in hindsight. We continue to make the necessary investments that will drive growth and profits for Tesla in the future, and I want to thank the Tesla team for incredible execution during this period. We look forward to everything that we have planned ahead. Thanks.

Martin Viecha (06:37):Thank you very much. And Vaibhav has some comments as well.

Vaibhav Taneja (06:40):

Thanks. It’s important to acknowledge what Elon said. From our auto business perspective, we did see a decline in revenues quarter over quarter, primarily because of seasonality, an uncertain macroeconomic environment, and the other reasons which Elon mentioned earlier. Auto margins declined from 18.9% to 18.5%, excluding the impact of Cybertruck. The impact of pricing actions was largely offset by reductions in per-unit cost and the recognition of revenue from the Autopark feature for certain vehicles in the U.S. that previously did not have that functionality.
Additionally, while we did experience higher costs due to the ramp of Model 3 in Fremont and disruptions in Berlin, these costs were largely offset by cost-reduction initiatives. In fact, if we exclude Cybertruck, the Fremont Model 3 ramp costs, and the revenue from Autopark, auto margins improved slightly. Currently, normalized Model Y costs per vehicle in Austin and Berlin are already very close to those of Fremont.

Our ability to reduce costs without sacrificing quality was due to the amazing efforts of the team in executing Tesla’s relentless pursuit of efficiency across the business. We’ve also witnessed that as other OEMs pull back on their investments in EVs, there is increasing appetite for credits, and that means a steady stream of revenue for us. Obviously, seeing others pull back from EVs is not the future we want. We would prefer the whole industry went all in.

(08:25)
On the demand front, we have undertaken a variety of initiatives, including lowering the price of both the purchase and subscription options for FSD, launching extremely attractive leasing specials for the Model 3 in the U.S. for $299 a month, and offering attractive financing options in certain markets. We believe that our awareness activities, paired with attractive financing, will go a long way in expanding our reach and driving demand for our products.
Our energy business continues to make meaningful progress, with margins reaching a record 24.6%. We expect energy storage deployments for 2024 to grow at least 75% from 2023. Accordingly, this business will begin contributing significantly to our overall profitability. Note that there is a bit of lumpiness in our storage deployments due to a variety of factors that are outside of our control, so deployments may fluctuate quarter over quarter.

(09:24)
On the operating expense front, we saw a sequential increase from our AI initiatives, continued investment in future projects, marketing, and other activities. We had negative free cash flow of $2.5 billion in the first quarter. The primary drivers of this were an increase in inventory from a mismatch between builds and deliveries, as discussed before, and our elevated spend on CapEx across various initiatives, including AI compute. We expect the inventory build to reverse in the second quarter and free cash flow to return to positive again. As we prepare the company for the next phase of growth, we had to make the hard but necessary decision to reduce our headcount by over 10%. The savings generated are expected to be well in excess of $1 billion on an annual run-rate basis. We are also getting hyper-focused on CapEx efficiency and utilizing our installed capacity in a more efficient manner. The savings from these initiatives, including our cost reductions, will help improve our overall profitability and ultimately enable us to increase the scale of our investments in AI. In conclusion, the future is extremely bright, and the journey to get there, while challenging, will be extremely rewarding. Once again, I would like to thank the whole Tesla team for delivering great results, and we can open it up to Q&A.

Martin Viecha (10:55):Thank you. Okay, let’s start with investor Q&A. The first question is what is the status of 4680? What is the current output? Lars?

Speaker 1 (11:05):

Sure. 4680 production increased about 18% to 20% from Q4, reaching greater than 1K a week for Cybertruck, which is about seven gigawatt-hours per year, as we posted on X. We expect to stay ahead of the Cybertruck ramp with cell production throughout Q2 as we ramp the third of four lines in phase one, while maintaining multiple weeks of cell inventory to make sure we’re ahead of the ramp.
Because we’re ramping, COGS continues to drop rapidly week over week, driven by yield improvements throughout the lines and production volume increases. So our goal, and we expect to do this, is to beat supplier costs of nickel-based cells by the end of the year.

Martin Viecha (11:45):Thank you. The second question is on Optimus. So what is the current status of Optimus? Are they currently performing any factory tasks? When do you expect to start mass production?

Elon Musk (11:58):

We are able to do simple factory tasks, or at least, I should say, factory tasks in the lab. We do think we will have Optimus in limited production in the factory, in the actual factory itself, doing useful tasks before the end of this year. And then I think we may be able to sell it externally by the end of next year. These are just guesses. As I’ve said before, I think Optimus will be more valuable than everything else combined, because if you’ve got a sentient humanoid robot that is able to navigate reality and do tasks at request, there is no meaningful limit to the size of the economy. So that’s what’s going to happen. And I think Tesla is best positioned of any humanoid robot maker to be able to reach volume production with efficient inference on the robot itself. This perhaps is a point that is worth emphasizing: Tesla’s AI inference efficiency is vastly better than any other company’s. There’s no company even close to the inference efficiency of Tesla. We’ve had to do that because we were constrained by the inference hardware in the car. We didn’t have a choice. But that will pay dividends in many ways.

Martin Viecha (13:47):Thank you. The third question is, what is Tesla’s current assessment of the pathway towards regulatory approval for unsupervised FSD in the U.S., and how should we think about the appropriate safety threshold compared to human drivers?

Speaker 1 (14:03):

I can start. There are a handful of states that have already adopted autonomous vehicle laws. These states are paving the way for operations, while the data from such operations guides broader adoption of driverless vehicles. I think Ashok can talk a little bit about our safety methodology, but we expect that these states and the ongoing work, as well as the data that we’re providing, will pave the way for broad-based regulatory approval, in the U.S. at least, and then other countries as well.

Elon Musk (14:33):

Yeah. It’s actually been pretty helpful that other autonomous car companies have been cutting a path through the regulatory jungle. So that’s actually quite helpful. And they have obviously been operating in San Francisco for a while. I think they got approval for the City of LA, so these approvals are happening rapidly. I think if you’ve got, at scale, a statistically significant amount of data that shows conclusively that the autonomous car has, let’s say, half the accident rate of a human-driven car, that’s difficult to ignore, because at that point, stopping autonomy means killing people.
So I actually do not think that there will be significant regulatory barriers, provided there is conclusive data that the autonomous car is safer than a human-driven car. And in my view, this will be much like elevators. Elevators used to be operated by a guy with a relay switch, but sometimes that guy would get tired or drunk or just make a mistake and [inaudible 00:15:46] somebody in half between floors. So now we just get in an elevator and press a button. We don’t even think about it. In fact, it’s kind of weird if somebody’s standing there with a relay switch. That’ll be how cars work. You just summon a car using your phone, you get in, it takes you to a destination, you get out.

Martin Viecha (16:07):

You don’t even think about it.

Elon Musk (16:08):

You don’t even think about it. Just like an elevator: it takes you to your floor. That’s it. You don’t think about how the elevator is working or anything like that. And something I should clarify is that Tesla will be operating the fleet. So you can think of Tesla as some combination of Airbnb and Uber, meaning that there’ll be some number of cars that Tesla owns itself and operates in the fleet, and then there’ll be a bunch of cars that are owned by the end user. But that end user can add or subtract their car to the fleet whenever they want, and they can decide if they want to let the car be used only by friends and family, or only by five-star users, or by anyone. At any time, they could have the car come back to them and be exclusively theirs, like an Airbnb.
You could rent out your guest room or not, any time you want. So, as our fleet grows, from some 9 million cars today to eventually tens of millions of cars worldwide, with a constant feedback loop: every time something goes wrong, that gets added to the training data, and you get this training flywheel happening in the same way that Google Search has that sort of flywheel. It’s very difficult to compete with Google because people are constantly doing searches and clicking, and Google is getting that feedback loop. So, the same with Tesla, but at a scale that is maybe difficult to comprehend. Ultimately, it will be tens of millions of cars.

(18:12)
I think there’s also some potential here for an AWS element down the road, where we’ve got very powerful inference capability in the cars. We’ve got Hardware 3 in the cars, but now all cars are being made with Hardware 4. Hardware 5 is pretty much designed and should be in cars hopefully toward the end of next year. And there’s a potential, when the car is not moving, to actually run distributed inference.
So kind of like AWS, but distributed inference. It takes a lot of compute to train an AI model, but many orders of magnitude less compute to run it. So you can imagine a future, perhaps, where there’s a fleet of 100 million Teslas, and on average, they’ve got maybe a kilowatt of inference compute each. That’s 100 gigawatts of inference compute distributed all around the world.
It’s pretty hard to put together 100 gigawatts of AI compute. And even in an autonomous future where the car is, perhaps, instead of being used 10 hours a week, used 50 hours a week, that still leaves over 100 hours a week where the car’s inference computer could be doing something else. And it seems like it would be a waste not to use it.
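The back-of-envelope numbers above can be checked in a few lines. This is just a sketch of the illustrative figures from the call (100 million cars, roughly a kilowatt of inference compute each, 50 driving hours a week); none of these are confirmed Tesla specifications.

```python
# Sketch of the distributed-inference arithmetic from the call.
# All figures are the call's illustrative assumptions, not Tesla specs.

HOURS_PER_WEEK = 7 * 24  # 168 hours

def fleet_inference_gw(fleet_size: int, kw_per_car: float) -> float:
    """Total fleet inference compute in gigawatts (1 GW = 1e6 kW)."""
    return fleet_size * kw_per_car / 1e6

def idle_hours_per_week(driving_hours: float) -> float:
    """Hours per week the car's inference computer would sit unused."""
    return HOURS_PER_WEEK - driving_hours

print(fleet_inference_gw(100_000_000, 1.0))  # 100.0 gigawatts
print(idle_hours_per_week(50))               # 118, i.e. "over 100 hours a week"
```

Even at the high-utilization assumption of 50 driving hours a week, the idle time exceeds the driving time by more than a factor of two, which is the point Elon is making.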

Martin Viecha (19:52):Ashok, do you want to chime in on the process and safety?

Ashok (19:55):

Yeah. We have multiple years of validating the safety. In any given week, we train hundreds of neural networks that can produce different trajectories for how to drive the car, and we replay them through the millions of clips that we have already collected from our users and our own QA. These are critical events, like someone jumping out in front of the car, and other critical events that we have gathered in a database over many, many years, and we replay through all of them to make sure that we are net improving safety. We have simulation systems that also try to recreate this and test it in closed-loop fashion. And once some of this is validated, we give it to our QA drivers.
We have hundreds of them in different cities: San Francisco, Los Angeles, Austin, New York, a lot of different locations. They are also driving this and collecting real-world miles, and we have an estimate of what the critical events are and whether they are a net improvement compared to the previous week’s build. Once we have confidence that the build is a net improvement, we start shipping to early users, like 2,000 employees initially, who test the build and give feedback on whether it’s an improvement, or whether they’re noticing some new issues that we did not capture in our own QA process. And only after all of this is validated do we go to external customers.

(21:09)
And even when we go external, we have live dashboards monitoring every critical event that’s happening in the fleet, sorted by criticality. So we have a constant pulse on the build quality and the safety improvement along the way. And then for any failures, like Elon alluded to, we get the data back, add it to the training set, and that improves the model in the next cycle. So we have this constant feedback loop of issues, fixes, evaluations, and then rinse and repeat.

And especially with the new V12 architecture, all of this is automatically improving without requiring much engineering intervention, in the sense that engineers don’t have to be creative in how they code the algorithms. It’s mostly learning on its own based on data. So when you see a failure, or how a person chooses to drive through an intersection or something like that, we get the data back, add it to the neural network, and it learns from that training data automatically, instead of some engineer saying, oh, here, you must rotate the steering wheel by this much or something like that.
There are no hard-coded conditions. Everything is neural network. It’s pretty soft; it’s probabilistic, so it will adapt its probability distribution based on the new data that it’s getting.
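The weekly gating loop Ashok describes (train candidate builds, replay them against a bank of critical-event clips, and only promote builds that are a net safety improvement through successive rings of drivers) can be sketched roughly as follows. Every name, metric, and threshold here is invented for illustration; this is not Tesla’s actual release tooling.

```python
# Hypothetical sketch of the staged-release gating described above.
# A "build" is modeled as a predicate over replay clips; in reality the
# replay metric would aggregate many safety signals, not a single pass/fail.

def replay_score(build, critical_clips):
    """Fraction of critical-event clips the candidate build handles safely."""
    return sum(1 for clip in critical_clips if build(clip)) / len(critical_clips)

def staged_release(candidate, baseline_score, critical_clips, rings):
    """Ship only if the candidate is a net improvement over last week's build."""
    score = replay_score(candidate, critical_clips)
    if score <= baseline_score:
        return []  # rejected: not a net safety improvement, nothing ships
    # promote through successive rings: QA drivers -> employees -> customers
    return list(rings)

clips = ["cut_in", "pedestrian", "red_light", "debris"]
candidate = lambda clip: clip != "debris"  # handles 3 of 4 critical clips
rings = ["qa_drivers", "employees", "external_customers"]
print(staged_release(candidate, 0.5, clips, rings))
```

In this toy version a failing build simply ships to no one; the real process, as described, also feeds every fleet failure back into the training data for the next cycle.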

Elon Musk (22:20):

Yeah. And we do have some insight into how good things will be in, let’s say, three or four months, because we have advanced models that are far more capable than what is in the car but have some issues that we need to fix. So there’ll be a step-change improvement in the capabilities of the car, but it will have some quirks that need to be addressed in order to release it. As Ashok was saying, we have to be very careful in what we release to the fleet or to customers in general.
So if we look at, say, 12.4 and 12.5, which arguably could really even be called Version 13 and Version 14, because it’s pretty close to a total retrain of the neural nets, and in each case they are substantially different. So we have good insight into how well the car will perform in, say, three or four months.

Ashok (23:26):

Yeah. In terms of scaling laws, people in the community generally talk about model scaling laws, where they increase the model size a lot and there are corresponding gains in performance. But we have also figured out scaling laws on other axes, in addition to model size scaling: data scaling, for one. You can increase the amount of data you use to train the neural network, and that also gives similar gains. You can also scale up the training compute: you can train for much longer on more GPUs or more Dojo nodes, and that also gives better performance. And you can also have architecture scaling, where you come up with better architectures that, for the same amount of compute, produce better results. So with a combination of model size scaling, data scaling, training compute scaling, and architecture scaling, we can basically extrapolate: okay, with continued scaling at this rate, we can predict future performance.
Obviously, it takes time to do the experiments, because it takes a few weeks to train and a few weeks to collect tens of millions of video clips and process all of them, but you can estimate what the future progress is going to be based on the trends that we have seen in the past, and they’ve generally held true.
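The extrapolation Ashok describes, fitting past training runs and projecting forward, is in spirit a power-law fit in log-log space. Here is a toy sketch with made-up (compute, eval loss) points; the numbers have nothing to do with Tesla’s actual runs.

```python
import math

# Toy scaling-law extrapolation: fit loss ~ a * compute^(-b) to past runs
# by least squares in log-log space, then project a larger run.
# Data points are invented for illustration only.
runs = [(1e20, 2.0), (4e20, 1.6), (1.6e21, 1.28)]  # (training compute, eval loss)

xs = [math.log(c) for c, _ in runs]
ys = [math.log(l) for _, l in runs]
n = len(runs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
b = -slope                    # power-law exponent (positive: loss falls with compute)
log_a = ybar - slope * xbar   # intercept in log space

def predicted_loss(compute: float) -> float:
    """Project eval loss for a run of the given training compute."""
    return math.exp(log_a + slope * math.log(compute))

# Project a run 4x larger than the biggest so far. In the toy data each
# 4x of compute multiplied loss by 0.8, so we expect about 1.28 * 0.8.
print(round(predicted_loss(6.4e21), 3))  # 1.024
```

The same log-log fit works whichever axis you scale (data, compute, or model size), which is why trend lines from past runs let you predict a future run before it finishes.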

Martin Viecha (24:40):Okay. Thank you very much. I’ll go to the next question, which is, can we get an official announcement of the time line for the $25,000 vehicle?

Speaker 1 (24:49):

I think Elon mentioned it in the opening remarks. But as he mentioned, we’re updating our future vehicle lineup to accelerate the launch of our low-cost vehicles in a more capex-efficient way. That’s our mission: to get the most affordable cars to customers as fast as possible. These new vehicles will be built on our existing lines and open capacity, and that’s a major shift to utilize all our capacity with marginal capex before we go spend high capex [inaudible 00:25:16].

Elon Musk (25:17):

We’ll talk about this more on August 8, but really, the way to think of Tesla is almost entirely in terms of solving autonomy and being able to turn on that autonomy for a gigantic fleet. And I think it might be the biggest asset value appreciation in history when that day happens, when you can do unsupervised full self-driving.

Speaker 1 (25:40):5 million cars.

Elon Musk (25:41):

Yeah. [inaudible 00:25:42]. It will be 7 million cars in a year or so, then 10 million, and then eventually we’re talking about tens of millions of cars. Not even eventually; within the decade, it’s several tens of millions of cars, I think.

Martin Viecha (26:09):Thank you. The next question is, what is the progress on the Cybertruck ramp?

Speaker 1 (26:15):

I can take that one, too. Cybertruck hit 1K a week just a couple of weeks ago. This happened in the first four to five months since start of production late last year. Of course, volume production is what matters. That’s what drives costs, and so our costs are dropping, but the ramp still faces a lot of challenges with so many new technologies, some supplier limitations, et cetera. We’ll continue to ramp this year, focusing on cost efficiency and quality.

Martin Viecha (26:40):Okay. Thank you. The next question, have any of the legacy automakers contacted Tesla about possibly licensing FSD in the future?

Elon Musk (26:51):We’re in conversations with one major automaker regarding licensing FSD.

Martin Viecha (26:57):

Thank you. The next question is about the robotaxi. Elon already talked about that. So, we’ll have to wait till August.
The following question is about the next-generation vehicle. We already talked about that. So, let’s go to the Semi. What is the timeline for scaling the Semi?

Speaker 1 (27:20):

So, we’re finalizing the engineering of the Semi to enable super cost-effective, high-volume production, with learnings from our pilot fleet and Pepsi’s fleet, which we are expanding marginally this year. In parallel, as we showed in the shareholder deck, we have started construction on the factory in Reno. Our first vehicles are planned for late 2025, with external customers starting in 2026.

Martin Viecha (27:48):Okay. A couple more questions. So, our favorite, can we make FSD transfer permanent until FSD is fully delivered with Level 5 autonomy?

Speaker 1 (27:57):No.

Martin Viecha (27:58):Okay. Next question. What is the status of the production ramp at Lathrop, and where do you see the Megapack run rate at the end of the year? Mike?

Mike (28:12):

Yeah. Lathrop is ramping as planned. We have our second GA line allowing us to increase our exit rate from 20 gigawatt hours per year at the start of this year to 40 gigawatt hours per year by the end of the year. That line’s commissioned.
There’s really nothing limiting the ramp. Given the longer sales cycles for these large projects, we typically have order visibility 12 to 24 months prior to ship dates, so we’re able to plan the build several quarters in advance. This allows us to ramp the factory to align with business and order growth.
Lastly, we’d like to thank our customers globally for their trust in Tesla as a partner for these incredible projects.

Martin Viecha (28:53):

Okay, thank you very much. Let’s go to analyst questions. The first question comes from Toni Sacconaghi from Bernstein. Toni, please go ahead and unmute.