Meta Platforms, Inc. (META)

First Quarter 2024 Results Conference Call

April 24th, 2024

Ken Dorell, Director, Investor Relations

Thank you. Good afternoon and welcome to Meta Platforms' first quarter 2024 earnings conference call. Joining me today to discuss our results are Mark Zuckerberg, CEO, and Susan Li, CFO.

Before we get started, I would like to take this opportunity to remind you that our remarks today will include forward‐looking statements. Actual results may differ materially from those contemplated by these forward‐looking statements.

Factors that could cause these results to differ materially are set forth in today's earnings press release, and in our annual report on form 10-K filed with the SEC. Any forward‐looking statements that we make on this call are based on assumptions as of today and we undertake no obligation to update these statements as a result of new information or future events.

During this call we will present both GAAP and certain non‐GAAP financial measures. A reconciliation of GAAP to non‐GAAP measures is included in today's earnings press release. The earnings press release and an accompanying investor presentation are available on our website at investor.fb.com.

And now, I'd like to turn the call over to Mark.

Mark Zuckerberg, CEO

Thanks Ken, and hey everyone -- thanks for joining.

It's been a good start to the year -- both in terms of product momentum and business performance. We estimate that more than 3.2 billion people use at least one of our apps each day, and we're seeing healthy growth in the US. I want to call out WhatsApp specifically, where the number of daily actives and message sends in the US keeps gaining momentum and I think we're on a good path there. We've also made good progress on our AI and metaverse efforts, and that's where I'm going to focus most of my comments today.

So let's start with AI. We're building a number of different AI services, from Meta AI, our AI assistant that you can ask any question across our apps and glasses, to creator AIs that help creators engage their communities and that fans can interact with, to business AIs that we think every business eventually on our platform will use to help customers buy things and get customer support, to internal coding and development AIs, to hardware like glasses for people to interact with AIs, and a lot more.

Last week we had the major release of our new version of Meta AI that is now powered by our latest model, Llama 3. And our goal with Meta AI is to build the world's leading AI service both in quality and usage.

The initial rollout of Meta AI is going well. Tens of millions of people have already tried it. The feedback is very positive, and when I first checked in with our teams, the majority of feedback we were getting was people asking us to release Meta AI for them, wherever they are. We've started launching Meta AI in some English-speaking countries, and we'll roll out in more languages and countries over the coming months.

You all know our product development playbook by this point. We release an early version of a product to a limited audience to gather feedback and start improving it, and then once we think it's ready, we make it available to more people. That early release was last fall, and with this release we're now moving to the next growth phase of our playbook. We believe that Meta AI with Llama 3 is now the most intelligent AI assistant that you can freely use. And now that we have the superior quality product, we're making it easier for lots of people to use it within WhatsApp, Messenger, Instagram, and Facebook.

Now in addition to answering more complex queries, a few other notable and unique features from this release: Meta AI now creates animations from still images, and now generates high quality images so fast that it can create and update them as you're typing, which is pretty awesome. I've seen a lot of people commenting about this experience online and how they've never seen or experienced anything like it before.

In terms of the core AI model and intelligence that's powering Meta AI, I'm very pleased with how Llama 3 has come together so far. The 8B and 70B parameter models that we released are best-in-class for their scale. The 400+B parameter model that we're still training seems on track to be industry-leading on several benchmarks. And I expect that our models are just going to improve further from open source contributions.

Overall, I view the results our teams have achieved here as another key milestone in showing that we have the talent, data, and ability to scale infrastructure to build the world's leading AI models and services. And this leads me to believe that we should invest significantly more over the coming years to build even more advanced models and the largest scale AI services in the world. As we're scaling capex and energy expenses for AI, we'll continue focusing on operating the rest of our company efficiently. But realistically, even with shifting many of our existing resources to focus on AI, we'll still grow our investment envelope meaningfully before we make much revenue from some of these new products.

I think it's worth calling out that we've historically seen a lot of volatility in our stock during this phase of our product playbook -- where we're investing in scaling a new product but aren't yet monetizing it. We saw this with Reels, with Stories, as News Feed transitioned to mobile, and more. And I also expect a multi-year investment cycle before we've fully scaled Meta AI, business AIs, and more into the profitable services I expect them to become. Historically, investing to build these new scaled experiences in our apps has been a very good long-term investment for us and for investors who have stuck with us. And the initial signs are quite positive here too. But building the leading AI will also be a larger undertaking than the other experiences we've added to our apps, and this is likely going to take several years.

On the upside, once our new AI services reach scale, we have a strong track record of monetizing them effectively. There are several ways to build a massive business here, including scaling business messaging, introducing ads or paid content into AI interactions, and enabling people to pay to use bigger AI models and access more compute. And on top of those, AI is already helping us improve app engagement, which naturally leads to seeing more ads, and improving ads directly to deliver more value. So if the technology and products evolve in the way that we hope, each of those will unlock massive amounts of value for people and for our business over time.


We are seeing good progress on some of these efforts already. Right now, about 30% of the posts on Facebook feed are delivered by our AI recommendation system. That's up 2x over the last couple of years. And for the first time ever, more than 50% of the content people see on Instagram is now AI recommended. AI has also always been a huge part of how we create value for advertisers by showing people more relevant ads, and if you look at our two end-to-end AI-powered tools, Advantage+ Shopping and Advantage+ App Campaigns, revenue flowing through those has more than doubled since last year.

We're also going to continue to be very focused on efficiency as we scale Meta AI and other AI services. Some of this will come from improving how we train and run models. Some improvements will come from the open source community -- where improving cost efficiency is one of the main areas I expect that open sourcing will help us improve, similar to what we saw with Open Compute. We'll also keep making progress on building more of our own silicon. Our Meta Training and Inference Accelerator chip has successfully enabled us to run some of our recommendations-related workloads on this less expensive stack, and as this program matures over the coming years we plan to expand this to more of our workloads as well. And of course as we ramp these investments, we will also continue to carefully manage headcount and other expense growth throughout the company.

In addition to our work on AI, our other long term focus is the metaverse. It's been interesting to see how these two themes have come together.

This is clearest when you look at glasses. I used to think that AR glasses wouldn't really be a mainstream product until we had full holographic displays -- and I still think that will be awesome and is the mature state of the product. But now it seems pretty clear that there's also a meaningful market for fashionable AI glasses without a display. Glasses are the ideal device for an AI assistant because you can let them see what you see and hear what you hear, so they have full context on what's going on around you as they help you with whatever you're trying to do. Our launch this week of Meta AI with Vision on the glasses is a good example where you can now ask questions about things you're looking at.

One strategy dynamic that I've been reflecting on is that an increasing amount of our Reality Labs work is going towards serving our AI efforts. We currently report on our financials as if Family of Apps and Reality Labs were two completely separate businesses, but strategically I think of them as fundamentally the same business: the vision of Reality Labs is to build the next generation of computing platforms in large part so that we can build the best apps and experiences on top of them. Over time, we'll need to find better ways to articulate the value that's generated here across both segments, so it doesn't just seem like our hardware costs increase as our glasses ecosystem scales while all the value flows to a different segment.

The Ray-Ban Meta glasses that we built with EssilorLuxottica continue to do well and are sold out in many styles and colors, so we're working to make more and release additional styles as quickly as we can. We just released the new cat-eye Skyler design yesterday, which is more feminine, and in general, I'm optimistic about our approach of starting with classics and expanding with an increasing diversity of options over time. If we want everyone to be able to use wearable AI, I think eyewear is a bit different from phones or watches in that people are going to want very different designs. So I think our approach of partnering with leading eyewear brands will help us serve more of the market.

I think a similar open ecosystem approach will help us expand the mixed reality headset market over time as well. We announced that we're opening up Meta Horizon OS, the operating system we've built to power Quest. As the ecosystem grows, I think there will be sufficient diversity in how people use mixed reality that there will be demand for more designs than we'll be able to build. For example, a work-focused headset may be designed less for motion and could be made lighter by tethering to your laptop. A fitness-focused headset may be lighter with sweat-wicking materials. An entertainment-focused headset may prioritize the highest-resolution displays over everything else. A gaming-focused headset may prioritize peripherals and haptics, or there may be a device that comes with Xbox controllers and a Game Pass subscription out of the box.

To be clear, I think that our first-party Quest devices will continue to be the most popular headsets as we see today, and we'll continue focusing on advancing the state-of-the-art tech and making it accessible to everyone. But I also think that opening our ecosystem and opening our operating system will help grow the ecosystem even faster.

In addition to AI and the metaverse, we're seeing good improvements across our apps. I touched on some of the most important trends with WhatsApp growth in the US and AI-powered recommendations in our feeds and Reels already, but I wanted to mention that video continues to be a bright spot. This month we launched an updated full-screen video player on Facebook that brings together Reels, longer videos, and live content into a single experience with a unified recommendation system. On Instagram, Reels and video continue to drive engagement, with Reels alone now making up 50% of the time that's spent within the app.

Threads is growing well too. There are now more than 150M monthly actives, and it continues to generally be on the trajectory I hoped to see. My daughters would want me to mention that Taylor Swift is now on Threads -- that was a big deal in my house.

Okay, that's what I wanted to cover today. I'm proud of the progress we've made so far this year. We've got a lot more execution ahead of us to fulfill these opportunities. A big thank you to our teams driving all these advances, and to all of you for being on this journey with us. And now, here is Susan.

Susan Li, CFO

Thanks Mark and good afternoon everyone.

Let's begin with our consolidated results. All comparisons are on a year-over-year basis unless otherwise noted.

Q1 total revenue was $36.5 billion, up 27% on both a reported and constant currency basis.

Q1 total expenses were $22.6 billion, up 6% compared to last year.

In terms of the specific line items:

Cost of revenue increased 9%, as higher infrastructure-related costs were partially offset by lapping Reality Labs inventory-related valuation adjustments.

R&D increased 6%, driven mostly by higher headcount-related expenses and infrastructure costs, which were partially offset by lower restructuring costs.


Marketing & Sales decreased 16%, due mainly to lower restructuring costs, professional services and marketing spend.

G&A increased 20%, as higher legal-related expenses were partially offset by lower restructuring costs.

We ended the first quarter with over 69,300 employees, up 3% from Q4.

First quarter operating income was $13.8 billion, representing a 38% operating margin.
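As a quick sanity check of the consolidated figures above (my arithmetic, not part of the prepared remarks), the stated margin and operating income are internally consistent up to rounding:

```python
# Illustrative check of the consolidated Q1 figures quoted above.
# All figures are in billions of dollars, as stated on the call.
revenue = 36.5
total_expenses = 22.6
operating_income = 13.8

# Revenue minus expenses lands within rounding of the stated operating income.
assert abs((revenue - total_expenses) - operating_income) < 0.15

# Operating margin: 13.8 / 36.5 is about 37.8%, which rounds to the stated 38%.
margin_pct = round(operating_income / revenue * 100)
print(margin_pct)  # 38
```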

Our tax rate for the quarter was 13%.

Net income was $12.4 billion or $4.71 per share.

Capital expenditures, including principal payments on finance leases, were $6.7 billion, driven by investments in servers, data centers and network infrastructure.

Free cash flow was $12.5 billion. We repurchased $14.6 billion of our Class A common stock and paid $1.3 billion in dividends to shareholders, ending the quarter with $58.1 billion in cash and marketable securities and $18.4 billion in debt.

Moving now to our segment results.

I'll begin with our Family of Apps segment.

Our community across the Family of Apps continues to grow, with approximately 3.2 billion people using at least one of our Family of Apps on a daily basis in March.

Q1 Total Family of Apps revenue was $36.0 billion, up 27% year over year.

Q1 Family of Apps ad revenue was $35.6 billion, up 27% or 26% on a constant currency basis.

Within ad revenue, the online commerce vertical was the largest contributor to year-over-year growth, followed by gaming and entertainment and media.

On a user geography basis, ad revenue growth was strongest in Rest of World and Europe at 40% and 33%, respectively. Asia-Pacific grew 25% and North America grew 22%.

In Q1, the total number of ad impressions served across our services increased 20% and the average price per ad increased 6%. Impression growth was mainly driven by Asia-Pacific and Rest of World.

Pricing growth was driven by advertiser demand, which was partially offset by strong impression growth, particularly from lower-monetizing regions and surfaces.
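As an illustrative check (my arithmetic, not the call's), these two drivers compound multiplicatively to roughly the reported 27% Family of Apps ad revenue growth:

```python
# Illustrative: impression growth and price growth compound multiplicatively,
# so +20% impressions and +6% average price per ad imply roughly +27% revenue.
impression_growth = 0.20
price_growth = 0.06

revenue_growth_pct = round(((1 + impression_growth) * (1 + price_growth) - 1) * 100)
print(revenue_growth_pct)  # 27
```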

Family of Apps other revenue was $380 million in Q1, up 85%, driven by business messaging revenue growth from our WhatsApp Business Platform.

We continue to direct the majority of our investments toward the development and operation of our Family of Apps. In Q1, Family of Apps expenses were $18.4 billion, representing approximately 81% of our overall expenses. Family of Apps expenses were up 7%, due mainly to higher legal and infrastructure costs that were partially offset by lower restructuring costs.

Family of Apps operating income was $17.7 billion, representing a 49% operating margin.

Within our Reality Labs segment, Q1 revenue was $440 million, up 30%, driven by Quest headset sales. Reality Labs expenses were $4.3 billion, down 1% year-over-year, as higher headcount-related expenses were more than offset by lapping inventory-related valuation adjustments and restructuring costs.

Reality Labs operating loss was $3.8 billion.

Turning now to the business outlook. There are two primary factors that drive our revenue performance: Our ability to deliver engaging experiences for our community, and our effectiveness at monetizing that engagement over time.

On the first - we remain pleased with engagement trends and have strong momentum across our product priorities.

Our investments in developing increasingly advanced recommendation systems continue to drive incremental engagement on our platforms, demonstrating that people are finding added value by discovering content from accounts they're not connected to. The level of recommended content in our apps has scaled as we've improved these systems, and we see further opportunity to increase the relevance and personalization of recommendations as we advance our models.

Video also continues to grow across our platform, and it now represents more than 60% of time on both Facebook and Instagram. Reels remains the primary driver of that growth, and we're progressing on our work to bring together Reels, longer-form video and Live video into one experience on Facebook. In April, we rolled out this unified video experience in the US and Canada, which is increasingly powered by our next generation ranking architecture that we expect will help deliver more relevant video recommendations over time.

We're also introducing deeper integrations of generative AI into our apps in the US and more than a dozen other countries. Along with using Meta AI within our chat surfaces, people will now be able to use Meta AI in Search within our apps, as well as Feed and Groups on Facebook. We expect these integrations will complement our social discovery strategy as our recommendation systems help people to discover and explore their interests while Meta AI enables them to dive deeper on topics they're interested in.

Threads also continues to see good traction as we continue to ship valuable features and scale the community.

Now to the second driver of our revenue performance: increasing monetization efficiency. There are two parts to this work.

The first is optimizing the level of ads within organic engagement. Here, we continue to advance our understanding of users' preferences for viewing ads to more effectively optimize the right time, place and person to show an ad to. For example, we are getting better at adjusting the placement and number of ads in real time, based on our perception of a user's interest in ad content and to minimize disruption from ads, as well as innovating on new and creative ad formats. We expect to continue that work going forward, while surfaces with relatively lower levels of monetization, like video and messaging, will serve as additional growth opportunities.

The second part of improving monetization efficiency is enhancing marketing performance. Similar to our work with organic recommendations, AI is playing an increasing role in these efforts.

First, we are making ongoing ads modeling improvements that are delivering better performance for advertisers. One example is our new ads ranking architecture, Meta Lattice, which we began rolling out more broadly last year. This new architecture allows us to run significantly larger models that generalize learnings across objectives and surfaces in place of numerous, smaller ads models that have historically been optimized for individual objectives and surfaces. This is not only leading to increased efficiency as we operate fewer models, but also improving ad performance.

Another way we're leveraging AI is to provide increased automation for advertisers. Through our Advantage+ portfolio, advertisers can automate one step of the campaign set up process - such as selecting which ad creative to show - or automate their campaign completely using our end-to-end automation tools, Advantage+ Shopping and Advantage+ App ads. We're seeing growing use of these solutions, and we expect to drive further adoption over the course of the year while applying what we learn to our broader ads investments.

Next, I'd like to discuss our approach to capital allocation.

We continue to see compelling investment opportunities to both improve our core business in the near-term and capture significant longer-term opportunities in generative AI and Reality Labs. As we develop more advanced and compute-intensive recommendation models and scale capacity for our generative AI training and inference needs, we expect that having sufficient infrastructure capacity will be critical to realizing many of these opportunities. As a result, we expect that we will invest significantly more in infrastructure over the coming years.

Our other long-term initiative that we're continuing to make significant investments in is Reality Labs. We are also starting to see our AI initiatives increasingly overlap with our Reality Labs work. For example, with Ray-Ban Meta smart glasses, people in the US and Canada can now use our multimodal Meta AI assistant for daily tasks without pulling out their phone. Longer-term, we expect generative AI to play an increasing role in our mixed reality products, making it easier to develop immersive experiences. Accelerating our AI efforts will help ensure we can provide the best version of our services as we transition to the next computing platform.

We expect to pursue these opportunities while maintaining a focus on operating discipline, and believe our strong financial position will allow us to support these investments while also returning capital to shareholders through share repurchases and dividends.

In addition, we continue to monitor an active regulatory landscape, including the increasing legal and regulatory headwinds in the EU and the US that could significantly impact our business and our financial results. We also have a jury trial scheduled for June in a suit brought by the State of Texas regarding our use of facial recognition technology, which could ultimately result in a material loss.

Turning now to the revenue outlook.


We expect second quarter 2024 total revenue to be in the range of $36.5-39 billion. Our guidance assumes foreign currency is a 1% headwind to year-over-year total revenue growth, based on current exchange rates.

Turning now to the expense outlook.

We expect full-year 2024 total expenses to be in the range of $96-99 billion, updated from our prior outlook of $94-99 billion due to higher infrastructure and legal costs. For Reality Labs, we continue to expect operating losses to increase meaningfully year-over-year due to our ongoing product development efforts and our investments to further scale our ecosystem.

Turning now to the capex outlook.

We anticipate our full-year 2024 capital expenditures will be in the range of $35-40 billion, increased from our prior range of $30-37 billion as we continue to accelerate our infrastructure investments to support our AI roadmap. While we are not providing guidance for years beyond 2024, we expect capex will continue to increase next year as we invest aggressively to support our ambitious AI research and product development efforts.

On to tax. Absent any changes to our tax landscape, we expect our full-year 2024 tax rate to be in the mid-teens.

In closing, Q1 was a good start to the year. We're seeing strong momentum within our Family of Apps and are making important progress on our longer-term AI and Reality Labs initiatives that have the potential to transform the way people interact with our services over the coming years.

With that, Krista, let's open up the call for questions.

Operator:

Thank you. We will now open the lines for a question and answer session. To ask a question, please press star one on your touch tone phone. To withdraw your question, again press star one. Please limit yourself to one question. Please pick up your handset before asking your question to ensure clarity. If you are streaming today's call, please mute your computer speakers. And your first question comes from the line of Eric Sheridan from Goldman Sachs. Please go ahead.

Eric Sheridan:

Thank you so much for taking the questions. Maybe I'll ask a two-parter. Mark, you used the analogy of other investment cycles you've been through around products like Stories and Reels. I know you're not giving long-term guidance today, but using those analogies, how should investors think about the length and depth of this investment cycle with respect to either AI and/or Reality Labs more broadly and mixed reality?

And you both talked about the impact AI is having on the advertising ecosystem. What are you watching for in terms of adoption or utility on the consumer side to know that AI adoption is tracking along with the investment cycle? Thank you.


Mark Zuckerberg:

Yes. In terms of the timing, I think it's somewhat difficult to extrapolate from previous cycles. But I guess like the main thing that we see is that we will usually take, I don't know, a couple of years, could be a little more, could be less, to focus on building out and scaling the products.

And we typically don't focus that much on monetization of the new areas until they reach significant scale because it's so much higher leverage for us just to improve monetization on other things before these new products are at scale.

So you enter this period where I think kind of smart investors see that the product is scaling and that there's a clear monetizable opportunity there even before the revenue materializes. And I think we've seen that with Reels and with Stories and with the shift to mobile and all these things, where basically, we build out the inventory first for a period of time and then we monetize it. And during that time, when it's scaling, it's -- sometimes it's not just the case that we're not making money from that thing. It can often actually be the case that it displaces other revenue from other things. So like you saw with Reels, I mean it scaled and there was a period where it was not profitable for us as it was scaling before it became profitable. So I think that's more the analogy that I'm making on this.

But I think what that suggests is that what we should all be focused on for the next period is how the consumer products scale. Meta AI really just launched in a meaningful way, so we don't have any kind of hard stats to share on that. But I'd say the main thing that I'm focused on for this year and probably a lot of next year is growing that product and the other AI products and the engagement around them. And I think we should all have quite a bit of confidence that if those are on a good track to scale, then they're going to end up being very large businesses. So that's the main point that I was trying to make there.

Operator:

Your next question comes from the line of Brian Nowak from Morgan Stanley. Please go ahead.

Brian Nowak:

Great, thanks for taking my questions. I have two. The first one is on sort of the recommendation engine improvements and even, Susan, when you talked about further opportunities to increase the relevance of the models. Could you just unpack that a little bit for us? Can you give us examples of where you're still running the model on a suboptimal basis, or opportunities for improved signal capture use, or data you're not using? Where are sort of the areas of improvement you see from here?

And then the second one, when you talk about driving incremental adoption of AI tools for advertisers, what are sort of some of the main gating factors you've encountered to get advertisers to test these tools? And how do you think about sort of addressing that throughout '24 and '25? Thanks.

Susan Li:

Thanks, Brian. So to your first question, where are there more opportunities for us to leverage and improve our recommendations models to drive engagement.

One of the things I would say is, historically, each of our recommendation products, including Reels, in-feed recommendations, et cetera, has had its own AI model. And recently, we've been developing a new model architecture with the aim for it to power multiple recommendations products. We started partially validating this model last year by using it to power Facebook Reels. And we saw meaningful performance gains, 8% to 10% increases in watch time, as a result of deploying this. This year, we're actually planning to extend the singular model architecture to recommend content across not just Facebook Reels, but also Facebook's video tab as well. So while it's still too early to share specific results, we're optimistic that the new model architecture will unlock increasingly relevant video recommendations over time. And if it's successful, we'll explore using it to power other recommendations.

An analog exists, I would say, on the ads side. We've talked a little bit about the new model architecture, Meta Lattice, that we deployed last year that consolidates smaller and more specialized models into larger models that can better learn what characteristics improve ad performance across multiple services, like Feed and Reels, and multiple types of ads and objectives at the same time. And that's driven improved ad performance over the course of 2023 as we deployed it across Facebook and Instagram to support multiple objectives. And over the course of 2024, we expect to further enhance model performance and include support for even more objectives like web and app and ROAS. So there's a lot of work that we're investing in, in the underlying model architecture for both organic engagement and ads that we expect is going to continue to deliver increasing ads performance over time.

The second question you asked was around getting advertisers to test and adopt GenAI tools. There are two flavors of this. The more near-term version is around the GenAI ad creative features that we have put into our ads creation tools. And it's early, but we're seeing adoption of these features across verticals and different advertiser sizes. In particular, we've seen outsized adoption of image expansion with small businesses, and this will remain a big area of focus for us in 2024, and I expect that


Disclaimer

Meta Platforms Inc. published this content on 25 April 2024 and is solely responsible for the information contained therein. Distributed by Public, unedited and unaltered, on 25 April 2024 06:52:11 UTC.