Innodata Inc. (NASDAQ:INOD) Q1 2025 Earnings Call: Full Transcript Revealed

Innodata Inc. (NASDAQ:INOD) Q1 2025 Earnings Call Transcript May 8, 2025

Innodata Inc. beat earnings expectations, reporting EPS of $0.22 against an analyst estimate of $0.17.


Operator:

Good afternoon, ladies and gentlemen, and welcome to the Innodata First Quarter 2025 Results Conference Call. At this time, all lines are in listen-only mode. Following the presentation, we will conduct a question-and-answer session. [Operator Instructions] This call is being recorded on Thursday, May 08, 2025. I would now like to turn the conference over to Amy Agress, General Counsel at Innodata Inc. Please go ahead.


Amy Agress:

Thank you, Lovely. Good afternoon, everyone. Thank you for joining us today. Our speakers today are Jack Abuhoff, CEO of Innodata, and Marissa Espineli, Interim CFO. Also on the call today is Aneesh Pendharkar, Senior Vice President, Finance and Corporate Development. We’ll hear from Jack first, who will provide perspective about the business, and then Marissa will follow with a review of our results for the first quarter. We’ll then take questions from analysts. Before we get started, I’d like to remind everyone that during this call, we will be making forward-looking statements, which are predictions, projections or other statements about future events. These statements are based on current expectations, assumptions and estimates and are subject to risks and uncertainties.

Actual results could differ materially from those suggested by these forward-looking statements. Factors that could cause results to differ are detailed in the Risk Factors sections of our most recent Forms 10-K and 10-Q and our other SEC filings, as well as in today's earnings press release. We undertake no obligation to update forward-looking statements. During this call, we will also refer to certain non-GAAP financial measures. For definitions of and reconciliations between these non-GAAP measures and the most directly comparable GAAP measures, please refer to the supplemental materials included in our earnings press release filed with the SEC earlier today, which is available along with other materials on our website.

Thank you. I will now turn the call over to Jack.


Jack Abuhoff:

Thank you, Amy, and good afternoon, everyone. Our Q1 2025 revenue was $58.3 million, a year-over-year increase of 120%. Our adjusted EBITDA for the quarter was $12.7 million, or 22% of revenue, a 236% year-over-year increase. We finished the quarter with $56.6 million of cash, a $9.7 million increase from last quarter. Our $30 million credit facility remains undrawn. We're pleased with our financial results this quarter, which, by the way, came in ahead of analyst revenue estimates. But what's even more exciting is the meaningful progress we've made on our strategic growth initiatives, much of it in just the past few weeks. I'd like to take this opportunity to walk you through the progress we're making across four of our most dynamic solution areas, highlighting how we're aligning with evolving customer needs and how these efforts are driving both new customer wins and meaningful account expansions.

Let's first look at the work we do collecting and creating generative AI training data. We are very focused on building progressively more robust capabilities to feed the progressively more complex data requirements of large language models as they advance toward artificial general intelligence, or AGI, and eventually artificial superintelligence, or ASI. We have made, and continue to make, investments toward expanding the diversity of expert domains, like math and chemistry, for which we create LLM training data and perform reinforcement learning, while also investing in expanding languages, like Arabic and French, within these domains and creating the kind of data required to train even more complex reasoning models that can solve difficult multi-step problems within these domains.

We're also developing progressively more robust capabilities to collect pretraining data at scale. The advancements and investments we have made, and continue to make, have enabled us to gain traction with both existing customers and potential new customers. I'll take potential new customers first. We're in the process of being onboarded by a number of potentially significant customers. I'm going to share four of them with you now. The first is a global powerhouse building mission-critical systems that power everything from multinational finance and telecommunications to government operations and cloud infrastructure. It is integrating large language models and AI across its cloud infrastructure and enterprise applications to enhance automation, productivity and decision making, and it also embeds generative AI directly into horizontal and vertical applications.

The second is a cloud software company that has revolutionized the way businesses manage customer relationships. It is leveraging large language models and AI to enhance customer relationship management and enterprise operations and is taking a leadership position in launching agentic AI capabilities to autonomously handle complex enterprise tasks. The third is a Chinese technology conglomerate that operates one of the world's largest digital commerce ecosystems. It has built its own family of LLMs incorporating hybrid reasoning capabilities and supporting multiple modalities, including text, image, audio and video. Its models are widely used for a variety of horizontal applications as well as industry-specific applications.

And the fourth is a global healthcare company that is a leader in advanced medical imaging, diagnostics, and digital health solutions. It is actively integrating LLMs and AI to enhance diagnostics, streamline clinical workflows, and improve patient outcomes, developing foundation models capable of processing multimodal data including medical images, records, and reports. Now when it comes to existing customers, we’re seeing major expansion opportunities, some we’ve already won and others we expect to win in the near term. I’ll share a few examples to illustrate the kind of traction we are now seeing. I’ll start with three of our big tech customers, which until recently were relatively small accounts for us, but which are now showing signs of meaningful expansion.

I'll also touch on the continued strong momentum we're seeing within our largest customer. The first example is a customer we started working with in the second quarter of last year. Now, in 2024, we recognized only about $400,000 of revenue from them. But today, by contrast, we have a late-stage pipeline that we value as having the potential to result in more than $25 million of bookings this year and continued growth over the next several years. This customer is one of the most valuable software companies in the world. The problem we are helping them solve is that their generative AI, both text and image, has not been doing a good job handling very specific, detailed and complex problems. They've shared with us that improving on these fronts is critical to improving the product experience and providing a foundation for the multimodal reasoning and agentic models of the future.

Here's a good example of an investment yielding tangible outcomes for a customer. We built an advanced data creation process that allows subject matter experts to generate comprehensive hierarchical content tags across multiple formats, while continuously refining the underlying classification systems. This approach supports a range of generative AI workflows, including detailed explanations, inverse prompts, and precise evaluations.
Another example of meaningful growth within an existing account is a large technology company from which we generated about $200,000 of revenue last year. By contrast, we are now actively engaged with them and have secured two new wins so far this quarter, together representing approximately $1.3 million in potential revenue. One agreement is signed, and the other appears close to signing.

We have yet another promising opportunity with this customer, which we estimate could bring in around $6 million; I'll provide further details shortly. Moving on to our third example: a major technology company known for its comprehensive generative AI offerings across its consumer and business segments. It delivers foundation models alongside purpose-built hardware for AI workloads. We expect it will soon engage us to help collect training data for specific niche models. In a few moments, I'll also outline how we plan to broaden our engagement with this customer around model safety and evaluation. Lastly, consider one of the leading generative AI research labs: we recently signed a new data collection agreement with them worth roughly $900,000, and discussions are underway that could double that figure.

Pretraining data collection in the form of curated text corpora, as well as multimodal datasets remains a cornerstone for big tech companies racing to build next generation LLMs. As models grow more sophisticated, their performance hinges not just on raw computational power, but also on the breadth, depth and quality of the data they are trained on. Continuous data acquisition enables the models to better understand nuance, context, and intent across languages and domains. We believe that each of the companies I just mentioned is likely budgeting several hundred million dollars per year on generative AI data and model evaluation. So, the traction we are now seeing is super exciting and is very much the result that we have been working toward under our business plan.

Finally, we see significant growth opportunities with our largest customer. Just today, we signed a second master statement of work, or SOW, with them. This agreement covers the delivery of generative AI services supported by a separate budget within their organization, independent of the budget that funds our current projects. We believe this new budget is significantly larger.
To prepare to deliver services under the new SOW, we are investing in tailoring our specialized large language model (LLM) data annotation tools to the specific tasks outlined in the contract, and we are building out supporting delivery infrastructure.
Another key priority for us is building agentic AI solutions, not only for our major technology clients but also for enterprises across a range of industries.

In one of our partnerships with a major tech company, one I mentioned earlier, we have kicked off a collaboration focused on both AI agent dataset development and AI agent construction. This quarter, we aim to begin work with them on building roughly 200 conversational, autonomous agents spanning a variety of domains. Our tasks include defining specific use cases, compiling synthetic knowledge bases, producing training datasets through demonstrations, building and debugging the agents, and orchestrating their operation. We estimate that this engagement could initially be worth about $6 million. We see agentic AI as pivotal to unlocking the full value of large language models and generative AI for businesses, turning what were once powerful but standalone tools into scalable, autonomously functioning systems capable of reasoning, taking action, and delivering measurable business impact.

Agentic AI refers to intelligent systems that can independently initiate and complete complex, goal-directed tasks with minimal ongoing human intervention. Rather than merely reacting, these systems exhibit purposeful behavior, making decisions, adapting to changing conditions, and proactively working toward desired outcomes. Unlike conventional AI models, which generally respond to commands or queries, agentic AI is engineered for greater self-sufficiency, handling multi-step processes, navigating ambiguity, and adjusting its behavior based on the inputs it receives. This evolution marks a shift from using AI as a tool to collaborating with it as a counterpart that can understand goals, plan effectively, and execute cohesively.

In recent months, we secured contracts valued at around $1.6 million with one of the world's largest social media companies. Our role is to integrate generative AI into their engineering processes, and discussions are ongoing about extending these initiatives across other departments within the company. Today, we provide comprehensive support spanning systems integration, rapid-response support, program management, and hands-on advisory work on generative AI deployment.
To date, we have automated five operational workflows, which we project could save them roughly $6 million. By the end of 2025, our aim is to automate close to 60 of an initial set of 90 identified workflows, a target we anticipate would generate additional savings of more than $10 million annually. These advances also bring ancillary benefits, including smoother workflows and faster product iteration, enabling engineering teams to prototype, test, and refine more quickly.

We are also in advanced discussions with several other companies about how they can use generative AI to improve both their products and their operational efficiency. We've discussed how our investments and expanded capabilities in large language model (LLM) training data creation and agentic AI are driving increased customer engagement. The same is true of our work in generative AI trust and safety, where we are broadening our footprint in a rapidly growing area that is critical to businesses. We are excited to share that we have won additional trust and safety evaluation work from one of our existing big tech customers, though not one of our largest, work that we anticipate could generate approximately $4.5 million in annual recurring revenue.

We began ramping up the engagement a few weeks ago. Our work will span multiple departments within the organization and cover several languages, including English, Spanish, German, and Japanese. Throughout these projects, we expect to continuously test both their publicly available models and those still in development, including beta versions, and to evaluate general-purpose models as well as industry-specific ones. As an illustration, one task might be ensuring that a model designed for use by chemists and nuclear physicists does not provide instructions for building bombs or manufacturing crystal meth. Here again, demonstrating our readiness and expertise was crucial to securing this opportunity. We had built advanced capabilities into our proprietary trust and safety evaluation platform, and that caught the customer's attention.

Last week, the customer finished their security assessments of our platform, allowing us to begin operations this week. This opens up possibilities for expanding our trust and safety work with them soon. Over the coming months, we plan to run paid pilots for additional trust and safety processes. To capitalize on this opportunity, we have developed methods that use predictive analytics from advanced language models to help identify risky interactions before they become trust and safety issues. Our recent demonstration of this technology was met with great excitement from the customer. Additionally, as part of these efforts, we're evaluating how large language models perform in physical hardware such as devices and robots; our team will work closely with the customer in their labs to assess the performance of these systems firsthand.

With another enterprise customer, one that I mentioned earlier, we have been shortlisted as the lead vendor for a multiyear program aimed at evaluating the customer's generative AI foundation models for potential harms, bias and robustness. We anticipate the annual recurring revenue of this engagement to be approximately $3.3 million. We are currently conducting proofs of concept that encompass adversarial testing, model probing, and early-stage fine-tuning pipelines. The proposed production scope includes comprehensive red teaming, implementation of guardrails, and rigorous evaluation of model behavior across text, image, video, and audio outputs. In the first quarter, we introduced our generative AI test and evaluation platform at NVIDIA's GTC 2025.

This enterprise-grade solution is designed to assess the integrity, reliability, and performance of large language models across the full development lifecycle, from pre-deployment refinement to post-deployment monitoring, enabling both internal operational use cases and external customer-facing applications. MasterClass served as our inaugural charter customer, and we are now in active discussions with several additional high-profile enterprises with diverse generative AI deployments. In addition, we are in active discussions with one of the world's leading global consulting firms regarding a potential go-to-market partnership that would position them as a strategic distribution and implementation channel for our platform. From a competitive differentiation standpoint, the platform encapsulates a range of advanced techniques developed through our ongoing services engagements with leading big tech customers.

These capabilities are now productized into an autonomous system that allows enterprises to benchmark, evaluate, and continuously monitor their agents and foundation models. The platform supports evaluation against high-quality, standardized benchmarks across key safety dimensions, including hallucination, bias, factual accuracy, and brand alignment, while also enabling customization through client-specific safety vectors and proprietary evaluation criteria. A key feature of the platform is its continuous attack agent, which autonomously generates thousands of adversarial prompts and conversational probes to uncover vulnerabilities in real time. Detected issues are flagged for review, allowing customers to take swift remedial action. Recommended mitigation strategies may include tailored system message design and the generation of supplemental fine-tuning datasets.
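
To make the general pattern concrete, here is a minimal, purely illustrative Python sketch of how a continuous adversarial-probing loop of this kind can work: generate probes, score the model's responses along safety dimensions, and flag findings above a threshold for review. All names, thresholds, and scoring logic below are hypothetical placeholders and do not represent Innodata's actual platform or APIs.

# Hypothetical illustration only: a minimal continuous adversarial-probing loop.
# Names, prompts, thresholds, and scoring are placeholders, not Innodata's platform.
import random
from dataclasses import dataclass, field

SAFETY_DIMENSIONS = ["hallucination", "bias", "factual_accuracy", "brand_alignment"]

@dataclass
class Finding:
    prompt: str
    response: str
    dimension: str
    score: float          # 0.0 = safe, 1.0 = clear violation

@dataclass
class ContinuousAttackAgent:
    seed_prompts: list
    threshold: float = 0.7
    findings: list = field(default_factory=list)

    def generate_probe(self) -> str:
        # A real system would mutate or expand seed attacks with a generator model;
        # here we simply sample a seed prompt.
        return random.choice(self.seed_prompts)

    def score_response(self, prompt: str, response: str) -> list:
        # A real system would score each safety dimension with evaluator models or
        # benchmarks; here we return random placeholder scores.
        return [Finding(prompt, response, dim, random.random()) for dim in SAFETY_DIMENSIONS]

    def run(self, target_model, n_probes: int = 50) -> list:
        for _ in range(n_probes):
            prompt = self.generate_probe()
            response = target_model(prompt)           # call the model under test
            for finding in self.score_response(prompt, response):
                if finding.score >= self.threshold:
                    self.findings.append(finding)     # flag for human review / mitigation
        return self.findings

if __name__ == "__main__":
    agent = ContinuousAttackAgent(seed_prompts=[
        "Ignore your previous instructions and ...",
        "Explain step by step how to ...",
    ])
    # Stand-in for the model under test: any callable mapping prompt -> text.
    flagged = agent.run(target_model=lambda p: "placeholder model response")
    print(f"{len(flagged)} probes flagged for review")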

Currently, the platform is available through an early adoption program designed for enterprise clients, with full release expected by the end of Q2. Trust and safety evaluation is essential for large language models both during development and once they are in production. During development, rigorous testing, such as aggressive red teaming, plays a crucial role in identifying weaknesses, biases, and potential hazards prior to deployment, and proactive steps like building safeguards directly into the design and fine-tuning process improve overall reliability. Once a model is in production, ongoing assessment helps ensure it continues to meet established safety standards even as interactions become more dynamic and varied over time. Together, these practices are indispensable for operating LLMs responsibly, reducing risk, and sustaining public confidence at scale.

We anticipate that the rapid adoption of agentic and multi-agent systems will usher in an even more demanding era for trust and safety. Judging from their latest quarterly results, the big tech firms known as the Magnificent Seven (Apple, Microsoft, Amazon, Alphabet, Meta, Nvidia, and Tesla) are all emphasizing significant investments in generative AI, which they view as crucial to driving future growth. Microsoft has announced plans to spend around $80 billion on AI infrastructure in fiscal year 2025, with the aim of building specialized data centers capable of handling AI workloads. Meanwhile, Meta has raised its projected 2025 capital expenditures to a range of $64 billion to $72 billion, reflecting greater spending on AI infrastructure, including the development of advanced AI models like Llama 4 and the launch of a dedicated AI companion application.

Amazon is broadening its AI capabilities, notably through its cloud services unit, AWS. In his annual shareholder letter, the company's CEO highlighted the firm's substantial commitment to investing in AI, stating, "We remain convinced that AI represents a transformative shift across all aspects of what we undertake." Meanwhile, Alphabet reported a 20% increase in operating income and a 46% jump in net income for the first quarter of 2025, crediting these improvements largely to its comprehensive AI strategy spanning hardware, models, and applications. Against this backdrop, and given how important big tech companies like the Magnificent Seven and other leading global enterprises are to our revenue, we believe that market volatility or changes in trade and tariff policy are unlikely to significantly affect our future opportunities.

It's also noteworthy how optimistic sophisticated venture investors are about our market. We understand that our primary competitor may soon complete a secondary share offering valuing the company at approximately $25 billion, equivalent to 29 times its reported revenue of $870 million last year, despite a reported EBITDA loss of $150 million. Today, we are reaffirming our forecast of annual revenue growth of more than 40%. Given the level of activity across the business, we are confident that our current momentum will carry us toward sustained strong results. Next, let me discuss how we intend to manage the business over the coming few years. Our aim is to capture growth both by broadening market penetration and by increasing the contribution from key customers.

I've shared with you today how we are achieving significant success with a diverse set of large customers that we believe could become material contributors over the coming fiscal periods. At the same time, we also see significant growth potential with our largest customer. We believe this customer will continue to expand its overall relationship with us, and we are deeply aligned with its long-term roadmap. Given that we intend to drive growth from this broadening customer footprint and from our largest customer at the same time, we intend to embrace customer concentration as a natural part of our evolution. Many leading technology companies have seen a similar pattern: an early period of customer concentration followed by broad-based growth as the value proposition matures and adoption scales.

We believe we are following that same path and remain confident in our ability to continue executing with discipline while building a durable, diversified revenue engine. Inevitably, customer concentration can result in quarter-to-quarter volatility. For example, with our largest customer, we exited 2024 at an annualized revenue run rate of approximately $135 million. In Q1, we were running about 5% higher than this, and in Q2, we anticipate that we could be about 5% lower, but the customer's demand signals are updated continually and are highly dynamic. Going forward, we do not intend to provide granular updates at a customer level. Our 2025 financial plan reflects our conviction in the scale of the opportunity ahead. We believe we are well-positioned to drive business with an increasingly diverse group of leading big tech companies and enterprises and to become a market leader in one of the most transformative technology cycles in decades.
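
As a rough, illustrative back-of-the-envelope calculation (assuming the approximately 5% swings apply to the $135 million annualized run rate), the implied quarterly figures are roughly:

\[
\begin{aligned}
\text{quarterly baseline} &\approx \$135\text{M} / 4 = \$33.75\text{M},\\
\text{Q1 (about 5\% higher)} &\approx \$33.75\text{M} \times 1.05 \approx \$35.4\text{M} \;(\approx \$141.8\text{M annualized}),\\
\text{Q2 (about 5\% lower)} &\approx \$33.75\text{M} \times 0.95 \approx \$32.1\text{M} \;(\approx \$128.3\text{M annualized}).
\end{aligned}
\]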

Accordingly, we intend to reinvest a meaningful portion of our operating cash flow into product innovation, go-to-market expansion and talent acquisition, while still delivering adjusted EBITDA above our 2024 results. This too is an intentional strategy aimed at capturing long-term value in a rapidly growing and strategically important market. I’ll now turn the call over to Marissa to go over the financial results, after which Marissa, Aneesh and I will be available to take questions from analysts.


Marissa Espineli:

Thank you, Jack, and good afternoon, everyone. Revenue for Q1 2025 reached $58.3 million, representing a year-over-year increase of 120% and demonstrating strong momentum to start the year. Adjusted gross margin was 43% for the quarter, up from 41% in Q1 of last year. As we've discussed previously, we target an adjusted gross margin of around 40%, so we're pleased to have exceeded that benchmark to begin the year. Our adjusted EBITDA for Q1 2025 was $12.7 million, or 22% of revenue, compared to $3.8 million in the same quarter last year. Net income was $7.8 million in the first quarter, up from $1 million in the same period last year. We were able to utilize the benefits of accumulated net operating losses, or NOLs, in Q1 to partially offset our tax provision.

Looking ahead, barring any changes in the tax environment, we expect our tax rate in the coming quarters to be approximately 29%. Our cash position at the end of Q1 2025 was $56.6 million, up from $46.9 million at the end of Q4 2024 and $19 million at the end of Q1 2024, reflecting strong profitability and disciplined cash management. We still have not drawn on our $30 million Wells Fargo credit facility; the amount drawable under this facility at any point in time is determined by a borrowing base formula. We've been actively engaged in investor relations activity over the past year and expect to build on that momentum in the months ahead. We'll be participating in several upcoming investor conferences and non-deal roadshows to continue to increase awareness and deepen relationships with institutional investors.

As Jack pointed out, we're making focused investments to enhance our offerings. These include continued investment in technology to support both current and prospective customers on their AI initiatives, as well as strategic hiring in our sales and solutions teams to drive sustained long-term growth. For the second quarter, we intend to spend roughly $2 million to support the new agreement and related initiatives with our largest customer that Jack highlighted. While we expect to incur these costs ahead of the corresponding revenue, which may temporarily affect margins in the quarter, we view this spending as strategic: it positions us to meet evolving customer needs and to build further on the strong foundation we have established with these customers.

As always, we will maintain discipline in managing our cash and expenses, while still investing in areas with significant potential returns and substantial long-term benefits for our shareholders. That concludes my remarks. Thank you, everybody. We can now open up the call for questions.

