
E2E Networks Ltd (E2E) Q3 2025 Earnings Call Transcript

E2E Networks Ltd (NSE: E2E) Q3 2025 Earnings Call dated Jan. 22, 2025

Corporate Participants:

Tarun Dua, Managing Director

Megha Raheja, Whole Time Director and Chief Financial Officer

Analysts:

Soumya Chhajed, Analyst

Unidentified Participant

Amar Maurya, Analyst

Ashwin Kedia, Analyst

Garvit Goyal, Analyst

Abhishek Shindadkr, Analyst

Hardik Gandhi, Analyst

Sumit Jaiswal, Individual Investor

Hardik Sathya, Individual Investor

Presentation:

Operator

Ladies and gentlemen, good day and welcome to the E2E Networks Limited Q3 FY25 earnings conference call hosted by Go India Advisors. As a reminder, all participant lines will be in listen-only mode and there will be an opportunity for you to ask questions after the presentation concludes. Should you need assistance during the conference call, please signal an operator by pressing star then zero on your touchtone phone.

I now hand the conference over to Ms. Soumya from Go India Advisors. Thank you. And over to you.

Soumya Chhajed, Analyst

Thank you, Steve. Good evening everyone. We welcome you to the E2E Networks Limited Q3 FY25 results conference call. We have with us on the call today Mr. Tarun Dua, Managing Director; Ms. Megha Raheja, Whole Time Director and Chief Financial Officer; and Mr. Ronald Gabbar, the Company Secretary. I must remind you that the discussion on today's call may include certain forward-looking statements and must be viewed in conjunction with the risks that the company may face.

I now request Mr. Tarun Dua to take us through the company’s business and financial highlights subsequent to which we’ll open the floor for Q and A. Thank you. And over to you, sir.

Tarun Dua, Managing Director

Thank you, Soumya. Good afternoon everyone. Thank you for joining us for the E2E Networks Q3 FY25 earnings call. I hope you are all doing very well. We are happy to share our journey and provide an update on the steps that we are taking to sustain our growth and strengthen our position in the cloud, GPU and AI/ML workload space.

As you are all aware, last quarter we achieved strong operational momentum driven by key advancements deployed in our cloud infrastructure. Our cumulative deployments have now grown to nearly 700 H100 GPUs, 256 H200 GPUs and around 700 non-H100/non-H200 GPUs. We also successfully raised another round of more than a thousand crores from L&T for expanding our accelerated cloud infrastructure and focusing on the next generation of cloud GPUs and GPU clusters. This quarter, one of the most important developments for us has been the deepening of our strategic partnership with L&T, a collaboration that marks a significant milestone in our journey towards revolutionizing AI and cloud infrastructure in India.

With L&T's deep expertise in data center management and our advanced AI compute infrastructure and cloud experience, both companies can jointly offer robust and scalable solutions to enterprises, government and other organizations. This collaboration will open new revenue streams and drive growth in high-demand AI services, and it will also improve our operational efficiency, leading to enhanced profitability and revenue. Together, we will leverage each other's strengths to capture a larger market share and create a more powerful presence in the AI and cloud sector. Each quarter we continue to stay ahead of the curve by embracing the latest advancements in cutting-edge GPU technology. In line with our growth plan, we are also expanding our data center capacity from 4.2 megawatts to nearly 10.2 megawatts, a move supported by our recent fundraise.

We continue to innovate at a rapid pace to expand our battle-tested, scalable, high-performance cloud infrastructure for education, research, enterprises, government and startups. The surge in demand for cloud and AI services has helped us achieve strong growth, and we have strengthened our position in the market by leveraging our advanced AI cloud platform. We are committed to driving India's digital transformation, empowering government and enterprise initiatives and positioning India as a global leader in AI and cloud innovation.

Our platform supports a wide range of cloud-native services today, including CPU and GPU environments, virtual machines, native containers and serverless architectures. We offer flexible, high-performance storage solutions across object storage, block storage and parallel file systems, and we also offer advanced networking, load balancing, firewalls, relational database services, NoSQL database services and vector data services. We have continued to develop our AI/ML platform, TIR, which is designed for data scientists and developers to streamline their AI/ML workloads such as training, inference and model endpoint deployment. With an early-mover advantage in the AI/ML space since 2020, we continue to offer a superior price-to-performance ratio, helping customers scale without any long-term commitment to our cloud. As the AI market, especially generative AI, rapidly expands, we are positioned to help businesses capitalize on AI's transformative potential, filling the demand-supply gaps in India and supporting the country's digital evolution.

Now I would like to hand over our call to CFO, Megha, who will briefly touch upon the financial and operational highlights of the quarter under review. Over to you Megha.

Megha Raheja, Whole Time Director and Chief Financial Officer

Thank you, Tarun, and good afternoon everyone. Let me first start by giving you some of the key financial highlights as I summarize the performance of Q3 FY25. For Q3 FY25, total revenue stood at INR416 million, which witnessed substantial growth of 73.7% on a year-on-year basis. EBITDA for the quarter is INR246 million, which represents growth of around 119% on a year-on-year basis. The EBITDA margin for the current quarter is 59%, which demonstrates an expansion of around 1,230 basis points year on year. PAT is reported at INR116 million, demonstrating growth of 108% on a year-on-year basis. The PAT margin for the December quarter is 27.8% and diluted EPS is 7.03 for the quarter, which is around an 86.5% year-on-year increase.
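A rough cross-check of these stated figures (a sketch, not part of the call; the prior-year values are back-calculated from the quoted growth rates, so small rounding differences are expected):

```python
# Back-calculating prior-year figures from the growth rates quoted on the call
# (illustrative only; all values in INR million).
revenue_q3_fy25 = 416.0
ebitda_q3_fy25 = 246.0
pat_q3_fy25 = 116.0

revenue_q3_fy24 = revenue_q3_fy25 / 1.737   # implied by ~73.7% YoY revenue growth
ebitda_q3_fy24 = ebitda_q3_fy25 / 2.19      # implied by ~119% YoY EBITDA growth

margin_fy25 = ebitda_q3_fy25 / revenue_q3_fy25 * 100   # ~59.1%, matching the ~59% stated
margin_fy24 = ebitda_q3_fy24 / revenue_q3_fy24 * 100   # ~46.9%
bps_expansion = (margin_fy25 - margin_fy24) * 100       # ~1,220 bps, broadly consistent with the ~1,230 bps stated

pat_margin = pat_q3_fy25 / revenue_q3_fy25 * 100        # ~27.9%, close to the 27.8% stated
print(f"EBITDA margin: {margin_fy25:.1f}% (up ~{bps_expansion:.0f} bps YoY)")
print(f"PAT margin: {pat_margin:.1f}%")
```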

Now a quick comparison with the last quarter, i.e., the September 2024 quarter: we witnessed a total revenue decline of 12.6% on a quarter-on-quarter basis, from INR476 million to INR416 million in the current quarter. EBITDA for the current quarter is INR246 million as against INR314 million in the previous quarter. As a result, PAT for the current quarter is INR116 million, exhibiting a quarter-on-quarter decline of 4.3% from Q2 FY25. While our overall outlook on AI compute infrastructure in all forms remains very strong, there has been some dip in the revenue numbers as compared to the previous quarter. The decline in revenue and growth in the current quarter is mainly due to churn and downscaling of training deployments, which are, as you know, bursty in nature, and the revenue impact of deprovisioning tended to be concentrated in this quarter due to our smaller scale. We anticipate the effects of bursty training workloads will eventually become muted at the larger scale at which we intend to build our cloud GPU computing infrastructure, and we anticipate a resurgence in demand for advanced AI solutions over the medium term.

Now an important update as well. During 2024, we raised a total of INR14,849 million through a preferential issue of equity shares. Out of this, we have utilized INR1,508 million till Q3 FY25 and we have a balance of INR13,349 million as of December 31, 2024.
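A quick arithmetic check on the fundraise utilization as stated (a sketch using only the figures quoted above; the small residual is presumably rounding or transcription):

```python
# Fundraise utilization, using the figures quoted above (INR million).
total_raised = 14_849.0          # preferential issue during 2024
utilised_till_q3_fy25 = 1_508.0
stated_balance = 13_349.0        # as of December 31, 2024

implied_balance = total_raised - utilised_till_q3_fy25   # 13,341.0
print(f"Implied balance: {implied_balance:,.0f} vs stated {stated_balance:,.0f} "
      f"(gap of {stated_balance - implied_balance:,.0f}, likely rounding)")
```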

That concludes the update for the quarter, and now we can open the floor for the question-and-answer session.

Questions and Answers:

Operator

Thank you very much. We will now begin the question-and-answer session. Anyone who wishes to ask a question may press star and one on their touchtone telephone. If you wish to remove yourself from the question queue, you may press star and two. Participants are requested to use handsets while asking a question. In order to ensure that the management is able to answer questions from all participants, please limit your questions to two per participant. If you have any further questions, you may please fall back in the question queue. Ladies and gentlemen, we will wait for a moment while the question queue assembles. The first question is from the line of ASTHA from PK Day Advisors. Please go ahead.

Unidentified Participant

Hi, am I audible?

Operator

Yes, ma’am.

Unidentified Participant

Yeah, so ma'am, the first question is: what was our capex deployed in Q3 FY25, and how much capex do we plan to do ahead in Q1, Q2 and Q3 of FY26?

Megha Raheja

So we have deployed INR2,017 million, that is around 201 crores.

Unidentified Participant

And for the next year, ma’am.

Tarun Dua

So we are very agile in terms of how we deploy the capex. We anticipate that, based majorly on the pipeline, we will evaluate and do just-in-time deployment as much as possible over the next several quarters. So the outlook is not based on a plan; practically, it is only at the end of the quarter that we can look back at what capex was done. It will be based on the demand outlook that we come up with.

Unidentified Participant

Okay. Okay. So my second question is, I understand the answer was given earlier, but I missed it. I wanted to understand what was the reason for a drop in quarter on quarter revenue.

Tarun Dua

Sure. We currently operate at a fairly small scale, and compared to that, some of our larger customers' deployments were quite large. Our goal obviously is to build a much larger infrastructure spread across a much larger footprint of larger customers. Now, training workloads by their very nature are bursty, which means you typically cannot predict at what time a training run gets over, or when a particular data science group decides that its training needs are over for now. Unfortunately, all of these were concentrated during Q3, and we have also seen somewhat muted demand towards the end of the quarter. December typically is a slow month, and we have not been able to bring that unutilized infrastructure back to a higher utilization rate. So that is the current status today.

Unidentified Participant

Okay, got it.

Tarun Dua

So obviously in the medium term we expect that as the footprint of customers grows larger, the effect of one or two large customers scaling down their training needs should not impact us as much. That is subject to the growth of our overall scale.

Unidentified Participant

Understood. Okay, that’s it. Thank you.

Operator

Thank you. The next question is from the line of Amar Maurya from Lucky Investments. Please go ahead.

Amar Maurya

Yeah, am I audible too? Hello?

Tarun Dua

Yes. Yes Amar, we are able to hear you.

Amar Maurya

Hi sir, a couple of questions from my side. First, if you can help us understand this L&T tie-up: how is it going to help us in the long run in terms of gaining some large enterprise clients? That is number one. Number two, in terms of our current capacity, I think we have something around 4.2 megawatts. So what is the revenue potential of this capacity, and by when will we be reaching that kind of peak utilization level? And within that, how much of the percentage would be enterprise versus, you know, retail? These are two questions.

Tarun Dua

These are three questions, actually. Okay, sure. So your first question is about the L&T tie-up and how it would help us in the long run. See, obviously, when a small company ties up with a big company, you get the advantage of the size, scale and connections that the bigger company brings in, and also their own capabilities in terms of technology as well as infrastructure. So over here we obviously intend to leverage all of those things. It's quite early days; it has been practically, I think, less than a few weeks.

So basically we have started interlocking, started working together and jointly exploring opportunities with customers. In the long run, I believe that this interlock in terms of jointly exploring opportunities would help both businesses, each bringing its own strengths to the table. And we'll obviously be pursuing a lot more of what we internally call long-cycle business opportunities, compared to working with SMEs or smaller companies. The long-cycle business opportunities are long term in nature and more rewarding in terms of lifetime value, whereas smaller companies make decisions fast; that has been the mainstay of our business till now.

I think that natural change will come over the medium term, where we start working with larger customers in conjunction with our partners, including L&T. And I think that is one of the major changes we are going to see: pursuing more enterprise business apart from what we continue to do with SMEs, startups and the other organizations we have served so far.

Now, in terms of megawatt capacity, that's a measure of data center capacity, of how much IT load you can put in. Typically these are fairly well-known, point-in-time numbers in the industry. What one megawatt pertains to today would vary between, say, $25 million and $50 million across a spectrum from a smaller player to a large hyperscaler, so each megawatt would represent different revenue levels for different players; it could vary anywhere between, let's say, $10 million and $50 million. And it is also dependent on the kind of workloads.

Now, regarding the percentage of enterprise versus retail, I think it would be very hard to predict today, but my assumption is that wherever in the industry there are large revenues and large profit pools, those players are the ones who invest the maximum amount of money into compute.

So what I believe is that there would be a shift from smaller companies to bigger companies in terms of the overall concentration of our revenue over the medium and long term. It would be very hard to say how the percentages would vary, but the kind of capabilities we are building today are very focused on meeting the requirements of larger enterprises.
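Reading the per-megawatt range quoted above as revenue potential gives a rough, purely illustrative envelope for E2E's own capacity plans (the call does not specify the revenue period or where E2E sits on that spectrum, so the assumptions below are illustrative, not company guidance):

```python
# Illustrative revenue-potential range implied by the quoted $10M-$50M per megawatt,
# applied to the data center capacity figures mentioned on the call. Assumes the
# per-MW figure is annual revenue per MW of IT load; not company guidance.
capacities_mw = {"current": 4.2, "planned": 10.2}
rev_per_mw_low, rev_per_mw_high = 10e6, 50e6   # USD per MW, as quoted

for label, mw in capacities_mw.items():
    low, high = mw * rev_per_mw_low, mw * rev_per_mw_high
    print(f"{label}: {mw} MW -> ${low / 1e6:.0f}M to ${high / 1e6:.0f}M potential revenue")
```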

Amar Maurya

Thank you sir, I’ll come back.

Tarun Dua

Sure, sure.

Operator

The next question is from the line of Ashwin Kedia from Alchemy Capital Management. Please go ahead.

Ashwin Kedia

Tarun, can you hear me? Thank you for the time. Just one question: how has your revenue percentage changed from training to inference quarter on quarter, and how do you expect that to change moving forward over the next year?

Tarun Dua

See, the major concentration of the revenue obviously has been training and will continue to be training. I think training will continue to play a very major role for the foreseeable future. That being said, the impact of large bursty training workloads is what we have seen this quarter, but as we scale up, I think that effect will be less and less pronounced.

So I think it's a function of us being in the very early days of our journey today, which is what we are seeing; I think that is probably what you are trying to ask. The other part is that, over the medium and long term, I believe the ratio should eventually settle at something more like 50:50, or maybe 60:40 in favor of training.

Ashwin Kedia

And what is your inference revenue today? Percentage of your total revenue?

Tarun Dua

We don't really track it separately today, because there is a very nebulous line between building training infrastructure for foundational models built from the ground up versus foundational models being fine-tuned to various degrees. The graduation from training to inference is a whole range, so you can't really classify cleanly what constitutes training versus inference. Inference and production would continue to remain smaller, even in the global context. Ultimately it is training; today all of AI is in its early phase.

So for the foreseeable future, training is going to constitute the majority of revenue, not just for us but across the cloud ecosystem overall.

Ashwin Kedia

Okay, thank you.

Operator

Thank you. The next question is from the line of Garvit Goyal from Nvest Analytics Advisory. Please go ahead.

Garvit Goyal

Am I audible?

Tarun Dua

Yes, yes.

Garvit Goyal

Can you help me understand our geographical concentration, particularly in the data center business? North America is the biggest market for data centers, and if interest rates remain elevated, hyperscalers like Meta and Microsoft may scale down their capex, which in turn would impact our business. So can you put some color on that? Is that likely to happen, or otherwise, what is your take on it?

Tarun Dua

So I can't really comment on what large global players are doing. From whatever all of us are hearing in the industry news, investment into AI is definitely going up. Now, regarding data centers themselves, we rely on data centers only in India. We have one facility in the north in the Delhi NCR region, and we are in the process of establishing a second facility in the Greater Chennai area. So geographically, from an infrastructure point of view, we are concentrated in India.

From a customer point of view, the customers we tend to serve are primarily in India. Over the medium and long term, we will obviously continue to look for opportunities to expand out of India into other markets as well, and this may not necessarily be on the public cloud; these opportunities could also be in the form of software, services and support.

Garvit Goyal

Okay. Thank you.

Operator

The next question is from the line of Ketan Kapasi from Taurus Investments. Please go ahead.

Unidentified Participant

Hello.

Operator

Yes, you’re audible. Please go ahead.

Unidentified Participant

Yeah. Sir, what will be the impact on the company of this new GPU rule proposed by the US government?

Tarun Dua

In the short term, I don't think there would be any immediate impact. One, of course, there is a period of more than a quarter before these regulations come into effect. Secondly, based on what India is doing today in terms of overall volume, we are not really hitting the limits that are being placed, and there are certain exceptions which we have not fully studied. For example, there are exceptions around certain end users consuming up to 1,700 GPUs not being counted in the overall country limit, and companies based in the US itself, when they bring their infrastructure to India, potentially not being counted either.

So overall, our belief is that the impact should be assessed over about a two-year time frame rather than on an immediate basis.

Unidentified Participant

Thanks.

Operator

Does that answer your question?

Unidentified Participant

Yes, yes, yes.

Operator

The next question is from the line of Keshav from Nimesha. Please go ahead.

Unidentified Participant

Yeah, hi, thanks for the opportunity. I hope I’m audible. Hello.

Tarun Dua

Yes, please go ahead.

Unidentified Participant

Yeah, so I can see that your non-H100s and non-H200s have seen an increase of 100 GPUs. So is it that the demand for the H200 and H100 is currently not much, and that is why you have deployed these?

Tarun Dua

The demand environment continues to remain strong, so the pipeline is there, obviously, for both H100 as well as H200. What we are potentially missing is the immediate closures that we were seeing in the past. So I think the sales cycles have grown; I don't think the demand has gone away, I think the sales cycles have become longer. That is what we are seeing.

Unidentified Participant

Okay, so the H100s and H200s, are they completely utilized or are they underutilized as of now?

Tarun Dua

No, no, they are underutilized as of today.

Unidentified Participant

So how much would that be in percentage terms, if you have that number?

Tarun Dua

It's a fluctuating number, because with a lot of workloads in the cloud it's very hard to measure at a point in time. But we have reasonable capacity on both sides, both being consumed as well as being made available to customers.

Unidentified Participant

Okay. And while we are sitting on a pile of cash, we see that the utilization has only been about 150 crores. So how much time do you think the funds will take to be utilized, or how long is the capex cycle?

Tarun Dua

It is hard to commit to spending that money in a market where high-tech equipment like GPUs and CPUs keeps getting upgraded versions. It is very important not to build a lot of inventory of what would potentially get superseded by a newer version. Although these are all long-cycle products and they will sell for seven, eight years, it is also important to have dry powder for the latest version. So that has always been our strategy: to always be able to invest in every version.

So we will continue to evaluate the demand and try to keep utilization high, while not leaving so little capacity idle that we are unable to service new demand. It's a balance we have to strike, and I think that balance will get established. Future capex cycles will continue to get established, and when we look back we will be able to say with certainty what capex was spent, as opposed to trying to predict it today.

So we are reactive, not predictive, and that is what we'll continue to do. In terms of the overall outlook, for both the medium term and the long term we continue to see that AI's transformative potential and its adoption are definitely picking up.

Unidentified Participant

Okay, got it. So my last question is: Nvidia has recently launched Digits, I guess, their new product, if I'm recollecting the name correctly. So is that a threat to the cloud GPU business, since the device is really small…

Tarun Dua

In the compute market you have to understand that there will be multiple approaches to the various problems. Nvidia would obviously have seen gaps in terms of what can be serviced from the cloud for micro units. See, edge devices have always existed; if you look at edge compute, that has always existed. Now they have built a slightly more powerful edge machine. It does not take away the need for the data center in any way. I think this would be one more thing that adds to the ecosystem; I don't think it's a zero-sum game. Whatever you are able to do on the edge, there would still be further workloads on the cloud.

Unidentified Participant

Okay, so technically like what is the difference? If you could explain it a bit.

Tarun Dua

See, there's a whole range of devices that do a bit of AI, and this is a slightly more powerful edge device. What you can do in the cloud is a lot more than what you can do on the edge. It's not just about being able to do an inference or a training run in isolation; you also need access to vector DBs, relational databases and a lot more storage, and you need redundancy, reliability and security. So there are certain things you can do on your laptop, certain things on your mobile, certain things on workstations, certain things on a very powerful device meant for one particular organization or business unit, and there would be a lot more things that you will continue to do on the cloud. So it's not really comparable in terms of a straightaway movement of workloads from one to the other.

Unidentified Participant

Okay, got it. That answers my question. Thank you so much.

Operator

The next question is from the line of Akshay from CD Integrated Services. Please go ahead.

Unidentified Participant

Hello sir. Congrats on the good set of numbers. Sir, my first question is: what is our sourcing strategy for the GPUs? Do we source GPUs from Nvidia and then add some value, such as building our own rack architecture, or do we source them from companies like HP or Netweb Technologies?

Tarun Dua

Okay, so that's a good question. GPUs do not work in isolation, obviously. Before the Hopper architecture, during the Ampere architecture era of the A100s, A30s, A40s and so on, there was a very large prevalence of GPUs being deployed in the PCIe form factor, where you typically bought the GPUs and deployed them in compatible servers that were on the hardware compatibility lists of major vendors like Nvidia or AMD or Intel. That used to be the approach. Today it is a much more integrated approach, where a vendor like Dell or HP or Netweb or Supermicro builds the GPU servers; I think there are 10 to 15 such vendors who build GPU servers. So the approach today is to buy complete integrated hardware whose design is done by a principal like Nvidia or AMD or Intel, and the major bill of material also comes from Nvidia or Intel or AMD.

Currently, of course, we are very much focused on Nvidia. That does not preclude us from working with other players in the future. And we buy these integrated servers; typically that is called the HGX line, which is the major one. Some of the other GPUs, like the L40S and L4, continue to follow the older model of deploying PCIe cards into boxes which typically come without GPUs, and we do that as well. But these are less powerful GPUs; the more powerful GPUs typically come in pre-built integrated boxes from the major GPU server vendors.

Unidentified Participant

Okay sir, very well understood. And sir, my second question is: what is the competitive intensity in our kind of business? Do we face intense competition from multinational companies like Amazon Web Services and Google Cloud?

Tarun Dua

AI has become very, very mainstream since 2019-20, which is when we started working on AI, and the competition has become far more intense today. Competition in the compute and technology world has always been intense, and that has definitely increased. There are a lot more players today doing cloud GPU infrastructure, but as the market has expanded, there is space for an even larger number of players. I think both the competitive intensity and the market will continue to expand.

Unidentified Participant

Okay sir, so what is our competitive edge? If a company has to choose between other providers and ours, what is our competitive edge and why would they choose us?

Tarun Dua

See, everyone will build their niche solutions targeting particular solutions for data science teams or particular industries, and everyone will eventually find their niche to succeed in. Again, like I said, these are early days. We continue to learn from our customers and from the ecosystem, and we have an R&D team with the ability to move very fast and build. We have built a lot of intellectual property that is integrated into our cloud, and this will continue to play a big role in terms of differentiation.

Unidentified Participant

Okay sir. And sir my last question is on the financial side. So…

Operator

Mr. Akshay, can you please fall back in the queue for further questions?

Unidentified Participant

Sure. Thank you, sir.

Operator

Thank you so much. The next question is from the line of Abhishek Shindadkr from InCred Capital. Please go ahead.

Abhishek Shindadkr

Hi. Thanks for the opportunity and congrats, Tarun, for a good 3Q. My first question is, did 3Q play out as you had anticipated or were there any positive, negative surprises?

Tarun Dua

Sorry, I didn’t understand the question.

Abhishek Shindadkr

So did 3Q, in terms of both revenue and margins and in terms of ramp-up, play out as we anticipated, or was there a negative surprise? And if yes, was it towards the end of the quarter or towards the middle of the quarter? Any color in terms of how the revenue played out for us in 3Q?

Tarun Dua

When you are a cloud operator, as we have maintained over the last many calls, we react to every situation; we don't try to predict any situation. So basically nothing surprises us. We operate on the principle that we have the ability to react to anything.

Abhishek Shindadkr

Understood, understood. And regarding your mention of these bursty workloads, was the change among our top large customers?

Tarun Dua

Yeah. So obviously these are the larger customers; that's why the impact is visible. If it were smaller customers, we wouldn't have seen the impact. Our whole philosophy here is that eventually, as we scale up our infrastructure and our customer base with larger customers and larger deals, no single large customer will impact us that much. And that is definitely the path we continue to follow.

Abhishek Shindadkr

Understood. Just one last data point, on depreciation: we saw a significant jump in depreciation. How should I read this in the context of the hardware, especially the GPU numbers that you've shared between the last quarter and this quarter, where the increase is only in the A100s and V100s? So how should I read that depreciation number?

Tarun Dua

That's not a very easy question to answer. As for how to read the depreciation number, I can hand that question over to Megha to explain it better.

Abhishek Shindadkr

The idea is to understand whether any of our assets that are part of the depreciation were not yet billed. Maybe one reason could be the bursty workloads. But, yeah.

Megha Raheja

Depreciation is charged on an SLM (straight-line method) basis over a period of six years, which is the useful life as per the Companies Act as well. So once we make additions in a particular quarter, depreciation continues on a straight-line basis over a period of six years.
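A minimal sketch of the straight-line schedule described here, with a hypothetical asset cost (not a company figure) and an assumed nil residual value:

```python
# Straight-line (SLM) depreciation over a six-year useful life, as described on the call.
# The asset cost is a hypothetical placeholder; residual value is assumed to be nil.
asset_cost_inr_mn = 600.0
useful_life_years = 6
residual_value = 0.0

annual_dep = (asset_cost_inr_mn - residual_value) / useful_life_years   # 100.0 per year
quarterly_dep = annual_dep / 4                                          # 25.0 per quarter

# Each quarter's GPU additions start their own six-year schedule, so reported
# depreciation steps up with every quarter of capex even if billing lags utilization.
print(f"Annual: INR {annual_dep:.1f} mn, Quarterly: INR {quarterly_dep:.1f} mn")
```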

Abhishek Shindadkr

Understood. Got it. Thank you. Thanks for taking my question.

Operator

Thank you. Ladies and gentlemen, please limit your questions to one per participant as there are several people waiting for their turn. The next question is from the line of Pankit Shah from Narrow Wealth. Please go ahead.

Unidentified Participant

So actually I wanted to understand, on the platform side, what are we doing that will, say, differentiate us or make our customer acquisition journey smoother? Something on the cloud side, like…

Tarun Dua

Yeah. So there would not be any one single thing that will have the major impact. But having an integrated platform that works seamlessly across multiple functions, having a good user experience for data scientists, DevOps and developers, and incorporating into the platform a lot of abilities derived from AI, I think that is the key to success. Essentially it's like any software adoption cycle, where you keep taking product feedback from customers, the industry and the market and continue to build at a rapid pace. I think that creates a sustainable long-term advantage for the platform.

Unidentified Participant

So on working on this end to end, as you're saying, are there currently only a limited few players doing it, or how should we look at this?

Tarun Dua

See, one key advantage of our platform is that it has been in continuous operation for the past 10 years now. That decade-long experience of running some very critical services has made the entire platform very battle-tested. So the core platform continues to be very robust and secure, and we continue to build on the same principles, focusing on the reliability, scalability and features that our customers require.

Unidentified Participant

Okay. Actually, I was trying to understand, on the integration side, the software capabilities where we can move from more of a capex-led business to more of a software business, which would be more recurring in nature over the longer term.

Tarun Dua

I didn't fully understand the question. The cloud consists of all these things; cloud is a catch-all term for what the data center provides to you, the physical hardware in terms of servers, switching infrastructure and storage infrastructure that gets deployed, all of which gets integrated with the software and all these capabilities.

Obviously there are multiple delivery approaches for these capabilities. It could be in the form of a public cloud, a private cloud, on-premise infrastructure, or eventually in the form of services. All of them constitute various parts of cloud infrastructure; you can say they are various ways of looking at the same thing. And software obviously constitutes a key piece in all of this.

Operator

Thank you sir. The next question is from the line of Hardik Gandhi from HPMG Shares and Securities. Please go ahead.

Hardik Gandhi

Hello sir. Am I audible?

Tarun Dua

Yes.

Hardik Gandhi

So just two questions from my end, both about timelines. First, you mentioned that we are doing a data center expansion, so I just wanted to know by when we are planning on executing that. And the second part of the same question is that we've applied for a tender in the government AI project, right? So any update or any timeline on when we can expect an answer on that front?

Tarun Dua

We are trying to build the second new location, near Chennai in the south, as quickly as possible. And second, whatever is happening at India AI is highly visible, public information, so as we see any impact from that, we will obviously inform all stakeholders. Currently, what has happened is that there has been a technical evaluation, which we have qualified. The financial bids have been opened, but the L1s have not been declared as yet; that is the current status at India AI. Once the India AI team declares the L1 rates it has received, I think at that point it would ask for those rates to be matched by players who want to be empaneled, and then some of the players could get empaneled.

Hardik Gandhi

Understood. But you did not mention the timeline for the data center, the Chennai one. When would that be operational?

Tarun Dua

We are trying to do it as soon as possible; I guess during the next quarter it would get operationalized. Whether we will be able to deploy all the services in the first quarter itself remains to be seen. Our effort would be to make as many of the services we have in the current location available in the second location as well.

Hardik Gandhi

Understood sir. And I could see a huge amount of other income. Is that a one-time income? Can you please explain that?

Tarun Dua

I think majorly it's the treasury income from the recent fundraise, to my understanding.

Hardik Gandhi

Understood. And just one last bit. For the last quarter, I can see the ARPU has reduced.

Tarun Dua

Some of the larger customers on the training side have churned or downscaled, so that has resulted in the decline.

Hardik Gandhi

So when we say we are going to get back to normal, that will normalize over a longer period of time, right? So how long are we expecting? Are we expecting these numbers to remain at these levels?

Tarun Dua

We can't put very sharp timelines on that. Obviously the AI industry, AI infrastructure and compute all continue to expand. We continue to build the capabilities that we see our customers need, and we continue to convince newer and bigger customers of our capabilities. It would be hard to put a timeline on when that starts to show up in the numbers.

Hardik Gandhi

Okay. So on the safer side, can we take this as a conservative number and continue, or do you expect much more churn in the short term?

Tarun Dua

See, like I said, we are not really predicting anything; we are reactive rather than predictive. We see these as part of the journey; training workloads churning is not an unexpected event. The increase in the sales cycle is something we probably didn't anticipate, but over a period of time that longer sales cycle gets mitigated by putting more effort into sales and by expanding, in parallel, the number of conversations we are having.

Hardik Gandhi

Understood. So thank you so much for answering the questions. Have a good one.

Operator

Thank you. The next question is from the line of Amay from Ambit Capital. Please go ahead.

Unidentified Participant

Hi, two questions quickly. You spoke about workloads being more on the training side. I want to understand: are the workloads lower on the inference side due to the demand from customers, or is it because of the capabilities of E2E? That's the first question.

And the second question is, obviously you'll try to increase your data center capacity. With a lot of colo capacity set to hit the market in the next two, three years, would you prefer utilizing colo capacity in the future, keeping the asset-light model, or would you prefer building out your own data centers?

Tarun Dua

Sure. So we have always preferred not to build physical data centers, so we'll continue to rely on colocation. That is one. With regard to inference and training, we are a fully capable player; anyone who has tested our platform to run inference obviously has the ability to run inference on us.

Inference, by its very nature, grows slowly as the adoption of AI by the end customers of enterprises grows. So the initial volumes required for inferencing are always small, and that scales up over a period of time. Inferencing also starts much smaller than a typical training workload. But over a period of time we expect a normalization between training and inference to somewhere around 60:40 in favor of training, or 50:50.

Unidentified Participant

So just so that I get this right, my understanding is your training happens during the development phases and inference is where you actually…

Tarun Dua

It's a continuous process, although training teams could take a break; sometimes they would be running multiple trainings in parallel, and trainings do tend to get downscaled for some periods. For example, if you look at the December period, a lot of training teams would also be taking some vacation. I'm just guessing; this is all conjecture. The training teams could decide that since it's the year end, let's wrap up the trainings we were running, come back next year and then redo those trainings.

There could of course be other product-related reasons; I'm just making a conjecture here. Ultimately, training would not be a 24x7, 365-days-a-year activity; it would be more like a nine-months-out-of-twelve kind of activity. But training during those 12 months doesn't really go away, because you're always building newer features for your customers or improving whatever you are doing today.

Unidentified Participant

Great. Thank you so much.

Operator

Thank you. The next question is from the line of Par Podar from VMPL. Please go ahead.

Unidentified Participant

Hello sir, I have a small query. If we look at the revenue compared with last year, I mean last year and last quarter, the depreciation and the other income are very high relative to revenue, and again the expenses are very high compared with last year. So I am not able to digest it: do we have a plan for the revenues so that we can match that? Last quarter the share price reached 5,000 rupees and it is currently trading at 3,500. So are we comfortable, or are we confident enough that we will reach that price again?

Tarun Dua

No comments. We never comment on the share price.

Unidentified Participant

Okay sir, but still, on the revenue part, we can leave the share price aside. The revenue that…

Tarun Dua

I mean, we don't predict the revenue, we react to the revenue, in the sense that we are obviously trying to build a lot of scale in infrastructure and capabilities. Over the medium to long term, AI is a very large market with a large number of players, and I think that market will continue to grow; over the medium to long term we obviously see that we'll continue to grow.

Unidentified Participant

Okay, thank you.

Operator

Thank you. The next question is from the line of Sumit Jaiswal, an Individual Investor. Please go ahead.

Sumit Jaiswal

Good evening. Hello.

Tarun Dua

Yeah, hi Sumit ji.

Sumit Jaiswal

Hi. Recently I went through your PPT, and I have been following the India AI mission under which the government is to procure the 10,000 GPUs. So my question is, how are you looking at the progress going ahead, not just this year but over the next two, three, four years, as the government tries to democratize access to GPUs and AI?

Tarun Dua

See, that's a medium- to long-term outlook, and obviously that outlook is very bright. There is a clear government focus on expanding the role of AI in the Indian economy, and as a country we should not be left behind. We are seeing that in enterprises as well, where everyone is trying to figure out how to adopt AI, and there are implementations of AI that a lot of enterprises are already working on. So that's definitely the overall outlook.

So the India AI mission is obviously a net positive for the entire AI industry in India, and it would help expand the overall market regardless of who becomes the biggest beneficiary of the mission.

Regardless of that, the market would certainly expand because of the existence of AI mission and the budgetary support from the government over there. And overall like we continue to maintain a very positive outlook for AI compute infrastructure and AI services in India.

Operator

Thank you. The next question is from the line of Ashwin Kedia from Alchemy Capital Management. Please go ahead.

Tarun Dua

Yeah, welcome back.

Ashwin Kedia

Hello, can you hear me?

Tarun Dua

Yes, yes.

Ashwin Kedia

Yeah. I’m curious, what is your planned capex for the next quarter or next two quarters? And have you all placed an order for the Blackwell GPUs from Nvidia yet? Or what’s the strategy with acquiring those GPUs?

Tarun Dua

So the plan is to begin with an initial number of Blackwell GPUs. Currently we have not placed orders for Blackwell GPUs, although we have a lot of conversations going on around acquiring them. As we plan the initial capacity, we will obviously keep all the stakeholders informed about that. And yes, absolutely, we do intend to build a significant amount of capacity on the Blackwell range as and when it becomes available.

Ashwin Kedia

Is there any planned capex before that, outside of the Blackwell range, on the Hopper range in the meantime?

Tarun Dua

So there could be reactive capex on the Hopper range; it would depend on the outlook coming from the sales conversations we are having.

Operator

Thank you. The next question is on the line of Hardik Sathya, an Individual Investor. Please go ahead.

Hardik Sathya

Good evening, Tarun.

Tarun Dua

Yeah, hi Hardik ji.

Hardik Sathya

Yeah, so on the India AI side, there was a technical requirement of having a thousand GPUs made available within a six-month timeframe.

Tarun Dua

We already have more than a thousand GPUs, so that is a requirement we practically meet out of the box.

Hardik Sathya

Should we assume that is already part of your ecosystem and is being consumed by others right now? Whenever you get this tender through, will you have it immediately ready, or again, do we have a process…

Tarun Dua

We have enough flexibility to be able to significantly expand our capacity as and when needed.

Hardik Sathya

So it can be procured at shorter notice with all the…

Tarun Dua

Absolutely.

Hardik Sathya

And the second question, on the Chennai data center: any particular reason why we are adding that second location, given that our current capacity is not fully…

Tarun Dua

We have one major location in Delhi NCR, or rather two major locations in Delhi NCR which are joined together with a big link, and we have a smaller location in Mumbai. Now, Chennai gives us access to a very different seismic zone and a very different type of connectivity compared to what Mumbai and Delhi have; the cable landing stations in the south are significantly different from the landing stations you get near Delhi and Mumbai. So in all respects Chennai or Bangalore was a good choice for the second location, not a secondary location; both of these locations would serve primary workloads of every possible variety.

Operator

Thank you, sir. Ladies and gentlemen, due to time constraint, this was the last question for today’s conference call. I now hand the conference over to the management for the closing comments.

Tarun Dua

Yeah. Thank you everyone for listening to our conference call and thank you for your questions. We continue to look forward to working with all of you over the long term. Thank you everyone.

Operator

On behalf of Go India Advisors LLP, that concludes this conference. Thank you for joining us. You may now disconnect your lines.
