Building Trust in AI and the Future of Digital Identity: Interview with Srikar Varadaraj

In a new episode of the Washington AI Network podcast, Srikar Varadaraj, the founder and CEO of Eternis, a company addressing the challenges of digital identity and creating a trusted execution environment, outlines the protocols needed to build a “verified internet,” where data integrity and user sovereignty are prioritized. He discusses the critical role of AI labs, the adoption hurdles for new standards, and the necessity of open collaboration across industry and government. Varadaraj tells host Tammy Haddad the ultimate goal is that any action that’s digital in nature has a human root of trust.

Varadaraj gave us an example of what autonomous digital actions might look like: “what it means for your agent to do stuff on your behalf over the internet and in the real world, and what identity means in this world if most actions are being executed by things that are not human.”

“AI agents will revolutionize the way we interact with technology. Imagine a world where your personal AI can manage your entire schedule, book flights, and even negotiate prices on your behalf. However, these agents will have unprecedented access to sensitive personal data,” said Varadaraj while explaining the importance of user privacy. “For this to work safely, we need systems that prioritize user privacy and transparency. These tools must ensure that the data shared remains under your control and is never misused or compromised.”

Varadaraj further explained the future of digital identity, “This is your data and it belongs to you. It may be of some minor significance now, but when agents are acting on your behalf, they’ll need to be able to prove things about you constantly to external services. The world of applications gets so much richer when you can have arbitrary things being done on your behalf and in this world, proofs become extremely important.”

“Ultimately, the end state should be that humans own their own identity,” said Varadaraj about the future of humans’ trust in AI systems. “Humans are able to give delegated access to AI agents, and you should be able to do this flexibly in a way that’s corresponding to your preferences.”
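
The delegation Varadaraj describes can be pictured as a signed, scoped, time-limited grant that an external service checks before honoring an agent’s request. The sketch below is purely illustrative and is not Eternis’s design; the field names, the shared-secret HMAC, and the issue_grant/verify_grant helpers are assumptions standing in for whatever a production system (most likely public-key based) would actually use.

```python
# Hypothetical sketch of scoped, human-signed delegation to an agent.
# Field names and helpers are illustrative, not an Eternis API.
import hashlib
import hmac
import json
import time

USER_SECRET = b"key-held-by-the-human"  # placeholder; a real system would use public-key signatures

def issue_grant(agent_id: str, scope: list[str], ttl_seconds: int) -> dict:
    """The human issues a signed, time-limited grant describing what the agent may do."""
    grant = {"agent": agent_id, "scope": scope, "expires": int(time.time()) + ttl_seconds}
    payload = json.dumps(grant, sort_keys=True).encode()
    grant["sig"] = hmac.new(USER_SECRET, payload, hashlib.sha256).hexdigest()
    return grant

def verify_grant(grant: dict, action: str) -> bool:
    """An external service checks the signature, the expiry, and whether the action is in scope."""
    body = {k: v for k, v in grant.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(USER_SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, grant["sig"])
            and time.time() < grant["expires"]
            and action in grant["scope"])

grant = issue_grant("travel-agent-01", ["book_flight"], ttl_seconds=3600)
print(verify_grant(grant, "book_flight"))     # True: signed, unexpired, and in scope
print(verify_grant(grant, "transfer_funds"))  # False: outside the delegated scope
```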

“What we’ve built is a simple notary network system which runs your browser traffic through what’s called a trusted execution environment,” said Varadaraj. He went on to explain the evolution of the internet and where the technology is going, “We’d like to welcome others to join us as well, but this is the evolution of the internet as we see it. Eventually, the protocols that underlie internet traffic themselves might change and that will allow, let’s say, what we call the verified internet, but we’re not there yet.”

“When the internet became popular… people were interacting with it but didn’t necessarily know whether they could trust the information they were seeing or if they were receiving malicious information. The internet collectively had to upgrade to a protocol called TLS (Transport Layer Security).” Varadaraj further explained why we need to create foundational protocols for trust, “The reason you’re able to trust the information that’s coming to you now is because this protocol is running. Similarly, I think there will be a host of new protocols that need to emerge to provide implicit trust in AI systems we use daily.”
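
For readers who want to see what that TLS layer actually does, the short sketch below uses only Python’s standard library to open a verified connection; example.com is a placeholder host, and the snippet illustrates nothing more than the certificate and hostname checks Varadaraj is alluding to.

```python
# Minimal sketch: what TLS verification gives a client today.
# "example.com" is a placeholder host used for illustration.
import socket
import ssl

context = ssl.create_default_context()  # loads trusted root CAs and enables hostname checking

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        cert = tls_sock.getpeercert()   # the certificate chain was validated during the handshake
        print(tls_sock.version())       # e.g. "TLSv1.3"
        print(cert["subject"])          # the identity the server proved it controls
```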

When asked about the types of protocols that need to be developed, Varadaraj said, “Where we are is we’ve built some front-end interfaces that demonstrate the usefulness of our tools and we’re moving on to further research problems. For example, let’s say you have a website and you have a robots.txt file, which most websites do these days. This is basically a file that says, ‘Hey, robots or bots should pay attention to things that we allow and don’t allow.’ How do you enforce this behavior? Right now, it’s not enforceable. You could create a bot that just ignores all of the robots.txt files, but that’s just one example of many of how having intelligent agents on the internet will require new protocols. That’s what we’re working on.”
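
To make that enforcement gap concrete, here is a minimal robots.txt checked with Python’s built-in urllib.robotparser; the rules and the “my-agent” name are made up for illustration. The point is in the final comment: an honest crawler consults these rules, but nothing in the protocol compels a bot to do so.

```python
# Minimal sketch: how robots.txt works today, and why it is advisory only.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("my-agent", "https://example.com/private/data"))  # False
print(parser.can_fetch("my-agent", "https://example.com/blog/post"))     # True
# Nothing forces a crawler to call can_fetch() at all -- that gap is the
# enforcement problem Varadaraj describes.
```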

While explaining the importance of global identity systems, Varadaraj said, “Every government has a large digital ID program… India is a great example of a government that’s rolled this out at scale. The ultimate goal really is that any action that’s digital in nature has a human root of trust. You know that a human originated this action, and if the action is malicious, then you can trace it back to the human… If we don’t get this unified identity system, a few things become extremely difficult, like the distribution of resources at scale.”

This is what Varadaraj had to say on the timeline for AGI, “Look, I think [Sam Altman] would have more visibility than most. I will say that all my skeptical friends who thought it might not happen during our lifetimes have changed their tune over the past three years. I’ve been thinking that AGI will be here before 2030. Again, these are all estimates that have heavy dependence on how fast AI labs go. If it was just one AI lab doing things, we might have gotten there much later. The competitive pressures between xAI, OpenAI, Anthropic, and all the other smaller labs, and the research coming out of DeepMind—I think the game theory just doesn’t work out in the way of AGI coming much further than 10 years away.”

Varadaraj explained that collaboration is necessary to build the future, “The internet is, let’s say, a collection of standards that a lot of different people have agreed to run independently, and there are bodies like the IETF that have designed a lot of the things that have gone on to become the internet, or reached consensus on, ‘Hey, this is how we should all think of the internet.’ I think ultimately, in order for a verified internet to emerge, you need to interact with these bodies.”

When discussing his vision for his company Eternis, this is what Varadaraj had to say, “Our team is mathematicians, cryptographers, and people who believe in this state of humans being the root of trust of any action on the internet. That’s what we care about—making sure the internet is robust and verifiable. By the end, when we get AGI, we can trust the world around us but also have the tools necessary to preserve human agency. I think that is central to our mission.”

Listen to the full episode on Audioboom, iTunes, or wherever you get your podcasts.
