Can you believe it? Sundar Pichai sat down to discuss Google's AI, Microsoft's AI, OpenAI, and, yes, more AI.

We're discussing AI in very practical terms, but much of the public conversation revolves around whether it will ultimately usher in an ideal, beneficial society or bring about the downfall of humanity. What is your view of these long-term concerns?

AI is a profoundly important technology, and one we will be working on for a long time. It carries risks in the short, medium, and long term, and it is crucial to take all of them seriously while allocating resources according to the stage we are at. Today, large language models (LLMs) sometimes generate false information. That can be acceptable when you're brainstorming creative names for a pet, but not when you're asking for the correct medication dosage for a 3-year-old. So right now, the responsibility lies in testing AI for safety and making sure it doesn't compromise privacy or introduce bias. In the medium term, I worry about AI displacing or augmenting the labor market; it will be a disruptive force in certain areas. And there are long-term risks in developing highly capable intelligent agents: we need to make sure they stay aligned with human values and remain under our control. In my opinion, all of these concerns are valid.

Have you seen the film Oppenheimer?

I'm reading the book. I really like to read the book before I watch the movie.

I ask because you wield significant influence over a powerful and potentially dangerous technology. Does the Oppenheimer story resonate with you in that regard?

Those of us working on powerful technologies, whether AI or genetic engineering with CRISPR, have a responsibility. It is important to engage in these debates and to learn from past experience.

Google is an enormous company. Current and former employees complain that bureaucracy and caution have slowed them down. All eight authors of the influential Transformer paper, which you cite in your letter, have left the company, some saying Google moves too slowly. Can you mitigate that and make Google more like a startup again?

As you scale a company, you have to work to cut bureaucracy and stay as efficient as possible. We have moved fast in areas like Cloud, YouTube Shorts, Pixel, and AI-powered search.

Yet we still hear these complaints, even from people who loved the company and have since left.

Running a large company, there are times when you look around and realize you haven't moved as fast as you could have in certain areas, and you work to fix that. [Pichai raises his voice.] Do I hire candidates who join us because they felt change came too slowly at some other big company? Absolutely. Are we consistently attracting top talent from around the world? Yes. It's equally important to remember that we have an open culture: people speak freely about the company. Yes, we have lost some employees, but we are also retaining people better than we have in a long time. Did OpenAI lose some of the original team that worked on GPT? The answer is yes. In fact, in certain areas I think we're moving faster than I can remember from 10 years ago.