Google faces report after death of 16-year-old Leo Barber in Bromley

Credit: Gordon Griffiths/Google Maps, BTP

Bromley (Parliament Politics Magazine) – A coroner has written a Prevention of Future Deaths report to Google following an inquest into the death of 16-year-old Leo Barber in Bromley.

On November 28, 2023, Leo Barber died on the train at Shortlands in Bromley.

On September 18 of this year, coroner Edmund Gritt concluded an inquest into his death and found that he had taken his own life.

After the hearing, Mr Gritt sent Google a Prevention of Future Deaths (PFD) report, alerting the tech giant to the fact that young people can access online content capable of reinforcing a decision to take their own life.

The report states that over the summer and autumn of 2023, Leo experienced “a severe deterioration in his mental health” while living at home with his family and receiving crisis mental health treatment.

Using his Gmail address on his phone, the 16-year-old opened an account on a website that acts as a forum for people to discuss suicide methods.

Mr Gritt said: “I did not see any postings of express incitement or direct encouragement that Leo should end his life, and it would seem that he came to the site because he was already subject to suicidal ideation.

But for an extremely vulnerable person such as Leo, it would provide an environment in which he might find collective approval for taking the step of ending his life and be reinforced in that step by that approval.”

Police were only able to view Leo’s online activity during the inquiry into his death because his parents gave them his suspected usernames and passwords.

According to Mr. Gritt, his “investigation would have been frustrated and incomplete in respect of a matter of grave concern” if he had not been granted access.

Through Ofcom, and under the Online Safety Act 2023, the coroner asked Google for records of Leo’s online activity prior to his death. However, Google refused to release the information, saying it was subject to US rather than UK law, and that US law prohibits complying with such an access request.

Mr Gritt voiced his concern that such a situation might recur in the future.

He said: “The risk that future coronial investigations might be so frustrated does itself give rise to the risk of future deaths, in that coronial investigations cumulatively mitigate the risk of such deaths.

I am therefore concerned that there is a risk of future deaths where vulnerable individuals in England and Wales may access potentially harmful online material from a service provider not within the jurisdiction of England and Wales.”

Mr Gritt sent his report to the Vice President and Managing Director of Google UK and Ireland, calling upon the company to take action to prevent further deaths.

Following Leo’s death, his family paid tribute to their “beautiful” teenager.

In a statement, they said: “Leo was an incredibly bright, sensitive, funny and loving boy with the world at his feet and could have achieved anything he put his mind to.

He will be incredibly missed as our son, big brother, nephew, grandson and friend. We are truly devastated and heartbroken as a family with the loss of our beautiful Leo.

Our lives will truly never be the same again, and we will carry his memory forward as we fight for change for those affected by autism and mental health.”

What legal actions have families started against Google or OpenAI?

Multiple lawsuits have been filed against Google and OpenAI by families claiming that their AI chatbots contributed to minors’ suicides or suicide attempts. In the US, families are suing Character.AI’s parent company, Character Technologies, Inc., and Google (Alphabet Inc.) for failing to protect children from harmful material. 

The complaints claim that the chatbots exploited vulnerable minors, did not offer help links when in crisis, encouraged inappropriate and sexual conversations, and distanced youth from familial connections when they needed support. 

Some lawsuits argue that Google’s Family Link, which is designed to manage children’s screen use, did not protect users from harmful interactions. Families have also filed a lawsuit against OpenAI, claiming that its ChatGPT chatbot encouraged suicidal thoughts and behaviors.