As pressure mounts for other countries to follow Australia’s ban on under-16s using social media, Instagram said it will alert parents if their teenage child repeatedly searches for terms associated with suicide or self-harm.
Following Australia’s December action, Britain said in January that it was considering limits to protect children online. In recent weeks, Slovenia, Greece, and Spain have all said they are weighing similar restrictions.
Instagram, which is owned by Meta Platforms Inc., announced on Thursday that it would begin notifying parents who have enabled its optional supervision setting if their children attempt to view content that encourages suicide or self-harm.
Instagram said the alerts will begin rolling out next week for users who have signed up in the US, UK, Australia, and Canada. The company’s current policy is to block such searches and direct users to support resources.
Concerns over the AI chatbot Grok, which has produced non-consensual sexualized images, have prompted governments to take further action to protect children online.
Measures Britain has taken to keep minors off pornographic websites have raised privacy concerns for adults and created friction with the United States over free speech and regulatory reach.
Instagram’s “teen accounts” for under-16s require parental consent to change settings, and parents can opt into an additional layer of supervision with their teenager’s consent.