Silicon Valley's favourite philosophy, longtermism, has helped to frame the debate on artificial intelligence around the idea of human extinction.
But increasingly vocal critics warn that the philosophy is dangerous, and that its obsession with extinction distracts from real problems associated with AI, such as data theft and biased algorithms.
Author Emile Torres, a former longtermist turned critic of the movement, told AFP that the philosophy rested on the kind of principles used in the past to justify mass murder and genocide.
Yet the movement and linked ideologies like transhumanism and effective altruism hold huge sway in universities from Oxford to Stanford and throughout the tech sector.
Venture capitalists like Peter Thiel and Marc Andreessen have invested in life-extension companies and other pet projects linked to the movement.
Elon Musk and OpenAI's Sam Altman have signed open letters warning that AI could make humanity extinct, though they stand to benefit by arguing that only their products can save us.
Ultimately, critics say this fringe movement holds far too much influence over public debate about the future of humanity.
Longtermists believe we are duty-bound to try to produce the best outcomes for the greatest number of humans.
This is no different from 19th-century liberals, but longtermists have a much longer timeline in mind.
They look to the far future and see trillions upon trillions of humans floating through space, colonising new worlds.
They argue that we owe the same duty to each of these future humans as we do to anyone alive today.
And because there are so many of them, they carry much more weight than people alive today.
This kind of thinking makes the ideology "really dangerous", said Torres.