GPT-4 is here: what scientists think
NEWS | 16 March 2023 | by Katharine Sanderson
On 14 March, artificial-intelligence company OpenAI unveiled GPT-4, the latest iteration of its language model, which contains big improvements over the previous version. Its upgraded ability to generate text, images and even computer code from almost any prompt has attracted the attention of the scientific community. Although GPT-4 is not yet widely available, researchers say that its emergence, and the model's constantly improving prowess, have the potential to transform science.
Nevertheless, not everyone agrees with OpenAI's approach, and some raise concerns about the unknown outcomes that AI might bring, ranging from the model's black-box nature to danger prevention and ethical questions.
Since GPT-4 appeared, scientific researchers have been puzzled about how the model was trained, what data were used and how it actually works. Sasha Luccioni, a research scientist specializing in climate at Hugging Face, reflects that closed-source models such as these are, by and large, essentially a dead end for science. Andrew White, a chemical engineer at the University of Rochester, New York, has had privileged access to GPT-4 as a “red teamer” testing the platform. After putting queries to the bot about the procedures of chemical reactions, he says he was not, in fact, impressed; but things changed drastically, and new kinds of abilities emerged, when he gave GPT-4 access to scientific papers, which surprised him.
Could GPT-4 allow dangerous chemicals, or other things that disrupt social harmony and stability, to be made? Engineers from OpenAI indicated that advice offered by “red teamers” such as White will be fed back into the model to discourage GPT-4 from producing harmful outputs. Despite all this, without an understanding of how GPT-4 was trained, OpenAI's assurances about safety still fall short for the public, and any problems are impossible to remedy from outside.
Claudi Bockting, a psychologist at the University of Amsterdam, and her colleagues argued earlier this year that there is an urgent need to develop a set of ‘living’ guidelines to govern how AI tools such as GPT-4 are used and developed, because legislation and regulation around AI are struggling to keep up with the pace of its development.
GPT-4 and its successors will shake up science. This is not simply a matter of technological evolution: it is a cross-cutting issue, one that amounts to a huge change in the infrastructure of science and that remains to be discussed.