Artificial intelligence (AI) technology is developing at high speed and transforming many aspects of modern life.
However, some experts fear that it could be used for malicious purposes, and may threaten jobs.
AI allows a computer to act and respond almost as if it were human.
Computers can be fed huge amounts of information and trained to identify the patterns in it, in order to make predictions, solve problems, and even learn from their own mistakes.
As well as data, AI relies on algorithms – lists of rules which must be followed in the correct order to complete a task.
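To make the idea of "patterns plus an ordered list of rules" concrete, here is a deliberately simplified Python sketch of a toy spam filter. The training messages, word counts and scoring rule are invented for illustration only, and bear no resemblance to how commercial AI systems are actually built.

```python
# A toy illustration: "training" here just means counting how often each
# word appears in example messages that a person has already labelled
# "spam" or "not spam". All data below is made up for this sketch.

from collections import Counter

# Step 1: labelled examples the program can learn patterns from.
training_data = [
    ("win a free prize now", "spam"),
    ("claim your free reward", "spam"),
    ("meeting moved to friday", "not spam"),
    ("lunch at noon tomorrow", "not spam"),
]

# Step 2: count word frequencies per label - the "pattern" being learned.
word_counts = {"spam": Counter(), "not spam": Counter()}
for text, label in training_data:
    word_counts[label].update(text.split())

# Step 3: an algorithm - a fixed list of rules applied in order - that
# classifies a new message by which label's vocabulary it overlaps most.
def classify(message: str) -> str:
    words = message.split()
    spam_score = sum(word_counts["spam"][w] for w in words)
    ham_score = sum(word_counts["not spam"][w] for w in words)
    return "spam" if spam_score > ham_score else "not spam"

print(classify("free prize inside"))       # -> spam
print(classify("see you at the meeting"))  # -> not spam
```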
The technology is behind the voice-controlled virtual assistants Siri and Alexa. It lets Spotify, YouTube and BBC iPlayer suggest what you might want to play next, and helps Facebook and Twitter decide which social media posts to show users.
AI lets Amazon analyse customers' buying habits to recommend future purchases – and the firm is also using the technology to crack down on fake reviews.
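A rough sense of how "analysing buying habits to recommend purchases" can work is given by the sketch below: a toy "customers who bought this also bought that" counter. The purchase histories are hypothetical, and real recommendation systems are far more sophisticated.

```python
# A toy recommendation sketch: count which items appear together in
# invented shopping baskets, then suggest the most frequent partner.

from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: one set of items per customer.
purchases = [
    {"headphones", "phone case", "charger"},
    {"phone case", "charger"},
    {"headphones", "charger"},
    {"kettle", "mugs"},
]

# Count how often each pair of items is bought together.
pair_counts = Counter()
for basket in purchases:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

def recommend(item):
    """Return the item most often bought alongside `item`, if any."""
    partners = Counter()
    for (a, b), n in pair_counts.items():
        if a == item:
            partners[b] += n
        elif b == item:
            partners[a] += n
    return partners.most_common(1)[0][0] if partners else None

print(recommend("phone case"))  # -> charger
```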
Two powerful AI-driven apps which have become particularly high-profile in recent months are ChatGPT and Snapchat My AI.
They are examples of what is called "generative" AI.
This uses the patterns and structures it identifies in vast quantities of source data to generate new and original content which feels like it has been created by a human.
The AI is coupled with a computer program known as a chatbot, which "talks" to human users via text.
The apps can answer questions, tell stories and write computer code.
But both programs sometimes generate incorrect answers for users, and can reproduce the bias contained in their source material, such as sexism or racism.
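As a very rough illustration of "generating new text from patterns in source data", here is a toy word-chain generator in Python. It is nothing like the large language models behind ChatGPT or My AI, but it shows the same basic idea of predicting the next word from what came before; the sample text is invented for this sketch.

```python
# A toy text generator: it records which word tends to follow which in a
# tiny sample text, then chains those predictions together. Real systems
# use vastly larger models and data; this only shows the basic idea.

import random
from collections import defaultdict

source_text = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat chased the dog around the mat"
)

# Learn the pattern: which words follow each word in the source text.
followers = defaultdict(list)
words = source_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def generate(start, length=10):
    """Generate text by repeatedly picking a plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        options = followers.get(word)
        if not options:
            break
        word = random.choice(options)
        output.append(word)
    return " ".join(output)

random.seed(0)          # fixed seed so the example output is repeatable
print(generate("the"))  # e.g. "the mat and the cat chased the dog ..."
```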
With few rules currently in place governing how AI is used, experts have warned that its rapid growth could be dangerous. Some have even said AI research should be halted.
In May, Geoffrey Hinton, widely considered to be one of the godfathers of artificial intelligence, quit his job at Google, warning that AI chatbots could soon be more intelligent than humans.
Later that month, the US-based Center for AI Safety published a statement supported by dozens of leading tech specialists.
They argue AI could be used to generate misinformation that could destabilise society. In the worst-case scenario, they say machines might become so intelligent that they take over, leading to the extinction of humanity.
However, the EU's tech chief Margrethe Vestager told the BBC that AI's potential to amplify bias or discrimination was a more pressing concern.
In particular she is concerned about the role AI could play in making decisions that affect people's livelihoods, such as loan applications, adding there was "definitely a risk" that AI could be used to influence elections.
Others, including tech pioneer Martha Lane Fox, say we shouldn't get what she calls "too hysterical" about AI, urging a more sensible conversation about its capabilities.
Governments around the world are wrestling with how to regulate AI.
Members of the European Parliament have just voted in favour of the EU's proposed Artificial Intelligence Act, which would put in place a strict legal framework for AI that companies would need to follow.
Margrethe Vestager says "guardrails" are needed to counter the biggest risks posed by AI.
The legislation – which is expected to come into effect in 2025 – categorises applications of AI into levels of risk to consumers, with AI-enabled video games or spam filters falling into the lowest risk category.
Higher-risk systems like those used to evaluate credit scores or decide access to housing would face the strictest controls.
These rules will not apply in the UK, where the government set out its vision for the future of AI in March.
It ruled out setting up a dedicated AI regulator, and said instead that existing bodies would be responsible for its oversight.
But Ms Vestager says that AI regulation needs to be a "global affair", and wants to build a consensus among "like-minded" countries.
US lawmakers have also expressed concern about whether the existing voluntary codes are up to the job.
Meanwhile, China intends to make companies notify users whenever an AI algorithm is being used.
AI has the potential to revolutionise the world of work, but this raises questions about which roles it might displace.
A recent report by investment bank Goldman Sachs suggested that AI could replace the equivalent of 300 million full-time jobs across the globe, as certain tasks and job functions become automated. That equates to a quarter of all the work humans currently do in the US and Europe.
The report highlighted a number of industries and roles that could be affected, including administrative jobs, legal work, architecture, and management.
But it also identified huge potential benefits for many sectors, and predicted that AI could lead to a 7% increase in global GDP.
Some areas of medicine and science are already taking advantage of AI, with doctors using the technology to help spot breast cancers, and scientists using it to develop new antibiotics.