European Parliament Approves Artificial Intelligence Bill Requiring Disclosure of Copyright on Generative AI Training Data
Source: The Paper
Reporter Fang Xiao
• Suppliers of foundation models will be required to declare whether they used copyrighted material to train their AI. For technology companies such as Google and Microsoft, fines for violations could run into billions of dollars.
• The next step is tripartite negotiations involving the member states, the Parliament and the European Commission. A major area of contention is the use of facial recognition: the European Parliament voted to ban real-time facial recognition, but questions remain over whether exemptions should be allowed for national security and other law enforcement purposes.
On June 14, local time, the European Union's Artificial Intelligence Act (AI Act) took an important step towards becoming law: the European Parliament voted to pass the bill, banning real-time facial recognition and imposing restrictions and new transparency requirements on generative artificial intelligence tools such as ChatGPT.
The bill now enters the final stage before it becomes EU regulation: officials will try to reach a compromise on the draft with the EU executive and the member states, where differences remain. The legislative process must be completed by January if the law is to enter into force before next year's EU elections.
"This moment is very important," Daniel Leufer, a senior policy analyst focused on artificial intelligence at the Brussels office of Access Now, told Time. "What the EU says poses an unacceptable risk to human rights will be seen as a blueprint for the world."
The EU-approved version of the law proposes that any AI applied to "high-risk" use cases such as employment, border control and education must comply with a series of safety requirements, including conducting risk assessments, ensuring transparency and keeping logs. The bill would not automatically deem "general purpose" AI such as ChatGPT high risk, but would impose transparency and risk-assessment requirements on "foundation models," the powerful AI systems trained on vast amounts of data. For example, suppliers of foundation models, including OpenAI, Google and Microsoft, would be required to declare whether they used copyrighted material to train their AI. There is, however, no similar requirement to declare whether personal data was used during training.
**How do these rules work?**
First proposed in 2021, the European Union's "Artificial Intelligence Act" will apply to any product or service that uses artificial intelligence systems.
The bill classifies AI systems into four risk levels, from minimal to unacceptable. Higher-risk applications, such as recruitment tools and technology targeting children, will face stricter requirements, including greater transparency and the use of accurate data.
One of the EU's main objectives is to guard against any threat to health and safety posed by artificial intelligence, and to protect fundamental rights and values.
This means certain AI uses are banned outright, such as "social scoring" systems that judge people based on their behavior, and AI that exploits vulnerable groups (including children) or uses subliminal manipulation that could cause harm, such as interactive dialogue tools that encourage dangerous behavior. Predictive policing tools, which forecast who might commit crimes, will also be banned.
In addition, AI systems used in categories such as employment and education that affect the course of a person's life will face strict requirements, such as being transparent to users and taking steps to assess and reduce the risk of bias posed by algorithms.
Most AI systems, such as video games or spam filters, fall into the low-risk or no-risk category, the European Commission said.
A major area of contention is the use of facial recognition. The European Parliament voted to ban the use of real-time facial recognition, but questions remain over whether exemptions should be allowed for national security and other law enforcement purposes. Another rule would prohibit companies from scraping biometric data from social media to build databases.
On the same day, a group of right-wing lawmakers in the European Parliament made a last-minute attempt to strike the bill's proposed ban on real-time facial recognition, but the chamber rejected the move.
Enforcement of the rules will depend on the EU's 27 member states. Regulators could force companies to withdraw apps from the market. In extreme cases, violations can result in fines of up to 30 million euros (about $33 million) or 6 percent of a company's annual global revenue, and for tech companies like Google and Microsoft, fines can run into billions of dollars.
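The penalty structure described above is "the greater of two caps": a flat 30 million euros or 6 percent of annual global revenue. A minimal sketch of that arithmetic (the revenue figure below is a hypothetical round number, not taken from the article):

```python
# Maximum penalty as reported: the greater of a flat 30 million EUR
# or 6% of a company's annual global revenue.
FLAT_CAP_EUR = 30_000_000
REVENUE_SHARE = 0.06

def max_fine_eur(annual_global_revenue_eur: float) -> float:
    """Return the larger of the flat cap and 6% of global revenue."""
    return max(FLAT_CAP_EUR, REVENUE_SHARE * annual_global_revenue_eur)

# A small firm with 100 million EUR revenue hits the flat cap:
print(max_fine_eur(100e6))   # 30 million EUR

# For a hypothetical firm with 280 billion EUR revenue, the 6% share
# dominates, landing in the billions, as the article notes:
print(max_fine_eur(280e9))   # about 16.8 billion EUR
```

This is why the "billions of dollars" framing applies only to the largest technology companies: for most firms the 30 million euro floor is the binding cap.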
**What does the bill mean for ChatGPT?**
The bill's initial draft said little about chatbots, requiring only that they be labeled so users would know they were interacting with a machine. Negotiators later added provisions covering popular general-purpose AI systems like ChatGPT, subjecting the technology to some of the same requirements as high-risk systems.
A key addition is that the bill requires thorough documentation of any copyrighted material used to train AI systems to generate text, images, video and music similar to human works. This will let content creators know if their blog posts, e-books, scientific papers or songs have been used to train the algorithms that power systems like ChatGPT. They can then decide whether their work can be copied and seek compensation.
Some experts concerned about the safety risks posed by AI models argue that the bill places no limits on the computing power AI systems can use. With each new release, the amount of computation used to train a large language model like ChatGPT increases exponentially, greatly improving its capabilities and performance. "The more computation is used to train an AI system, the more powerful the AI will be. The greater the capability, the greater the potential for risk and danger," Andrea Miotti, director of strategy and governance at AI safety startup Conjecture, told Time.
Miotti noted that it is relatively easy for researchers to measure the total computing power of a system because the chips used to train most cutting-edge AI are a physical resource.
**Why is EU regulation important?**
Time pointed out that the EU is not a leading player in the development of cutting-edge artificial intelligence; that role belongs to the United States and China. But the EU often sets the regulatory tone, acting as a forerunner in reining in corporate power.
The sheer size of the EU's single market, with 450 million consumers, makes it easier for companies to comply with one set of rules than to develop different products for different regions, experts say. By setting common rules for AI, the EU is also trying to grow the market by instilling confidence among users.
"This is enforceable regulation, and the fact that companies will be held accountable is significant," said Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties, noting that places such as the US, Singapore and the UK have offered only "guidance and advice." "Other countries may want to adapt and copy" the EU rules, he said.
Some other countries are also stepping up the pace of regulation. For example, British Prime Minister Rishi Sunak plans to hold a world summit on AI safety this fall. "I want the UK to be not just an intellectual home, but a geographic home for global AI safety regulation," Sunak said at a tech conference this week. The UK summit will bring together people from "academia, business and government" to work on a "multilateral framework."
Francine Bennett, acting director of the Ada Lovelace Institute, told The New York Times: "Rapidly evolving technologies that can be rapidly repurposed are of course difficult to regulate, because even the companies developing them don't know how things will play out. But it would certainly be worse for all of us if they continued to operate without adequate regulation."
However, the Computer and Communications Industry Association argues that the EU should avoid overbroad regulation that would stifle innovation. Boniface de Champris, the group's Europe policy manager, said: "Europe's new AI rules need to effectively address well-defined risks while giving developers enough flexibility to deliver AI applications for the benefit of Europeans."
**What's next?**
It could take years for the bill to take full effect. The next step is tripartite talks involving the member states, the Parliament and the European Commission, during which the bill could face further changes before the parties agree on its final wording.
During the upcoming tripartite dialogue, the Council of the EU, which represents member-state governments, is expected to argue strongly for exempting AI tools used by law enforcement and border agencies from the requirements on "high-risk" systems, according to Leufer.
The bill is expected to receive final approval by the end of the year, followed by a grace period for businesses and organizations to adapt, usually around two years. But Brando Benifei, an Italian member of the European Parliament who is leading work on the bill, said they would push for rules that allow for faster adoption of fast-growing technologies such as generative AI.
To fill the gaps before the legislation takes effect, Europe and the United States are drafting a voluntary code of conduct that officials promised in late May to draw up within weeks and potentially expand to other "like-minded countries".