
Draft EU AI Act regulations could have a chilling effect on open-source software

In brief New rules drafted by the European Union aimed at regulating AI could prevent developers from releasing open-source models, according to American think tank Brookings.

The proposed EU AI Act, yet to be signed into law, requires open-source developers to ensure their AI software is accurate and secure, and to be transparent about risk and data use in clear technical documentation.

Brookings argues that if a private company deployed such a public model, or used it in a product, and got into trouble because of some unforeseen or uncontrollable effect of the model, the company would probably try to pin the blame on the open-source developers and sue them.

That prospect might force the open-source community to think twice about releasing code, and would, unfortunately, mean AI development is driven largely by private companies. Proprietary code is difficult to analyse and build upon, so innovation would be hampered.

Oren Etzioni, the outgoing CEO of the Allen Institute for AI, reckons open-source developers should not be subject to the same stringent rules as software engineers at private companies.

“Open source developers should not be subject to the same burden as those developing commercial software. It should always be the case that free software can be provided ‘as is’ — consider the case of a single student developing an AI capability; they cannot afford to comply with EU regulations and may be forced not to distribute their software, thereby having a chilling effect on academic progress and on reproducibility of scientific results,” he told TechCrunch.

New MLPerf results for inference are out

The results of this year's MLPerf inference test, which benchmarks the performance of AI chips from different vendors across numerous tasks in various configurations, have been published this week.

A whopping 5,300 or so performance results and 2,400 power measurements were reported this year, covering inference in the datacenter and on edge devices. The tests look at how fast a hardware system can run a particular machine-learning model, with the data-crunching rates reported in spreadsheets.

It’s no surprise that Nvidia tops the rankings again this year. “In their debut on the MLPerf industry-standard AI benchmarks, Nvidia H100 Tensor Core GPUs set world records in inference on all workloads, delivering up to 4.5x more performance than previous-generation GPUs,” Nvidia gushed in a blog post. “The results demonstrate that Hopper is the premium choice for users who demand utmost performance on advanced AI models.”

Although an increasing number of vendors are taking part in the MLPerf challenge, it can be difficult to get a complete picture of the competition. There are no reported results for Google’s TPU chips in the datacenter track this year, for example. Google did, however, seem to ace MLPerf’s training competition earlier this year.

AI artist discovers horrific face lurking behind images

A viral Twitter thread posted by a digital artist reveals just how strange text-to-image models can be beneath the surface.

Many netizens have found joy and despair in experimenting with these systems to generate images by typing in text prompts. There are all sorts of hacks to adjust the models’ outputs; one of them, known as a “negative prompt”, allows users to find the opposite of the image described in the prompt.
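For readers unfamiliar with the technique, here is a minimal sketch of how a negative prompt is typically exposed in an open-source text-to-image pipeline, using Hugging Face's diffusers library. The model checkpoint and prompt strings are purely illustrative; the article does not say which model or tool Supercomposite actually used.

```python
# Minimal sketch of a "negative prompt" with the diffusers library.
# The checkpoint and prompts below are placeholders, not the artist's setup.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example open-source checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The prompt steers generation towards its description;
# the negative prompt steers generation away from whatever text it contains.
image = pipe(
    prompt="a simple, friendly cartoon logo",
    negative_prompt="blurry, distorted, horror",  # placeholder example
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("logo.png")
```

Conceptually, the negative prompt pushes the model away from a description rather than towards it, which is why probing negative prompts can surface images quite unlike anything a user would normally ask for.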

When an artist who goes by the name Supercomposite on Twitter ran the negative prompt for what was described as an innocent-looking picture of a fake logo, they found something truly horrifying: the face of what looks like a haunted woman. Supercomposite has named this AI-generated woman “Loab”, and when they crossed images of her with other ones, the results always ended up looking like scenes from a horror film.

Supercomposite told El Reg that random images of AI-generated people can often show up in negative prompts. The behaviour is yet another example of the strange properties these models can have, which people are only beginning to probe.

No sentient chatbots here at Google, says CEO Sundar Pichai

Speaking at the Code conference this week, Sundar Pichai contradicted claims made by former engineer Blake Lemoine that Google had built a sentient chatbot.

Lemoine made headlines in July when he announced he thought Google’s LaMDA chatbot was conscious and might have a soul. He was later fired for reportedly violating the company’s privacy policies after he hired a lawyer to chat to LaMDA and assess its legal rights, claiming the machine had told him to do so.

Most people – including Google’s CEO – disagree with Lemoine. “I still think there’s a long way to go. I feel like I get into philosophical or metaphysical talks often about what is sentience and what is consciousness,” Pichai said, according to Fortune. “We are far from it, and we may never get there,” he added.

To stress his point further, he admitted that Google’s AI voice assistant sometimes fails to understand and respond appropriately to requests. “The good news is that anyone who talks to Google Assistant—while I think it is the best assistant out there for conversational AI — you still see how broken it is in certain cases,” he said. ®
