This week in AI: AI ethics continues to deteriorate.

Keeping up with an industry as fast-paced as AI is a difficult task. So, until an AI can do it for you, here’s a quick summary of recent machine learning news, as well as important studies and experiments we didn’t cover on their own.

This week in AI, the news cycle finally (finally!) quieted down a bit ahead of the Christmas season. But that’s not to say there wasn’t plenty to write about, which was both a blessing and a curse for this sleep-deprived writer.

This morning, an AP story caught my eye: “AI image-generators are being trained on explicit photos of children.” LAION, a data set used to train a number of popular open source and commercial AI image generators, including Stable Diffusion and Imagen, contains thousands of images of suspected child sexual abuse. The Stanford Internet Observatory, a watchdog group based at Stanford, worked with anti-abuse organizations to identify the illegal material and report the links to law enforcement.

LAION, a non-profit organization, has since taken its training data offline and pledged to remove the offending material before republishing it. But the incident underscores just how little scrutiny generative AI products are receiving as competitive pressure ramps up.

The proliferation of no-code AI model-building tools has made it frighteningly easy to train generative AI on just about any data set imaginable. Getting those models out the door is a win for startups and tech giants alike. But with a lower barrier to entry comes the temptation to cast ethics aside in favor of a faster path to market.

There’s no denying that ethics is hard. Combing through the thousands of problematic images in LAION, to take this week’s example, will take time. And ideally, developing AI responsibly means working with all of the relevant stakeholders, including organizations that represent groups often disenfranchised and harmed by AI systems.

There are plenty of examples of AI release decisions made with shareholders, rather than ethicists, in mind. Take Bing Chat (now Microsoft Copilot), Microsoft’s AI-powered chatbot, which at launch compared a journalist to Hitler and insulted their appearance. As of October, ChatGPT and Bard, Google’s ChatGPT rival, were still giving outdated, racist medical advice. And the latest version of OpenAI’s image generator, DALL-E, shows signs of anglocentrism.

Suffice it to say that damage is being done in the pursuit of AI dominance—or, at the very least, Wall Street’s perception of AI superiority. Perhaps there is some optimism on the horizon with the approval of the EU’s AI legislation, which threatens sanctions for violations of specific AI guardrails. But the path ahead is undoubtedly lengthy.

Here are some additional notable AI stories from the last few days:

Predictions for AI in 2024: Devin lays out his expectations for AI in 2024, including how AI might affect the U.S. primary elections and what’s next for OpenAI, among other things.

Against pseudanthropy: Devin also made the case that AI should be forbidden from imitating human behavior.

Microsoft Copilot, Microsoft’s AI-powered chatbot, can now compose songs thanks to an integration with the GenAI music app Suno.

Rite Aid has been barred from using facial recognition technology for five years after the Federal Trade Commission determined that the drugstore chain’s “reckless use of facial surveillance systems” humiliated customers and put their “sensitive information at risk.”

The EU offers compute resources: The EU is expanding its plan to support homegrown AI startups by giving them access to processing power for model training on the bloc’s supercomputers, a program that was announced in September and launched last month.

OpenAI gives the board more power: To guard against the threat of harmful AI, OpenAI is expanding its internal safety processes. A new “safety advisory group” will sit above the technical teams and make recommendations to leadership, and the board has been granted veto power.

Ken Goldberg of UC Berkeley answers questions: Brian sat down with Ken Goldberg, a UC Berkeley professor, startup founder, and accomplished roboticist, for his monthly Actuator newsletter to talk about humanoid robots and broader trends in the robotics industry.

CIOs are under pressure to deliver the kinds of experiences people are getting when they play with ChatGPT online, but Ron argues that most are taking a deliberate, cautious approach to adopting the software for the enterprise.

A class action lawsuit filed by several news publishers accuses Google of “siphoning off” journalistic content through anticompetitive means, including AI technology such as Google’s Search Generative Experience (SGE) and its Bard chatbot.

OpenAI signs an agreement with Axel Springer: Speaking of publishers, OpenAI has signed a deal with Axel Springer, the Berlin-based owner of publications including Business Insider and Politico, to train its generative AI models on the publisher’s content and add recently published Axel Springer articles to ChatGPT.

Google expands the availability of Gemini: Google has incorporated its Gemini models into a broader range of products and services, including its Vertex AI managed AI dev platform and AI Studio, the company’s tool for creating AI-based chatbots and other similar experiences.

Additional machine learning
The strangest (and easiest to misread) research of the last week or two has to be Life2vec, a Danish study that combines countless data points from a person’s life to forecast what that person is like and, roughly, when they’ll die.

The research isn’t claiming oracular precision (say that three times fast), but rather that, since our lives are the sum of our experiences, those trajectories can be extrapolated to some degree using current machine learning methods. From childhood, education, employment, health, hobbies, and other attributes, it’s possible to predict not only whether someone is introverted or extroverted but also how those factors may affect life expectancy. We’re not quite at “precrime” levels yet, but insurance firms are salivating at the prospect of licensing this work.
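To make the framing concrete, here is a minimal, hypothetical sketch of the general “life events as a sequence” idea: encode events as tokens and feed them to a small transformer that predicts an outcome. The toy vocabulary, model shape, and outcome head are my own illustration, not Life2vec’s actual architecture or data.

```python
# Hypothetical sketch of the "life events as a sequence" idea, in PyTorch.
# This is NOT the Life2vec model -- just an illustration of the framing.
import torch
import torch.nn as nn

# Toy event vocabulary (invented for illustration).
VOCAB = {"<pad>": 0, "birth": 1, "school": 2, "job:nurse": 3,
         "diagnosis:asthma": 4, "moved_city": 5}

class LifeEventClassifier(nn.Module):
    def __init__(self, vocab_size, dim=64, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim, padding_idx=0)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)
        self.head = nn.Linear(dim, 1)  # probability of some outcome of interest

    def forward(self, tokens):
        x = self.encoder(self.embed(tokens))             # contextualize the event sequence
        return torch.sigmoid(self.head(x.mean(dim=1)))   # pool over events and predict

# One toy "life": a padded sequence of event tokens.
events = torch.tensor([[1, 2, 3, 4, 5, 0, 0, 0]])
model = LifeEventClassifier(len(VOCAB))
print(model(events))  # untrained, so the number is meaningless; structure only
```

The real study draws on vastly richer data and a far larger event vocabulary; the point of the sketch is only that, once a life is a token sequence, standard sequence models apply.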

Another notable claim comes from the CMU scientists behind Coscientist, an LLM-based assistant for researchers that can perform a sizable amount of lab work autonomously. It’s currently limited to certain domains of chemistry, but just like scientists, models like this will become specialists.

“The moment I saw a non-organic intelligence be able to autonomously plan, design, and execute a chemical reaction invented by humans, that was amazing,” said lead researcher Gabe Gomes. “It was one of those ‘holy crap’ moments.” It basically uses an LLM like GPT-4, fine-tuned on chemistry documents, to identify common reactions, reagents, and procedures and carry them out. So you don’t have to tell a lab tech to make four batches of a catalyst: the AI can do it, and you don’t even have to hold its hand.
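For flavor, here is a loose, hypothetical sketch of the plan-and-execute pattern that description implies; call_llm and run_on_lab_hardware are invented placeholders standing in for a chemistry-tuned model and the lab automation layer, not Coscientist’s actual interfaces.

```python
# Loose sketch of a plan-and-execute loop, as described above.
# call_llm() and run_on_lab_hardware() are placeholders, not Coscientist's API.

def call_llm(prompt: str) -> str:
    """Placeholder for a chemistry-tuned LLM; returns one instruction per line."""
    return "dispense 2 mL solvent into vial A\nheat vial A to 60 C for 10 min"

def run_on_lab_hardware(step: str) -> None:
    """Placeholder for driving liquid handlers, heater-shakers, and so on."""
    print(f"executing: {step}")

def synthesize(target: str) -> None:
    plan = call_llm(
        f"Plan the steps to synthesize {target} using the available reagents. "
        "Return one concrete instruction per line."
    )
    for step in plan.splitlines():        # carry out each planned step in order
        run_on_lab_hardware(step.strip())

synthesize("a simple test compound")
```

The interesting (and hard) parts, of course, are the fine-tuned model, the safety checks, and the real hardware drivers hiding behind those placeholders.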

Google’s AI researchers have also had a busy week, diving into a few intriguing frontier areas. FunSearch may sound like Google for kids, but it is actually short for function search, which, like Coscientist, can make and help make mathematical discoveries. Interestingly, to guard against hallucinations, this (like others lately) uses a matched pair of AI models, much like the “old” GAN architecture: one theorizes, the other evaluates.

FunSearch isn’t going to produce any groundbreaking new discoveries, but it can take what’s already out there and hone or reapply it; for example, a function that one domain uses but another is unaware of might be applied to improve an industry-standard algorithm.
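The propose-and-evaluate loop is easy to picture in miniature. Below is a toy sketch of that general pattern, not FunSearch itself: the “proposer” here just picks between two hand-written heuristics for a tiny knapsack-style problem, whereas in FunSearch an LLM rewrites candidate programs, and it is the automatic evaluator that keeps hallucinated code from surviving.

```python
# Toy sketch of a proposer/evaluator loop -- not FunSearch itself.
# propose_variant() stands in for the LLM that mutates candidate programs;
# score() is the automatic evaluator that filters out bad (or hallucinated) code.
import random

def score(func) -> float:
    """Evaluator: total value packed into a knapsack of capacity 10, else 0."""
    items = [(4, 5), (3, 4), (2, 3), (5, 6)]  # (weight, value) pairs
    chosen = func(items, capacity=10)
    weight = sum(w for w, _ in chosen)
    return sum(v for _, v in chosen) if weight <= 10 else 0.0

def greedy_by_value(items, capacity):
    return sorted(items, key=lambda x: -x[1])[:2]

def greedy_by_density(items, capacity):
    return sorted(items, key=lambda x: x[1] / x[0], reverse=True)[:3]

def propose_variant(_best):
    """Proposer: in FunSearch an LLM edits code; here it is a random pick."""
    return random.choice([greedy_by_value, greedy_by_density])

best, best_score = greedy_by_value, score(greedy_by_value)
for _ in range(20):            # evolve: propose, evaluate, keep the best so far
    candidate = propose_variant(best)
    candidate_score = score(candidate)
    if candidate_score > best_score:
        best, best_score = candidate, candidate_score
print(best.__name__, best_score)
```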

StyleDrop is a handy tool for people who want to replicate particular styles with generative imagery. The problem, as the researchers describe it, is that if you specify a style (say, “pastels”), the model has too many sub-styles of “pastels” to draw from, so the results can be unpredictable. StyleDrop lets you provide an example of the style you’re after, and the model bases its work on that; it’s essentially super-efficient fine-tuning.

The blog post and paper demonstrate that it’s rather robust, applying a style from any picture, whether it’s a photo, painting, cityscape, or cat portrait, to any other sort of image, even the alphabet (which is famously difficult for some reason).

Google is also making strides in the generative video space with VideoPoet, which uses an LLM base (as does everything these days… what else are you going to use?) to handle a variety of video tasks: converting text or images to video, extending or stylizing existing video, and so on. As every project in this space makes clear, the challenge isn’t simply producing a series of images that relate to one another, but keeping them coherent over longer stretches (like more than a second) and through large motions and changes.

VideoPoet seems to move the ball forward, but the results are still pretty strange. That’s how these things progress, though: first they’re inadequate, then they’re weird, then they’re uncanny. Presumably they leave uncanny behind at some point, but no one has really gotten there yet.

On the practical side of things, Swiss researchers have been applying AI models to measuring snow depth. Normally one would rely on weather stations, but those can be sparse, and we have all this lovely satellite data, right? Right. So the ETHZ team took public satellite imagery from the Sentinel-2 constellation, but as lead Konrad Schindler puts it, “just looking at the white bits on the satellite images doesn’t immediately tell us how deep the snow is.”

So they fed in terrain data for the whole country from their Federal Office of Topography (akin to our USGS) and trained the system to estimate snow depth based not just on the white bits in the imagery but also on ground truth data and patterns like how snow melts. The resulting tech is being commercialized by ExoLabs, which I intend to contact to learn more.
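As a rough illustration of that kind of setup, one could regress station-measured snow depth on a satellite-derived snow-cover signal plus terrain features. The sketch below does exactly that on synthetic data with scikit-learn; the feature set and the gradient-boosted regressor are my assumptions, not the ETHZ team’s actual model.

```python
# Rough illustration (not the ETHZ model): learn snow depth from a
# satellite-derived snow-cover signal plus terrain features, supervised by
# station measurements. Everything here is synthetic and assumed.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0, 1, n),        # snow-cover index from Sentinel-2 imagery
    rng.uniform(200, 4000, n),   # elevation from terrain data (m)
    rng.uniform(0, 45, n),       # slope (degrees)
    rng.uniform(0, 360, n),      # aspect (degrees)
])
# Synthetic stand-in for station-measured snow depth (cm).
y = 80 * X[:, 0] + 0.03 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on held-out points:", round(model.score(X_test, y_test), 3))
```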

However, as capable as the above applications are, Stanford cautions, none of them has much to do with human bias. When it comes to health, that suddenly becomes a big problem, and health is an area where many AI tools are being tested. Stanford researchers found that AI models perpetuate “old medical racial tropes.” Because GPT-4 has no way of knowing whether something is true, it can and does parrot old, disproven claims about certain groups, such as the assertion that Black people have lower lung capacity. Nope! Stay alert if you’re working with any kind of AI model in health and medicine.
