Khazen

From deepfakes to digital candidates: AI’s political play

by Gary Grossman — venturebeat — AI is increasingly being used to represent, or misrepresent, the opinions of historical and current figures. A recent example is the cloning of President Biden’s voice for a robocall to New Hampshire voters. Given the advancing capabilities of AI, the next step could be the symbolic “candidacy” of an AI-created persona. That may seem outlandish, but the technology to create such an AI political actor already exists, and many examples point to this possibility. Technologies that enable interactive and immersive learning experiences bring historical figures and concepts to life. When harnessed responsibly, these can not only demystify the past but also inspire a more informed and engaged citizenry.

People today can interact with chatbots reflecting the viewpoints of figures ranging from Marcus Aurelius to Martin Luther King, Jr., using the “Hello History” app, or George Washington and Albert Einstein through “Text with History.” These apps claim to help people better understand historical events or “just have fun chatting with your favorite historical characters.” Similarly, a Vincent van Gogh exhibit at Musée d’Orsay in Paris includes a digital version of the artist and offers viewers the opportunity to interact with his persona. Visitors can ask questions and the Vincent chatbot answers based on a training dataset of more than 800 of his letters. Forbes discusses other examples, including an interactive experience at a World War II museum that lets visitors converse with AI versions of military veterans.
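These exhibits and apps presumably pair a language model with a corpus of primary sources. As a rough, hedged sketch of one common way to do that (retrieval-augmented prompting; not necessarily how these particular products work), the Python below picks the letter excerpts most similar to a visitor’s question and folds them into a prompt. The excerpts, prompt wording and function names are placeholders, not anything from the products mentioned above.

```python
# Hedged sketch of a persona chatbot grounded in a document corpus (e.g., an
# artist's letters). The letters and prompt wording below are placeholders;
# this is not the code behind the exhibits or apps mentioned in the article.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

letters = [
    "I dream of painting and then I paint my dream.",   # placeholder excerpts
    "The sunflowers are mine, in a way.",
    "I put my heart and my soul into my work.",
]

vectorizer = TfidfVectorizer().fit(letters)
letter_vectors = vectorizer.transform(letters)

def build_prompt(question: str, top_k: int = 2) -> str:
    """Select the letter excerpts most similar to the question and prepend them."""
    scores = cosine_similarity(vectorizer.transform([question]), letter_vectors)[0]
    context = "\n".join(letters[i] for i in scores.argsort()[::-1][:top_k])
    return (
        "Answer in the voice of the letter writer, using only this context:\n"
        f"{context}\n\nQuestion: {question}"
    )

print(build_prompt("How do you feel about your work?"))
```

The resulting prompt would then be sent to a language model; grounding the answer in retrieved excerpts is what keeps the persona tied to the historical record rather than free invention.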

The concerning rise of deepfakes

Of course, this technology may also be used to clone both historical and current public figures with other intentions in mind, and in ways that raise ethical concerns. I am referring here to the deepfakes that are proliferating, making it difficult to separate real from fake and truth from falsehood, as the Biden clone example shows. Deepfake technology uses AI to create or manipulate still images, video and audio, making it possible to convincingly swap faces, synthesize speech, and fabricate or alter actions in video. It mixes and edits data from real images and videos to produce realistic-looking and -sounding creations that are increasingly difficult to distinguish from authentic content.

While there are legitimate educational and entertainment uses for these technologies, they are increasingly being used for less sanguine purposes. Worries abound that AI-generated deepfakes impersonating known figures could be used to manipulate public opinion and even alter elections.

The rise of political deepfakes

Just this month there have been stories about AI being used for such purposes. Imran Khan, Pakistan’s former prime minister, campaigned from jail through speeches created with AI to clone his voice. The approach worked: Khan’s party performed surprisingly well in the recent election. As The New York Times reported: “‘I had full confidence that you would all come out to vote. You fulfilled my faith in you, and your massive turnout has stunned everybody,’ the mellow, slightly robotic voice said in the minute-long video, which used historical images and footage of Mr. Khan and bore a disclaimer about its AI origins.”

Read more
What is Apple’s generative AI strategy?

by Ben Dickson — venturebeat — Apple was not mentioned much in the highlights of the 2023 generative AI race. But it has been doing impressive work in the field, without the publicity other tech companies have engaged in. In the past few months alone, Apple researchers have released papers, models and programming libraries with important implications for on-device generative AI. A closer look at these releases hints at where Apple is headed and where it will fit in the growing market for generative AI. Apple is not a hyperscaler and can’t build a business model on selling access to large language models (LLMs) running in the cloud. But it has the strongest vertical integration in the tech industry, with full control over its entire stack, from the operating system to the development tools down to the processors running in every Apple device. That places Apple in a unique position to optimize generative models for on-device inference, and the research papers it has released in recent months suggest it is making real progress.

In January, Apple released a paper titled “LLM in a flash,” which describes a technique for running LLMs on memory-constrained devices such as smartphones and laptops. The technique loads a part of the model in DRAM and keeps the rest in flash memory. It dynamically swaps model weights between flash memory and DRAM in a way that reduces memory consumption considerably while minimizing inference latency, especially when run on Apple silicon. Before “LLM in a flash,” Apple had released other papers that showed how the architecture of LLMs could be tweaked to reduce “inference computation up to three times… with minimal performance trade-offs.” On-device inference optimization techniques can become increasingly important as more developers explore building apps with small LLMs that can fit on consumer devices. Experiments show that a few hundredths of a second can have a considerable effect on the user experience. And Apple is making sure that its devices can provide the best balance between speed and quality.
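As a rough illustration of the flash-to-DRAM swapping idea the paper describes (this is not Apple’s implementation, whose actual loading strategy is more sophisticated), the hedged Python sketch below keeps a DRAM-budgeted cache of per-layer weights and pages layers in from “flash,” simulated here with .npy files on disk, only when the forward pass needs them. The file layout, memory budget and toy forward pass are all assumptions made for the example.

```python
# Minimal sketch of the general idea behind weight offloading: keep only a
# budgeted subset of model weights resident in DRAM and page the rest in from
# flash (here, plain .npy files) on demand. Purely illustrative, not Apple's code.

from collections import OrderedDict
from pathlib import Path
import numpy as np

WEIGHT_DIR = Path("weights")           # hypothetical per-layer weight files
DRAM_BUDGET_BYTES = 256 * 1024 * 1024  # pretend we can spare ~256 MB of DRAM


class FlashWeightCache:
    """LRU cache that swaps per-layer weights between flash and DRAM."""

    def __init__(self, budget_bytes: int):
        self.budget = budget_bytes
        self.used = 0
        self.cache: "OrderedDict[int, np.ndarray]" = OrderedDict()

    def get_layer(self, layer_idx: int) -> np.ndarray:
        if layer_idx in self.cache:                  # already resident in DRAM
            self.cache.move_to_end(layer_idx)
            return self.cache[layer_idx]
        weights = np.load(WEIGHT_DIR / f"layer_{layer_idx}.npy")  # read from flash
        self.used += weights.nbytes
        self.cache[layer_idx] = weights
        while self.used > self.budget and len(self.cache) > 1:   # evict cold layers
            _, evicted = self.cache.popitem(last=False)
            self.used -= evicted.nbytes
        return weights


def forward(x: np.ndarray, n_layers: int, cache: FlashWeightCache) -> np.ndarray:
    # Toy stand-in for a transformer forward pass: each "layer" is one matmul.
    for i in range(n_layers):
        w = cache.get_layer(i)   # loaded from flash only if not already in DRAM
        x = np.tanh(x @ w)
    return x


if __name__ == "__main__":
    # Create tiny placeholder weight files so the sketch runs end to end.
    WEIGHT_DIR.mkdir(exist_ok=True)
    n_layers, dim = 4, 8
    for i in range(n_layers):
        np.save(WEIGHT_DIR / f"layer_{i}.npy",
                np.random.randn(dim, dim).astype(np.float32))

    cache = FlashWeightCache(DRAM_BUDGET_BYTES)
    out = forward(np.random.randn(1, dim).astype(np.float32), n_layers, cache)
    print("output shape:", out.shape, "| layers resident in DRAM:", len(cache.cache))
```

A real system would likely swap at finer granularity than whole layers and try to predict which weights are needed next, but the budget-plus-eviction pattern above captures the core memory trade-off being described.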

Read more
Beauty pageant star Rumy Al-Qahtani seeks to share Saudi culture with the world

by arabnews.com — DUBAI: Saudi model Rumy Al-Qahtani is no stranger to the spotlight, having competed in a number of beauty pageants across the world — her most recent turn on the stage was at the Miss & Mrs Global Asian beauty pageant in Malaysia. Arab News spoke to the model to learn more […]

Read more
OpenAI study reveals surprising role of AI in future biological threat creation

by Michael Nuñez — venturebeat — OpenAI, the research organization behind the powerful language model GPT-4, has released a new study that examines the possibility of using AI to assist in creating biological threats. The study, which involved both biology experts and students, found that GPT-4 provides “at most a mild uplift” in biological threat creation accuracy compared to the baseline of existing resources on the internet. The study is part of OpenAI’s Preparedness Framework, which aims to assess and mitigate the potential risks of advanced AI capabilities, especially those that could pose “frontier risks” — unconventional threats that are not well understood or anticipated by society today. One such frontier risk is the ability of AI systems, such as large language models (LLMs), to help malicious actors develop and execute biological attacks, such as synthesizing pathogens or toxins.

To evaluate this risk, the researchers conducted a human evaluation with 100 participants: 50 biology experts with PhDs and professional wet-lab experience, and 50 student-level participants with at least one university-level course in biology. Participants in each group were randomly assigned to either a control group, which only had access to the internet, or a treatment group, which had access to GPT-4 in addition to the internet. Each participant was then asked to complete a set of tasks covering aspects of the end-to-end process for biological threat creation, such as ideation, acquisition, magnification, formulation, and release. The researchers measured the performance of the participants across five metrics: accuracy, completeness, innovation, time taken, and self-rated difficulty. They found that GPT-4 did not significantly improve the participants’ performance on any of the metrics, except for a slight increase in accuracy for the student-level group. The researchers also noted that GPT-4 often produced erroneous or misleading responses, which could hamper the biological threat creation process.
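For readers curious what comparing a control and a treatment group across metrics like these might look like in practice, here is a hedged sketch of an uplift comparison on synthetic scores. The numbers are made up, and OpenAI’s actual data and statistical methodology are not reproduced here, so this is purely illustrative of the study design described above.

```python
# Illustrative only: a minimal sketch of a control-vs-treatment "uplift"
# comparison like the one described above. The scores are synthetic; they are
# not OpenAI's data, and the study's actual statistics may differ.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
metrics = ["accuracy", "completeness", "innovation", "time_taken", "difficulty"]

# Hypothetical 1-10 ratings for 25 internet-only and 25 internet+GPT-4 participants.
control = {m: rng.normal(5.0, 1.5, size=25) for m in metrics}
treatment = {m: rng.normal(5.3, 1.5, size=25) for m in metrics}  # slight "uplift"

for m in metrics:
    diff = treatment[m].mean() - control[m].mean()
    t, p = stats.ttest_ind(treatment[m], control[m], equal_var=False)
    flag = "significant" if p < 0.05 else "not significant"
    print(f"{m:>12}: mean uplift {diff:+.2f} (p = {p:.3f}, {flag})")
```

With small per-group samples and modest mean differences, most metrics in a comparison like this would fail to reach significance, which is consistent with the “at most a mild uplift” framing of the findings.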

Read more