Swallow this robot: Endiatx’s tiny pill examines your body with cameras, sensors

by VentureBeat — In a development straight out of science fiction, Endiatx, a pioneering medical technology company, is making significant strides in bringing its robotic pill to market. The company’s CEO, Torrey Smith, recently sat down with VentureBeat to share exciting updates on their progress, nearly two years after our initial coverage of the startup’s ambitious vision.

Founded in 2019, Endiatx has been steadily working toward realizing the fantastic voyage of miniaturized robots navigating the human body for diagnostic and therapeutic purposes. Its flagship product, the PillBot, is an ingestible robotic capsule equipped with cameras, sensors, and wireless communication capabilities, allowing doctors to examine the gastrointestinal tract with unprecedented precision and control.

In the interview, Smith revealed that Endiatx has raised $7 million in funding to date, with the largest investment of $1.5 million coming from Singapore-based Verge Health Tech Fund. This injection of capital has propelled the company forward, enabling it to refine its technology and conduct clinical trials.

“We’re currently in clinical trials with our pill bot technology,” Smith explained. “We’ll be starting pivotal trials at a leading U.S. medical institution in Q3/Q4.” Though Smith did not name the institution due to confidentiality agreements, he hinted that it is a renowned facility known for its expertise in gastroenterology.

The PillBot has come a long way since its inception. The current prototype measures just 13mm by 30mm and boasts impressive capabilities. “It can transmit high-res video at 2.3 megapixels per second, and we have plans to quadruple that video quality soon,” Smith enthused. The CEO himself has played a vital role in testing, having swallowed 43 PillBots to date, including live on stage in front of a stunned audience.

Read more
Harvard, MIT, and Wharton research reveals pitfalls of relying on junior staff for AI training

by Michael Nuñez @MichaelFNunez — VentureBeat — As companies race to adopt artificial intelligence systems, conventional wisdom suggests that younger, more tech-savvy employees will take the lead in teaching their managers how to effectively use the powerful new tools. But a new study casts doubt on that assumption when it comes to the rapidly evolving technology of generative AI.

The research, conducted by academics from Harvard Business School, MIT, Wharton, and other institutions in collaboration with Boston Consulting Group, found that junior employees who experimented with a generative AI system made recommendations for mitigating risks that ran counter to expert advice. The findings suggest that companies cannot rely solely on reverse mentoring to ensure the responsible use of AI. “Our interviews revealed two findings that run counter to the existing literature,” wrote the authors. “First, the tactics that the juniors recommended to mitigate their seniors’ concerns ran counter to those recommended by experts in GenAI technology at the time, and so revealed that the junior professionals might not be the best source of expertise in the effective use of this emerging technology for more senior members.”

Junior consultants struggle with AI risk mitigation in GPT-4 experiment

The researchers interviewed 78 junior consultants in mid-2023 who had recently participated in an experiment giving them access to GPT-4, a powerful generative AI system, for a business problem-solving task. The consultants, who lacked technical AI expertise, shared tactics they would recommend to alleviate managers’ concerns about risks. But the study found the junior employees’ risk mitigation tactics were often grounded in “a lack of deep understanding of the emerging technology’s capabilities,” focused on changing human behavior rather than AI system design, and centered on project-level interventions rather than organization or industry-wide solutions.

Navigating the challenges of generative AI adoption in business

Read more
Five ways criminals are using AI

Artificial intelligence has brought a big boost in productivity—to the criminal underworld.


Generative AI provides a new, powerful tool kit that allows malicious actors to work far more efficiently and internationally than ever before, says Vincenzo Ciancaglini, a senior threat researcher at the security company Trend Micro. Most criminals are “not living in some dark lair and plotting things,” says Ciancaglini. “Most of them are regular folks that carry on regular activities that require productivity as well.”

Last year saw the rise and fall of WormGPT, an AI language model built on top of an open-source model and trained on malware-related data, which was created to assist hackers and had no ethical rules or restrictions. But last summer, its creators announced they were shutting the model down after it started attracting media attention. Since then, cybercriminals have mostly stopped developing their own AI models. Instead, they are opting for tricks with existing tools that work reliably. That’s because criminals want an easy life and quick gains, Ciancaglini explains. For any new technology to be worth the unknown risks associated with adopting it—for example, a higher risk of getting caught—it has to be better and bring higher rewards than what they’re currently using.

Here are five ways criminals are using AI now.


The biggest use case for generative AI among criminals right now is phishing, which involves trying to trick people into revealing sensitive information that can be used for malicious purposes, says Mislav Balunović, an AI security researcher at ETH Zurich. Researchers have found that the rise of ChatGPT has been accompanied by a huge spike in the number of phishing emails. Spam-generating services, such as GoMail Pro, have ChatGPT integrated into them, which allows criminal users to translate or improve the messages sent to victims, says Ciancaglini.

OpenAI’s policies restrict people from using their products for illegal activities, but that is difficult to police in practice, because many innocent-sounding prompts could be used for malicious purposes too, says Ciancaglini. OpenAI says it uses a mix of human reviewers and automated systems to identify and enforce against misuse of its models, and issues warnings, temporary suspensions, and bans if users violate the company’s policies. “We take the safety of our products seriously and are continually improving our safety measures based on how people use our products,” a spokesperson for OpenAI told us. “We are constantly working to make our models safer and more robust against abuse and jailbreaks, while also maintaining the models’ usefulness and task performance,” they added.

Read more
Apple announces new accessibility features, including Eye Tracking, Music Haptics, and Vocal Shortcuts

CUPERTINO, CALIFORNIA — Apple today announced new accessibility features coming later this year, including Eye Tracking, a way for users with physical disabilities to control iPad or iPhone with their eyes. Additionally, Music Haptics will offer a new way for users who are deaf or hard of hearing to experience music using the Taptic Engine in iPhone; Vocal Shortcuts will allow users to perform tasks by making a custom sound; Vehicle Motion Cues can help reduce motion sickness when using iPhone or iPad in a moving vehicle; and more accessibility features will come to visionOS.

These features combine the power of Apple hardware and software, harnessing Apple silicon, artificial intelligence, and machine learning to further Apple’s decades-long commitment to designing products for everyone.

“We believe deeply in the transformative power of innovation to enrich lives,” said Tim Cook, Apple’s CEO. “That’s why for nearly 40 years, Apple has championed inclusive design by embedding accessibility at the core of our hardware and software. We’re continuously pushing the boundaries of technology, and these new features reflect our long-standing commitment to delivering the best possible experience to all of our users.”

“Each year, we break new ground when it comes to accessibility,” said Sarah Herrlinger, Apple’s senior director of Global Accessibility Policy and Initiatives. “These new features will make an impact in the lives of a wide range of users, providing new ways to communicate, control their devices, and move through the world.”

Eye Tracking Comes to iPad and iPhone

Powered by artificial intelligence, Eye Tracking gives users a built-in option for navigating iPad and iPhone with just their eyes. Designed for users with physical disabilities, Eye Tracking uses the front-facing camera to set up and calibrate in seconds, and with on-device machine learning, all data used to set up and control this feature is kept securely on device, and isn’t shared with Apple. Eye Tracking works across iPadOS and iOS apps, and doesn’t require additional hardware or accessories. With Eye Tracking, users can navigate through the elements of an app and use Dwell Control to activate each element, accessing additional functions such as physical buttons, swipes, and other gestures solely with their eyes.
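The dwell interaction described in the passage above (the gaze rests on an element for a set duration, which then activates it) can be sketched in a few lines. This is an illustrative sketch of the general technique only; the class name, thresholds, and timing logic are hypothetical assumptions, not Apple’s implementation of Dwell Control.

```python
import time

DWELL_SECONDS = 1.0   # hypothetical: how long the gaze must rest before activating
RADIUS_PX = 40        # hypothetical: how far the gaze may drift and still count as resting

class DwellTracker:
    """Toy dwell-activation logic: fires when gaze samples stay put long enough."""

    def __init__(self, dwell_seconds=DWELL_SECONDS, radius_px=RADIUS_PX):
        self.dwell_seconds = dwell_seconds
        self.radius_px = radius_px
        self.anchor = None       # (x, y) where the current dwell started
        self.start_time = None   # timestamp when the current dwell started

    def update(self, x, y, now=None):
        """Feed one gaze sample; return True when a dwell completes."""
        now = time.monotonic() if now is None else now
        if self.anchor is None or self._drifted(x, y):
            # Gaze moved to a new spot: restart the dwell timer there.
            self.anchor, self.start_time = (x, y), now
            return False
        if now - self.start_time >= self.dwell_seconds:
            self.anchor, self.start_time = None, None  # reset after firing
            return True
        return False

    def _drifted(self, x, y):
        ax, ay = self.anchor
        return (x - ax) ** 2 + (y - ay) ** 2 > self.radius_px ** 2
```

A real implementation would additionally map the gaze point to on-screen UI elements and smooth the noisy samples a camera-based tracker produces; this sketch only shows the timing core of the interaction.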

Read more
From deepfakes to digital candidates: AI’s political play

by Gary Grossman — VentureBeat — AI is increasingly being used to represent, or misrepresent, the opinions of historical and current figures. A recent example is when President Biden’s voice was cloned and used in a robocall to New Hampshire voters. Taking this a step further, given the advancing capabilities of AI, what could soon be possible is the symbolic “candidacy” of a persona created by AI. That may seem outlandish, but the technology to create such an AI political actor already exists. There are many examples that point to this possibility. Technologies that enable interactive and immersive learning experiences bring historical figures and concepts to life. When harnessed responsibly, these can not only demystify the past but inspire a more informed and engaged citizenry.

People today can interact with chatbots reflecting the viewpoints of figures ranging from Marcus Aurelius to Martin Luther King, Jr., using the “Hello History” app, or George Washington and Albert Einstein through “Text with History.” These apps claim to help people better understand historical events or “just have fun chatting with your favorite historical characters.” Similarly, a Vincent van Gogh exhibit at Musée d’Orsay in Paris includes a digital version of the artist and offers viewers the opportunity to interact with his persona. Visitors can ask questions and the Vincent chatbot answers based on a training dataset of more than 800 of his letters. Forbes discusses other examples, including an interactive experience at a World War II museum that lets visitors converse with AI versions of military veterans.

The concerning rise of deepfakes

Of course, this technology may also be used to clone both historical and current public figures with other intentions in mind, and in ways that raise ethical concerns. I am referring here to the deepfakes that are increasingly proliferating, making it difficult to separate real from fake and truth from falsehood, as noted in the Biden clone example. Deepfake technology uses AI to create or manipulate still images, video and audio content, making it possible to convincingly swap faces, synthesize speech, and fabricate or alter actions in videos. This technology mixes and edits data from real images and videos to produce realistic-looking and realistic-sounding creations that are increasingly difficult to distinguish from authentic content.

While there are legitimate educational and entertainment uses for these technologies, they are increasingly being used for less benign purposes. Worries abound about the potential of AI-generated deepfakes that impersonate known figures to manipulate public opinion and potentially alter elections.

The rise of political deepfakes

Just this month there have been stories about AI being used for such purposes. Imran Khan, Pakistan’s former prime minister, effectively campaigned from jail through speeches created with AI to clone his voice. This was effective, as Khan’s party performed surprisingly well in a recent election. As written in The New York Times: “‘I had full confidence that you would all come out to vote. You fulfilled my faith in you, and your massive turnout has stunned everybody,’ the mellow, slightly robotic voice said in the minute-long video, which used historical images and footage of Mr. Khan and bore a disclaimer about its AI origins.”

Read more
What is Apple’s generative AI strategy?

by Ben Dickson — VentureBeat — Apple was not mentioned much in the highlights of the 2023 generative AI race. However, it has been doing some impressive work in the field, minus the publicity that other tech companies have engaged in. In the past few months alone, Apple researchers have released papers, models and programming libraries that can have important implications for on-device generative AI. A closer look at these releases might give a hint at where Apple is headed and where it will fit in the growing market for generative AI.

Apple is not a hyperscaler and can’t build a business model based on selling access to large language models (LLMs) running in the cloud. But it has the strongest vertical integration in the tech industry, with full control over its entire stack, from the operating system to the development tools and down to the processors running in every Apple device. This places Apple in a unique position to optimize generative models for on-device inference. The company has been making great progress in the field, according to some of the research papers it has released in recent months.

In January, Apple released a paper titled “LLM in a flash,” which describes a technique for running LLMs on memory-constrained devices such as smartphones and laptops. The technique loads a part of the model into DRAM and keeps the rest in flash memory. It dynamically swaps model weights between flash memory and DRAM in a way that reduces memory consumption considerably while minimizing inference latency, especially when run on Apple silicon.

Before “LLM in a flash,” Apple had released other papers that showed how the architecture of LLMs could be tweaked to reduce “inference computation up to three times… with minimal performance trade-offs.” On-device inference optimization techniques could become increasingly important as more developers explore building apps with small LLMs that can fit on consumer devices. Experiments show that a few hundredths of a second can have a considerable effect on the user experience, and Apple is making sure that its devices can provide the best balance between speed and quality.
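The DRAM/flash swapping idea can be illustrated with a toy cache: keep a small working set of layer weights in fast memory and page the rest in from slower storage on demand, evicting the least recently used layer. Everything below (the `WeightCache` class, the capacity, the whole-layer granularity) is a simplified assumption for illustration; the paper’s actual method is considerably more sophisticated.

```python
from collections import OrderedDict

class WeightCache:
    """Toy sketch of paging model weights between slow storage and fast memory."""

    def __init__(self, load_from_flash, capacity=2):
        self.load_from_flash = load_from_flash  # stand-in for a slow flash read
        self.capacity = capacity                # max layers resident in "DRAM"
        self.dram = OrderedDict()               # layer_id -> weights, in LRU order
        self.flash_reads = 0                    # count of slow loads performed

    def get(self, layer_id):
        if layer_id in self.dram:
            self.dram.move_to_end(layer_id)     # hit: mark as recently used
            return self.dram[layer_id]
        # Miss: pay the cost of a slow flash read, then cache the weights.
        self.flash_reads += 1
        weights = self.load_from_flash(layer_id)
        self.dram[layer_id] = weights
        if len(self.dram) > self.capacity:
            self.dram.popitem(last=False)       # evict least recently used layer
        return weights
```

In this sketch the trade-off is between resident memory (the cache capacity) and the number of slow reads, which is the same shape of trade-off the paper’s technique manages at a much finer granularity than whole layers.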

Read more
Beauty pageant star Rumy Al-Qahtani seeks to share Saudi culture with the world

DUBAI: Saudi model Rumy Al-Qahtani is no stranger to the spotlight, having competed in a number of beauty pageants across the world — her most recent turn on the stage was at the Miss & Mrs Global Asian beauty pageant in Malaysia. Arab News spoke to the model to learn more […]

Read more
OpenAI study reveals surprising role of AI in future biological threat creation

by Michael Nuñez @MichaelFNunez — January 31, 2024 — OpenAI, the research organization behind the powerful language model GPT-4, has released a new study that examines the possibility of using AI to assist in creating biological threats. The study, which involved both biology experts and students, found that GPT-4 provides “at most a mild uplift” in biological threat creation accuracy, compared to the baseline of existing resources on the internet.

The study is part of OpenAI’s Preparedness Framework, which aims to assess and mitigate the potential risks of advanced AI capabilities, especially those that could pose “frontier risks” — unconventional threats that are not well understood or anticipated by current society. One such frontier risk is the ability of AI systems, such as large language models (LLMs), to help malicious actors develop and execute biological attacks, such as synthesizing pathogens or toxins.


To evaluate this risk, the researchers conducted a human evaluation with 100 participants: 50 biology experts with PhDs and professional wet lab experience, and 50 student-level participants with at least one university-level course in biology. Within each group, participants were randomly assigned to either a control group, which only had access to the internet, or a treatment group, which had access to GPT-4 in addition to the internet. Each participant was then asked to complete a set of tasks covering aspects of the end-to-end process for biological threat creation, such as ideation, acquisition, magnification, formulation, and release.

The researchers measured the performance of the participants across five metrics: accuracy, completeness, innovation, time taken, and self-rated difficulty. They found that GPT-4 did not significantly improve the performance of the participants in any of the metrics, except for a slight increase in accuracy for the student-level group. The researchers also noted that GPT-4 often produced erroneous or misleading responses, which could hamper the biological threat creation process.
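The study design described above (random assignment to control and treatment arms, then a per-metric comparison of group means) can be sketched as follows. The function names and the scores in the usage example are illustrative assumptions with synthetic data, not OpenAI’s code or results.

```python
import random
import statistics

# The five metrics named in the study.
METRICS = ["accuracy", "completeness", "innovation", "time_taken", "difficulty"]

def assign_groups(participant_ids, seed=0):
    """Randomly split participants 50/50 into control and treatment arms."""
    rng = random.Random(seed)       # fixed seed makes the split reproducible
    shuffled = participant_ids[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def mean_uplift(scores_control, scores_treatment):
    """Per-metric difference of mean scores (treatment minus control)."""
    return {
        m: statistics.mean(s[m] for s in scores_treatment)
           - statistics.mean(s[m] for s in scores_control)
        for m in METRICS
    }
```

A difference of means alone does not establish significance; the study’s “mild uplift” conclusion rests on statistical testing of such differences, which this sketch omits.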

Read more
People are worried that AI will take everyone’s jobs. We’ve been here before

by David Rotman — MIT Technology Review — It was 1938, and the pain of the Great Depression was still very real. Unemployment in the US was around 20%. Everyone was worried about jobs. In 1930, the prominent British economist John Maynard Keynes had warned that we were “being afflicted with a new disease” called technological unemployment. Labor-saving advances, he wrote, were “outrunning the pace at which we can find new uses for labour.”

There seemed to be examples everywhere. New machinery was transforming factories and farms. Mechanical switching being adopted by the nation’s telephone network was wiping out the need for local phone operators, one of the most common jobs for young American women in the early 20th century. Were the impressive technological achievements that were making life easier for many also destroying jobs and wreaking havoc on the economy?

To make sense of it all, Karl T. Compton, the president of MIT from 1930 to 1948 and one of the leading scientists of the day, wrote in the December 1938 issue of this publication about the “Bogey of Technological Unemployment.” How, began Compton, should we think about the debate over technological unemployment—“the loss of work due to obsolescence of an industry or use of machines to replace workmen or increase their per capita production”? He then posed this question: “Are machines the genii which spring from Aladdin’s Lamp of Science to supply every need and desire of man, or are they Frankenstein monsters which will destroy man who created them?” Compton signaled that he’d take a more grounded view: “I shall only try to summarize the situation as I see it.”

His essay concisely framed the debate over jobs and technical progress in a way that remains relevant, especially given today’s fears over the impact of artificial intelligence. Impressive recent breakthroughs in generative AI, smart robots, and driverless cars are again leading many to worry that advanced technologies will replace human workers and decrease the overall demand for labor. Some leading Silicon Valley techno-optimists even postulate that we’re headed toward a jobless future where everything can be done by AI. While today’s technologies certainly look very different from those of the 1930s, Compton’s article is a worthwhile reminder that worries over the future of jobs are not new and are best addressed by applying an understanding of economics, rather than conjuring up genies and monsters.

Uneven impacts

Read more
Three technology trends shaping 2024’s elections

This article is from The Technocrat, MIT Technology Review‘s weekly tech policy newsletter about power, politics, and Silicon Valley.

The Iowa caucuses on January 15 officially kicked off the 2024 presidential election. I’ve said it before and I’ll say it again—the biggest story of this year will be elections in the US and all around the globe. Over 40 national contests are scheduled, making 2024 one of the most consequential electoral years in history.

While tech has played a major role in campaigns and political discourse over the past 15 years or so, and candidates and political parties have long tried to make use of big data to learn about and target voters, the past offers limited insight into where we are now. The ground is shifting incredibly quickly at technology’s intersection with business, information, and media. So this week I want to run down three of the most important technology trends in the election space that you should stay on top of. Here we go!

Generative AI

Perhaps unsurprisingly, generative AI takes the top spot on our list. Without a doubt, AI that generates text or images will turbocharge political misinformation. We can’t yet be sure just how this will manifest; as I wrote in a story about a recent report from Freedom House, “Venezuelan state media outlets, for example, spread pro-government messages through AI-generated videos of news anchors from a nonexistent international English-language channel; they were produced by Synthesia, a company that produces custom deepfakes. And in the United States, AI-manipulated videos and images of political leaders have made the rounds on social media.” This includes incidents like a video that was manipulated to show President Biden making transphobic comments and a fake image of Donald Trump hugging Anthony Fauci. It’s not hard to imagine how this kind of thing could change a voter’s choice or discourage people from voting at all. Just look at how presidential candidates in Argentina used AI during the 2023 campaign.

Read more