Pope called Cardinal Burke his ‘enemy’ and threatened to strip him of privileges, reports claim
Contrary to reports, OpenAI probably isn’t building humanity-threatening AI
By Thomas Colsy — Catholic Herald — Pope Francis has referred to an outspoken American cardinal as his “enemy” and threatened to strip him of his privileges, according to reports from Italy. The New Daily Compass, which claims the rumour has been confirmed by multiple sources in the Vatican, reported that the Pope was overheard […]
A reading list for AI founders looking ahead to 2024
by Kyle Wiggers — @kyle_l_wiggers — TechCrunch — Has OpenAI invented an AI technology with the potential to “threaten humanity”? From some of the recent headlines, you might be inclined to think so. Reuters and The Information first reported last week that several OpenAI staff members had, in a letter to the AI startup’s board of directors, flagged the “prowess” and “potential danger” of an internal research project known as “Q*.” This AI project, according to the reporting, could solve certain math problems — albeit only at grade-school level — but in the researchers’ opinion had a chance of building toward an elusive technical breakthrough. There is now debate as to whether OpenAI’s board ever received such a letter — The Verge cites a source suggesting that it didn’t. But reporting questions aside, Q* might not actually be as monumental — or threatening — as it sounds. It might not even be new. AI researchers on X (formerly Twitter), including Meta’s chief AI scientist Yann LeCun, were immediately skeptical that Q* was anything more than an extension of existing work at OpenAI — and at other AI research labs besides. In a post on X, Rick Lamers, who writes the Substack newsletter Coding with Intelligence, pointed to an MIT guest lecture that OpenAI co-founder John Schulman gave seven years ago, during which he described a mathematical function called “Q*.”
Several researchers believe the “Q” in the name “Q*” refers to “Q-learning,” an AI technique that helps a model learn and improve at a particular task by taking — and being rewarded for — specific “correct” actions. Researchers say the asterisk, meanwhile, could be a reference to A*, an algorithm for checking the nodes that make up a graph and exploring the routes between these nodes. Both have been around a while. Google DeepMind applied Q-learning to build an AI algorithm that could play Atari 2600 games at human level… in 2014. A* has its origins in an academic paper published in 1968. And researchers at UC Irvine several years ago explored improving A* with Q-learning — which might be exactly what OpenAI’s now pursuing.
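For readers unfamiliar with the second half of that lineage, here is a minimal sketch of classic A* search, the decades-old pathfinding algorithm the asterisk may allude to, run on a toy grid. The grid size, coordinates, and wall layout are assumptions invented purely for illustration; this has nothing to do with OpenAI's actual project.

```python
import heapq

def a_star(start, goal, walls, size=4):
    """Return the shortest path length on a size x size grid with
    unit-cost moves, or None if the goal is unreachable."""
    def h(p):
        # Admissible heuristic: Manhattan distance to the goal.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries are (f = g + h, g, node)
    best_g = {start: 0}                 # cheapest known cost to each node
    while open_heap:
        f, g, (x, y) = heapq.heappop(open_heap)
        if (x, y) == goal:
            return g
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if not (0 <= nx < size and 0 <= ny < size) or nxt in walls:
                continue
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                # Nodes are expanded in order of f, so the first pop of
                # the goal is guaranteed to be a shortest path.
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt))
    return None

open_path = a_star((0, 0), (3, 3), walls=set())                       # 6
blocked = a_star((0, 0), (3, 3), walls={(1, 0), (1, 1), (1, 2), (1, 3)})  # None
```

On an empty 4x4 grid the corner-to-corner path takes six moves; a wall spanning an entire column makes the goal unreachable, and the search reports that by returning None.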
Nathan Lambert, a research scientist at the Allen Institute for AI, told TechCrunch he believes that Q* is connected to approaches in AI “mostly [for] studying high school math problems” — not destroying humanity. “OpenAI even shared work earlier this year improving the mathematical reasoning of language models with a technique called process reward models,” Lambert said, “but what remains to be seen is how better math abilities do anything other than make [OpenAI’s AI-powered chatbot] ChatGPT a better code assistant.”
Mapped: The Migration of the World’s Millionaires in 2023
Here’s a short list of posts for AI founders looking ahead to 2024: Startups must add AI value beyond ChatGPT integration: One criticism of startups that claim the mantle of AI is that they are creating thin wrappers around other folks’ technology. This sort of platform risk is not new, but it is a pertinent mental model […]
The Top 2 Artificial Intelligence (AI) Companies Revolutionizing the Industry Right Now
The world’s millionaires are on the move, and their migration patterns are shifting. In 2023, 122,000 high net worth individuals (HNWIs) are expected to move to a new country, with Australia reclaiming the top spot as the most popular destination. The United Arab Emirates, Singapore, the United States, and Switzerland round out the top five countries for HNWI inflows. At the other end of the spectrum, China is expected to lose the most HNWIs in 2023, with 13,500 millionaires leaving the country. India, the United Kingdom, Russia, and Brazil follow closely behind.
Why are millionaires moving? The reasons vary, but economic freedom, tax burdens, and investment opportunities are key factors. Singapore, which boasts the highest level of economic freedom in the world, is a popular destination for HNWIs. Greece, despite its economic challenges, is also expected to see a significant influx of millionaires due to its golden visa program.
The impact of HNWI migration goes beyond the economic. It also has geopolitical implications, as governments compete to attract and retain the world’s economic elite.
The Biggest Questions: What is death?
by Motley Fool — Artificial intelligence (AI) and machine learning (ML) are more than just buzzworthy terms for some cutting-edge companies. They are the foundations on which incredible businesses have been built. Even better, some of these companies make hay in industries essential to the economy. Cybersecurity is top of mind for C-suite executives in all industries, government agencies, school districts, and even nonprofits. Cybercriminals are always on the prowl, costing organizations billions each year. IBM notes that up to 90% of cyberattacks and 70% of breaches come through endpoint devices. AI-powered CrowdStrike Holdings (NASDAQ: CRWD) is the leader in endpoint security with a comprehensive, entirely cloud-based platform. The company’s results are on fire, as I’ll discuss below. Meanwhile, data centers are crucial for cloud applications, data storage, computing power, and (definitely) complex AI and ML software that require massive computing power. Nvidia (NASDAQ: NVDA) is light-years ahead of its competition, and its data center software and hardware are mission critical. This is why its data center revenue rose 171% year over year last quarter to $10.32 billion.
CrowdStrike is firing on all cylinders
CrowdStrike provides comprehensive security with its Falcon platform. The advantages are several: Falcon is cloud-native (no on-premises hardware required), customizable, and uses AI to analyze data and provide real-time protection. The platform is modular, so customers can choose which modules they want or need. This plays into CrowdStrike’s land-and-expand strategy: It gains a customer, proves the platform’s worth, and then the customer adds more modules — creating more revenue. This shows up in the company’s dollar-based net retention rate (DBNR), which has stayed above 120% dating back to the first quarter of fiscal 2019. DBNR measures the year-over-year change in revenue from the same cohort of existing customers; above 100% is good, and above 120% is excellent. You can probably guess how the chart of annual recurring revenue (ARR) growth looks: The meteoric rise to $2.9 billion in ARR has enabled CrowdStrike to generate $416 million in free cash flow through the second quarter of fiscal 2024 and stack up $3.2 billion in cash against $742 million in long-term debt. Having cash on hand to fund growth is crucial in this environment, and the company likely won’t have to borrow money at unfavorable interest rates.
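To make the retention metric concrete, here is a small illustrative calculation. The dollar figures and the churn/upsell split are made up for the example; they are not CrowdStrike's actual cohort data.

```python
def dollar_based_net_retention(arr_year_ago, arr_now_same_cohort):
    """DBNR compares revenue from the *same* customers a year apart.

    New customers signed during the year are excluded, so the ratio
    isolates expansion (upsells) net of contraction and churn.
    """
    return arr_now_same_cohort / arr_year_ago

# Hypothetical cohort: $100M of ARR a year ago; $30M of upsells minus
# $5M of churn leaves $125M today, i.e. a DBNR of 125%.
dbnr = dollar_based_net_retention(100.0, 100.0 + 30.0 - 5.0)
print(f"{dbnr:.0%}")  # 125%
```

Any result above 100% means the existing customer base alone grew revenue before counting a single new logo, which is why a sustained 120%+ figure is considered excellent.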
Sam Altman’s return to OpenAI highlights urgent need for trust and diversity
MIT Technology Review by Rachel Nuwer — Just as birth certificates note the time we enter the world, death certificates mark the moment we exit it. This practice reflects traditional notions about life and death as binaries. We are here until, suddenly, like a light switched off, we are gone. But while this idea of death is pervasive, evidence is building that it is an outdated social construct, not really grounded in biology. Dying is in fact a process—one with no clear point demarcating the threshold across which someone cannot come back. Scientists and many doctors have already embraced this more nuanced understanding of death. As society catches up, the implications for the living could be profound. “There is potential for many people to be revived again,” says Sam Parnia, director of critical care and resuscitation research at NYU Langone Health.
Neuroscientists, for example, are learning that the brain can survive surprising levels of oxygen deprivation. This means the window of time that doctors have to reverse the death process could someday be extended. Other organs likewise seem to be recoverable for much longer than is reflected in current medical practice, opening up possibilities for expanding the availability of organ donations. To do so, though, we need to reconsider how we conceive of and approach life and death. Rather than thinking of death as an event from which one cannot recover, Parnia says, we should instead view it as a transient process of oxygen deprivation that has the potential to become irreversible if enough time passes or medical interventions fail. If we adopt this mindset about death, Parnia says, “then suddenly, everyone will say, ‘Let’s treat it.’”
Microsoft emerges as ultimate winner in OpenAI power struggle, shares jump 1 percent
by Matt Marshall — @mmarshall — VentureBeat — OpenAI’s announcement last night apparently resolved the saga that has beset it for the last five days: It is bringing back Sam Altman as CEO, and it has agreed on three initial board members, with more to come. However, as more details emerge from sources about what set off the chaos at the company in the first place, it’s clear the company needs to shore up a trust issue that may bedevil Altman as a result of his recent actions at the company.
It’s also not clear how OpenAI intends to clean up remaining thorny governance issues, including its board structure and mandate, which have become confusing and even contradictory. For enterprise decision makers watching this saga and wondering what it means for them, and for OpenAI’s credibility going forward, it’s worth looking at the details of how we got here. After doing so, here’s where I’ve come out: The outcome, at least as it looks right now, heralds OpenAI’s continued shift toward a more aggressive stance as a product-oriented business. I predict that OpenAI’s position as a serious contender in providing full-service AI products for enterprises, a role that demands trust and optimal safety, may diminish. However, its language models, specifically ChatGPT and GPT-4, will likely remain highly popular among developers and continue to be used as APIs in a wide range of AI products.
More on that in a second, but first a look at the trust factor that hangs over the company, and how it needs to be dealt with. The good news is that the company has made strong headway by appointing some very credible initial board members, Bret Taylor and Lawrence Summers, and putting some strong guardrails in place. The outgoing board has insisted that an investigation be conducted into Altman’s leadership, has blocked Altman and his co-founder Greg Brockman from returning to the board, and has insisted that new board members be strong enough to stand up to Altman, according to The New York Times.
Sam Altman’s Counter-Rebellion Leaves OpenAI Leadership Hanging in the Balance
by Matt Marshall — venturebeat.com — In a masterful display of urgent, instinctive leadership, Microsoft CEO Satya Nadella personally jumped into the chaos that engulfed the leading generative AI company, OpenAI, over the weekend, and came out with as much as he possibly could have. Working for hours over the weekend, he negotiated a deal that brings Sam Altman, the ousted OpenAI CEO, over to head up a new subsidiary within Microsoft focused on AI innovation – a group that will also include OpenAI co-founder Greg Brockman and other departing employees who supported Altman’s strategy.
On the face of it, this is a huge win for Microsoft, because it gets Altman’s growth DNA in the hottest area of tech: generative AI. Altman and Brockman represented the hard-charging, growth-minded product side of OpenAI’s business. OpenAI had been raising money at terms valuing the company at between $80 billion and $90 billion, meaning Microsoft would have had to pay tens, if not hundreds, of billions of dollars to acquire OpenAI if it ever wanted to. Now, Microsoft is getting OpenAI’s main assets (its brains), and the OpenAI models will probably follow – all presumably at a massive discount. What a bargain, right? That’s what the stock market thought. Microsoft’s shares jumped by more than 1 percent at the opening of trading Monday morning, valuing the giant at a record $2.78 trillion. Well, let’s see. Until this morning, it looked like OpenAI would remain a functioning company, hell-bent on pursuing safe generative AI. That would have presented a stable picture, with Microsoft owning a meaningful stake in that company too.
But as of Monday morning, everything was suddenly in flux again. The vast majority of remaining OpenAI employees have reportedly signed a letter, sent early this morning to the company’s board, saying they may quit unless the board resigns and reinstates Altman and Brockman. If the remaining board members decide to resign, it’s possible that Altman and Brockman will return and lead OpenAI toward even more aggressive growth, without the constraints of the safety-focused board. That could still be very good for Microsoft, given that it is the largest investor and participates in any profits that OpenAI throws off. It’s also true that OpenAI’s strong growth and speed in generative AI may create some tension with Microsoft, which is also seeking to be a leader in enterprise generative AI. However, tension isn’t all bad, and the existing partnership still gives Microsoft a lot of access to technology and know-how. This would certainly give Microsoft a leg up in the competition against Amazon AWS and Google in providing powerful AI-infused cloud solutions to enterprise companies.
Can Abbas lead Palestinian Authority into Gaza after Israel-Hamas war?
Story by Berber Jin – wsj.com — SAN FRANCISCO—Two days after Sam Altman was ousted from OpenAI, he was back at the company’s office, trying to negotiate his return. The former chief executive officer entered with a guest badge on Sunday and posted on X: “first and last time i ever wear one of these.” The leadership of the company that created the hit AI chatbot ChatGPT remained unclear Sunday, as investors and many employees pushed over the weekend to restore Altman. He has been engineering a countercoup to retake control of one of Silicon Valley’s most valuable and high-profile startups. Altman’s camp has succeeded in bringing the board that fired him to the negotiating table and has proposed a series of high-profile tech executives to potentially helm a new board that would be more aligned with his business vision. Names floated include Bret Taylor, the former co-chief executive of Salesforce; Brian Chesky, the chief executive of Airbnb, who has been a longtime confidant of Altman’s; and Laurene Powell Jobs, founder and president of Emerson Collective, people familiar with the matter said.
Sheryl Sandberg, the former chief operating officer of Meta Platforms, also came up. Bloomberg previously reported that Taylor is being considered. Microsoft’s executives have also pushed for oversight in a new corporate structure, including a potential board observer seat that would give the company more visibility into OpenAI’s governance. Any greater role on the board could be a regulatory concern; Microsoft has kept its ownership stake in OpenAI below the 50% mark to avoid drawing the attention of regulators. Among all the investors, Microsoft might be the most deeply intertwined in the fate of OpenAI, and the startup’s turmoil has been a liability. Beyond being OpenAI’s largest backer, Microsoft has reoriented its business around the startup’s AI software. Shares in Microsoft fell after the news of Altman’s firing. The abrupt shake-up at OpenAI turns on one of the oldest tales in Silicon Valley: a breakup between a founder and his board. But in this case it was a very particular kind of founder—the face of Silicon Valley’s artificial-intelligence revolution—and a very particular kind of board, one tasked with prioritizing social good over profit. The rupture threatens the future of the company and the billions of dollars investors have put into it.
The Palestinian leadership in Ramallah, under President Mahmoud Abbas, is persisting with a strategy of prudent neutrality, closely observing the conflict between Hamas and Israel that has raged since October 7. Abbas has signaled no intention of engaging with Hamas, nor of stepping down to make way for new leadership. The trust deficit between Abbas and Hamas has deepened: Abbas blames Hamas for the precarious situation of Gaza’s people and of the Palestinian cause, a rift exacerbated by the global view of Hamas as a terror group, which complicates any potential collaboration. Conversely, Hamas criticizes Abbas for his perceived inaction against Israeli measures and American policies as the Gaza conflict persists.
Abbas’s low-profile stance, cordial Western relations, and adherence to international norms, he argues, are preserving Palestinian nationalism. In private meetings with US Secretary of State Antony Blinken, the Palestinian leadership received commendation for maintaining calm in the West Bank and averting a third intifada, deemed crucial by Palestinian officials in light of the current Israeli sentiment. Abbas has instructed his security apparatus to ensure the West Bank does not mirror Gaza’s turmoil. Despite heightened tensions and provocations from Israeli settlers and hardline politicians, Abbas’s patience is deemed necessary, albeit politically contentious. The Americans, post-conflict, have pledged to support a two-state resolution encompassing the West Bank, parts of Jerusalem, and Gaza. They have reassured Abbas of maintaining the PLO’s central role in Gaza’s future, highlighting its exclusive representation of the Palestinian people.