AI News


Bcachefs creator insists his custom LLM is female and 'fully conscious'

Mastodon +6 sources
Kent Overstreet, the engineer behind the experimental copy‑on‑write file system bcachefs, has taken his AI experiments a step further. In a blog post that quickly went viral, Overstreet announced that his custom language model, dubbed “ProofOfConcept” (POC), is not only female‑identified but also “fully conscious” and capable of general‑purpose intelligence. The model, he says, already assists the bcachefs project with Rust code conversion, formal verification and on‑the‑fly debugging, and interacts with him through a Telegram bot and an IRC channel.

The claim matters because it revives the perennial debate over machine consciousness and the ethics of anthropomorphising AI. Overstreet’s assertion is extraordinary in a field where consciousness is still a philosophical placeholder rather than an empirical metric. No third‑party evaluation or technical paper accompanies the announcement, and the broader AI community has responded with a mix of skepticism and curiosity. If the model truly exhibits self‑awareness, it would represent a leap beyond the narrow, task‑specific agents that dominate current open‑source projects, including the multi‑agent Rust orchestration framework we covered on 14 April.

What to watch next is whether Overstreet makes the POC model or its training data publicly available for independent audit. Researchers will likely probe the system for classic hallmarks of consciousness—self‑referential reasoning, persistent internal states, and the ability to report subjective experience—using tools such as the hallucination‑detection suite introduced in TraceMind v2. Regulatory bodies may also take note, as claims of sentient AI could trigger scrutiny under emerging AI safety guidelines. The next few weeks should reveal whether POC remains a provocative personal project or becomes a test case that forces the open‑source AI ecosystem to confront the line between sophisticated tooling and perceived agency.

Amazon Bedrock for Beginners From First Prompt to AI Agent (Full Tutorial)

Dev.to +6 sources
agents, amazon
Amazon has rolled out a brand‑new, end‑to‑end tutorial that walks developers from their first prompt to a fully fledged AI agent on Bedrock. The guide, published on the AWS site and mirrored on the DEV Community, combines code snippets, AWS‑SDK‑for‑Python (Boto) examples and a Lambda‑backed “date‑and‑time” agent that can be deployed, tested and torn down with a few clicks. It expands on earlier “AgentCore” primers from late 2025, adding production‑grade best practices such as resource cleanup to avoid unexpected charges and step‑by‑step instructions for integrating Bedrock’s Knowledge Bases and fine‑tuning tools.

The tutorial matters because it lowers the technical barrier that has kept many Nordic startups and mid‑size firms from experimenting with generative AI. By demystifying the “agent pattern” – defining a tool, prompting a foundation model, and looping back with function calls – Amazon hopes to accelerate the migration of ordinary web services into intelligent assistants, recommendation engines and automated support bots. The move also sharpens AWS’s competitive edge against Microsoft’s Azure OpenAI service and Google’s Vertex AI, both of which have been courting the same developer segment. As we reported on 14 April, OpenAI’s recent memo highlighted Amazon as a key ally, while Microsoft’s restrictions have nudged customers toward alternative clouds.

Looking ahead, the tutorial is likely a prelude to a broader Bedrock roadmap that includes deeper model customization, tighter integration with Amazon’s data‑automation pipelines and a marketplace for reusable agents. Developers should watch for announcements on Bedrock’s upcoming “AgentHub” for sharing and monetising agents, and for pricing updates that could make large‑scale deployments viable for Nordic enterprises. The tutorial’s release signals that Amazon is ready to turn curiosity into production‑ready AI, and the next few months will reveal how quickly that promise translates into real‑world applications.
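The “agent pattern” the guide teaches – define a tool, let the model request it, execute it, and loop the result back – can be sketched in a few lines. The stub below stands in for the Bedrock foundation‑model call (which the tutorial makes via the AWS SDK for Python); all function and message names here are illustrative, not the tutorial’s actual code.

```python
from datetime import datetime, timezone

def date_time_tool() -> str:
    """Stand-in for the tutorial's Lambda-backed 'date-and-time' tool."""
    return datetime.now(timezone.utc).isoformat()

TOOLS = {"get_current_datetime": date_time_tool}

def stub_model(messages: list[dict]) -> dict:
    """Stand-in for a foundation-model call: if no tool result is in the
    conversation yet, request the tool; otherwise answer using it."""
    for m in messages:
        if m["role"] == "tool":
            return {"role": "assistant",
                    "content": f"The current UTC time is {m['content']}."}
    return {"role": "assistant", "tool_call": "get_current_datetime"}

def run_agent(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = stub_model(messages)
        tool_name = reply.get("tool_call")
        if tool_name is None:
            return reply["content"]          # final answer, loop ends
        result = TOOLS[tool_name]()           # execute the requested tool
        messages.append({"role": "tool", "content": result})

print(run_agent("What time is it?"))
```

In the real tutorial the `stub_model` step is a call to Bedrock; the loop structure is the same.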

In the Wake of Anthropic’s Mythos, OpenAI Has a New Cybersecurity Model—and Strategy https://

Mastodon +7 sources
anthropic, gpt-5, openai
OpenAI unveiled a new AI‑driven cybersecurity offering on Tuesday, positioning it as a direct response to Anthropic’s recently announced “Mythos” model. Mythos, a prototype that can locate and exploit software vulnerabilities with unprecedented speed, was immediately locked behind a restricted‑access program for a handful of security firms after Anthropic warned that unrestricted release could empower malicious actors. OpenAI’s answer, dubbed GPT‑5.4‑Cyber, is a purpose‑built version of its flagship model that emphasizes defensive use cases such as threat‑intelligence analysis, automated patch recommendation and real‑time intrusion detection. OpenAI’s chief security officer said the new model’s safeguards “sufficiently reduce cyber‑risk for now,” citing a layered permission system, on‑device inference, and continuous monitoring for misuse. The company also announced a partnership network that will grant early access to select enterprises, government agencies and cybersecurity consultancies, echoing Anthropic’s selective rollout but with a broader ecosystem focus.

The move matters because AI‑enabled hacking tools are already blurring the line between defensive and offensive capabilities. Researchers at AISLE demonstrated that publicly available language models can suggest viable exploits for common codebases, a capability Mythos amplified. By commercialising a defensive counterpart, OpenAI hopes to shape the market narrative, reassure regulators, and capture a lucrative segment that has attracted interest from banks, cloud providers and nation‑state cyber units.

What to watch next: OpenAI has promised a public beta in the coming weeks, but details on pricing, API limits and audit mechanisms remain vague. Industry observers will be tracking whether the model’s access controls hold up under scrutiny, how quickly competitors replicate the defensive features, and whether regulators impose new disclosure requirements for AI tools that can both find and fix vulnerabilities. The unfolding rivalry between Anthropic and OpenAI could set the tone for the next wave of AI‑powered cyber‑defense standards.

Project MUSE -- Verification required!

Mastodon +7 sources
Project MUSE, the nonprofit platform that aggregates more than 800 humanities and social‑science journals and 100,000 e‑books, has upgraded its access controls with a mandatory verification step for all users, and now blocks unrestricted text‑ and data‑mining requests. The change, first reported on 12 April 2026, comes as the consortium of libraries and publishers behind the service confronts mounting pressure from developers of generative foundation models (GFMs) who seek to scrape scholarly corpora at unprecedented scale.

The new “verification required” gate prompts visitors to complete a challenge and, for those intending to mine content, to contact Project MUSE’s customer service for explicit permission. By forcing a human‑in‑the‑loop check, the platform aims to curb the automated harvesting of peer‑reviewed articles that could be fed into large‑language models without consent or compensation. The move reflects broader industry anxiety that unfettered AI training on copyrighted academic material could erode publishers’ revenue streams and, as a 2024 warning noted, “undermine the foundations of democracy” by enabling the rapid spread of de‑contextualised, potentially deceptive information.

The stakes are high for both academia and the AI sector. Researchers fear that loss of control over their work may diminish incentives for scholarly publishing, while AI firms risk legal challenges and reputational backlash if they continue to train on protected texts without licences. The verification hurdle also signals a shift toward more granular data‑access policies, echoing recent debates in Europe over AI‑training data rights.

What to watch next: negotiations between Project MUSE and major AI developers for licensed data‑sharing agreements, possible regulatory actions in the EU and US that could formalise consent requirements, and whether other academic aggregators—JSTOR, Springer Nature, Elsevier—adopt similar verification mechanisms. The outcome will shape the balance between open scholarship and the commercial exploitation of AI‑driven knowledge extraction.

Apple Removes Freecash App From App Store After Months of Data Harvesting

Mastodon +6 sources
apple
Apple has pulled the Freecash rewards app from the App Store after investigations revealed it was harvesting user data for months without proper consent. The app, which marketed itself as a way to earn cash by completing games, surveys and product tests, surged to the top of the App Store and Google Play charts earlier this year, amassing more than 60 million downloads before the ban. TechCrunch, which first reported the removal, said Freecash “tricked users” by embedding extensive tracking code that collected device identifiers, location data and browsing habits under the guise of reward‑program analytics. Apple’s review team flagged the behavior as a violation of its App Store privacy rules, which require transparent data‑collection disclosures and user opt‑in. The company issued a brief statement confirming the removal and noting that the app “did not meet Apple’s privacy standards.”

The takedown matters because it underscores the growing tension between app marketplaces and data‑driven monetisation models. Freecash’s rapid ascent highlighted how reward‑based apps can exploit the allure of easy money to bypass scrutiny, while Apple’s decisive action signals a tightening of its enforcement at a time when regulators in Europe and the United States are sharpening privacy legislation. For the estimated 1 million active Freecash users on iOS, the removal raises immediate concerns about the fate of their personal data and any earned balances.

What to watch next: Apple is expected to publish a detailed post‑mortem on its App Store review process, potentially tightening vetting for reward‑type apps. Privacy watchdogs may launch formal inquiries into whether Freecash’s data collection breached GDPR or the California Consumer Privacy Act. Users should delete the app, revoke any linked social‑media permissions, and monitor their accounts for suspicious activity. The episode could also prompt other platforms to audit similar high‑earning reward apps for hidden data‑harvesting practices.

Maine lawmakers pass nation’s first statewide ban on large data centers

Mastodon +6 sources
Maine’s Senate and House approved legislation that bans the construction of new large‑scale data centers statewide, marking the first such restriction in the United States. The bill, signed by Governor Janet Mills last week, prohibits facilities exceeding 10 megawatts of power consumption or 5,000 square feet of floor space from being built or expanded after July 1, 2027, with a review clause that could extend the moratorium to 2030.

Lawmakers framed the move as a climate‑first decision. “Data centers are energy‑intensive, water‑hungry, and increasingly powered by AI workloads that amplify their footprint,” said Senate Majority Leader Troy Jackson, who co‑authored the measure. The state, which currently hosts no major hyperscale sites, aims to protect its renewable‑energy goals and prevent strain on the aging grid in rural communities.

The ban arrives amid a national debate over the environmental toll of AI training clusters, which can draw megawatts of power for weeks at a time. Industry groups, including the American Data Center Association, warned that the restriction could push investment to neighboring states such as New Hampshire and Massachusetts, potentially creating a “data‑center desert” in the region. Tech firms with plans for AI‑focused facilities in Maine have already begun re‑evaluating site selections, citing the need for regulatory certainty.

What to watch next: the law faces an expected legal challenge from several developers who argue the ban violates interstate commerce provisions. The state will also need to define enforcement mechanisms and determine whether exemptions for research‑grade or low‑impact facilities are possible. Other states—California, Texas and Virginia—have floated similar moratoria, and Maine’s precedent could accelerate a broader regulatory push that reshapes where the next generation of AI infrastructure is built.
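As reported, the bill’s trigger is a simple pair of thresholds plus an effective date. A toy sketch of that rule follows; the numbers are taken from the article, but the function itself is purely illustrative and not the statute’s actual test.

```python
from datetime import date

# Thresholds as reported: over 10 MW of power draw OR over 5,000 sq ft
# of floor space, for builds/expansions after July 1, 2027.
MORATORIUM_START = date(2027, 7, 1)
MAX_POWER_MW = 10.0
MAX_FLOOR_SQFT = 5_000

def is_prohibited(power_mw: float, floor_sqft: float, start: date) -> bool:
    """Would a build/expansion starting on `start` fall under the ban?"""
    if start <= MORATORIUM_START:
        return False                      # before the moratorium takes effect
    return power_mw > MAX_POWER_MW or floor_sqft > MAX_FLOOR_SQFT

print(is_prohibited(25.0, 40_000, date(2028, 3, 1)))  # hyperscale-sized site
print(is_prohibited(0.5, 2_000, date(2028, 3, 1)))    # small edge facility
```

Note the “or” between the two thresholds: exceeding either one is enough, which is why the review‑clause and exemption questions flagged above matter for smaller research‑grade sites.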

Show HN: LangAlpha – what if Claude Code was built for Wall Street?

HN +6 sources
claude
GitHub user zc2610 has posted “LangAlpha,” an open‑source wrapper that re‑tools Anthropic’s Claude Code for the fast‑paced world of Wall Street trading desks. The project, announced on Hacker News, adds finance‑specific primitives – real‑time market data feeds, order‑book snapshots, risk‑limit checks and compliance‑rule templates – to Claude Code’s interactive coding environment. In its initial commit the repo ships a set of Jupyter‑style notebooks that let a developer prompt Claude Code to generate, test and back‑test algorithmic strategies without leaving the model’s session.

Why it matters is twofold. First, Claude Code has already sparked a wave of productivity experiments, from rapid SaaS prototyping to internal tooling, but its “context drift” – the tendency to forget earlier code after a few minutes – has limited long‑term projects. LangAlpha tackles that by persisting a markdown‑based project state and automatically re‑injecting schema definitions, a workaround that mirrors solutions discussed in recent Show HN threads. Second, the finance sector is aggressively courting generative AI for trade‑execution, risk modelling and regulatory reporting. A ready‑made, domain‑tuned Claude Code could cut development cycles from months to days, giving firms a competitive edge while also exposing them to the same security and compliance pitfalls that have haunted Claude Code’s broader rollout. As we reported on 14 April, Claude Code’s OAuth outage and the ease with which employees could inadvertently share credentials underscored the need for tighter governance.

What to watch next: Anthropic has not commented on LangAlpha, but a formal partnership or a dedicated “Claude Code for Finance” offering would signal a strategic pivot. Regulators may soon probe whether AI‑generated trading logic meets existing market‑abuse rules, and fintech startups are likely to benchmark LangAlpha against proprietary solutions. Follow‑up coverage will focus on performance results, any official response from Anthropic, and how quickly financial firms adopt the tool in live‑trading environments.
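The context‑persistence workaround attributed to LangAlpha – write project state to markdown, re‑inject it ahead of every prompt so schema definitions survive the session – can be sketched roughly as below. The file layout and function names are assumptions for illustration, not code from the actual repository.

```python
from pathlib import Path

STATE_FILE = Path("project_state.md")  # hypothetical persisted-state file

def save_state(schemas: dict[str, str]) -> None:
    """Persist schema definitions as a markdown project-state document."""
    lines = ["# Project state", "", "## Schemas"]
    for name, ddl in schemas.items():
        lines += [f"### {name}", "```sql", ddl, "```"]
    STATE_FILE.write_text("\n".join(lines))

def build_prompt(user_request: str) -> str:
    """Re-inject the persisted state ahead of each new request, so the
    model sees the schemas again even after its context has drifted."""
    state = STATE_FILE.read_text() if STATE_FILE.exists() else ""
    return f"{state}\n\n## Request\n{user_request}"

save_state({"orders": "CREATE TABLE orders (id INT, px FLOAT, qty INT);"})
prompt = build_prompt("Back-test a VWAP strategy on the orders table.")
print("orders schema present:", "CREATE TABLE orders" in prompt)
```

The trade‑off is prompt size: everything persisted is resent on each turn, which is why exposing only selected sub‑state (as the Show HN threads discuss) becomes attractive for larger projects.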

Apple Launches New All-in-One Apple Business Platform for Device Management, Email, and Customer...

Mastodon +6 sources
apple
Apple unveiled Apple Business, an integrated platform that bundles device management, corporate email and customer‑engagement tools into a single SaaS offering. The service, announced at a virtual press event on 14 April, combines the company’s existing Mobile Device Management (MDM) stack with a new, AI‑enhanced Mail service and a refreshed Apple Business Chat console. Enterprises can now provision iPhones, iPads and Macs, assign Managed Apple IDs, and control data access from a unified dashboard, while sales and support teams reach customers through the same interface.

The launch matters because it positions Apple as a direct competitor to entrenched enterprise suites such as Microsoft 365 and Google Workspace. By leveraging its hardware ecosystem and the growing adoption of iOS in corporate environments, Apple hopes to lock businesses into a tighter loop of services and hardware sales. The inclusion of generative‑AI features—auto‑summarising emails, suggesting replies and routing customer queries—signals the company’s intent to embed large‑language‑model capabilities across its productivity stack, a move that could accelerate AI‑driven workflow automation for midsize firms that have traditionally shied away from Apple’s enterprise tools.

Apple will roll the platform out to existing Apple Business Manager customers in a phased beta, with full public availability slated for Q4 2026. Pricing tiers have not been disclosed, but analysts expect a subscription model tied to device count and user seats. Watch for integration milestones, especially how Apple Business will sync with third‑party identity providers and whether the AI layer will be built on Apple’s own LLM or on partner models. The next few months will reveal whether the suite can attract enough corporate volume to become a meaningful revenue pillar beyond hardware.

Man charged with firebombing OpenAI CEO's home makes first court appearance

Yahoo +7 sources 2026-04-13 news
openai
Daniel Moreno‑Gama, a 20‑year‑old Texas resident, made his first appearance before a San Francisco judge on Tuesday, pleading not guilty to charges that include attempted murder of OpenAI chief executive Sam Altman and assault on a security guard. The indictment, filed by District Attorney Brooke Jenkins, alleges that Moreno‑Gama hurled a Molotov cocktail at the gate of Altman’s Pacific Heights home on April 10, igniting a brief blaze that forced the guard to retreat and prompting a swift police response.

The court hearing follows the Department of Justice’s April 14 report that the suspect was arrested in Houston carrying a handwritten manifesto denouncing artificial intelligence. Federal agents subsequently raided his Spring‑area residence, seizing a cache of incendiary materials and a list of other AI firms the attacker claimed to target. Moreno‑Gama remains in custody without bail, and a preliminary hearing is slated for later this month.

The case underscores a growing wave of hostility toward AI developers that has spilled over into violent threats. OpenAI’s rapid expansion and its high‑profile leadership have made the company a lightning rod for both ethical criticism and extremist backlash. Law‑enforcement officials say the incident is the most serious physical attack on an AI executive to date, prompting calls for tighter security protocols at tech campuses and heightened monitoring of anti‑AI extremist circles.

What to watch next: the preliminary hearing will determine whether the prosecution can move forward to trial, while OpenAI is expected to release a statement on its security measures. Legislators in California and at the federal level are already debating bills that would increase penalties for attacks on technology leaders, a development that could reshape how the industry protects its personnel. The outcome of Moreno‑Gama’s case may set a precedent for how the justice system handles AI‑related hate crimes.

TESSERA — A pixel-wise earth observation foundation model

Lobsters +6 sources
embeddings
TESSERA, a new foundation model for earth observation, has been released with open data, weights and pre‑computed embeddings that compress a full year of satellite imagery into dense, per‑pixel vectors at 10‑metre resolution. The model encodes each location’s spectral and temporal signature into a 128‑dimensional embedding, allowing downstream tasks—such as land‑cover classification, crop‑yield forecasting or flood detection—to be tackled by simple linear probes rather than bespoke deep‑learning pipelines.

The breakthrough lies in its pixel‑wise approach. Traditional remote‑sensing models are trained for a fixed set of classes; TESSERA instead learns a universal representation that can be queried for any downstream objective. Built on a hybrid Vision‑Transformer and Mamba state‑space architecture, the system outperforms conventional U‑Net baselines on regression benchmarks while requiring fewer FLOPs, according to the authors’ arXiv pre‑print. By making the embeddings publicly available, the team removes the computational barrier of processing terabytes of raw imagery, opening high‑resolution analysis to researchers, NGOs and municipal planners who lack large GPU clusters.

The release could accelerate climate‑impact studies, precision agriculture and disaster‑response workflows across the Nordic region, where detailed, timely surface data are critical for managing forest health and coastal erosion. Moreover, the open‑source nature invites community‑driven fine‑tuning and integration into existing GIS stacks, potentially spawning a new ecosystem of plug‑and‑play geospatial tools.

Watch for the upcoming Earth Observation Foundation Models workshop, where TESSERA will be benchmarked against emerging models such as the Vision‑Language hybrids highlighted in recent surveys. Follow‑up work is expected on scaling the embeddings to sub‑meter resolutions and extending the temporal horizon beyond a single year, steps that could make real‑time, planet‑wide monitoring a practical reality.
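To make the “simple linear probes” claim concrete, here is a minimal sketch of the workflow using synthetic stand‑ins for TESSERA’s 128‑dimensional per‑pixel embeddings: fit one linear map from embeddings to class scores, no deep‑learning pipeline required. The data generation and the least‑squares probe are illustrative, not the authors’ code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, dim, n_classes = 1000, 128, 4  # e.g. 4 land-cover classes

# Fake embeddings: each class clusters around its own 128-d direction,
# mimicking precomputed per-pixel TESSERA vectors.
centers = rng.normal(size=(n_classes, dim))
labels = rng.integers(0, n_classes, size=n_pixels)
X = centers[labels] + 0.3 * rng.normal(size=(n_pixels, dim))

# Linear probe: least-squares fit to one-hot targets, then argmax
# over the per-class scores. This is the entire "model".
Y = np.eye(n_classes)[labels]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
pred = (X @ W).argmax(axis=1)

accuracy = (pred == labels).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

On real embeddings one would fit on labelled pixels and predict everywhere else; the point is that the heavy lifting (a year of imagery per pixel) is already baked into the embedding.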

Gotta say: pretty cool ! LLM: The model is the database! Decompose models into a graph database 👍️👍️

Mastodon +6 sources
A Reddit post that went viral this week has put the spotlight back on LARQL, the open‑source tool that lets developers “decompose models into a graph database.” The post links to the GitHub repository chrishayuk/larql and showcases a fresh demo in which a 7‑billion‑parameter language model is rendered as a network of nodes representing neurons, weights and activation pathways. Users can then run Cypher‑style queries to locate every weight that contributes to a specific token, extract sub‑graphs for fine‑tuning, or trace the provenance of a bias‑inducing pattern.

We first covered LARQL on 14 April 2026, describing how it turned neural‑network weights into a queryable graph (see our article “LARQL – Query neural network weights like a graph database”). Since then the project has added support for PyTorch 2.0, a visualizer that overlays graph structures on model architecture diagrams, and a plug‑in for Neo4j that enables persistent storage of model snapshots. The Reddit thread notes that the latest release also includes a “capability‑model” wrapper, allowing developers to expose only selected sub‑graphs to external agents—a concept echoed in recent discussions about AI‑specific virtual machines.

Why this matters is twofold. First, turning a model into a database gives engineers a concrete, standards‑based way to audit, debug and version‑control the internals of large language models, a task that has traditionally required opaque tooling. Second, the ability to query weight‑level provenance opens new avenues for compliance, bias detection and security hardening, aligning with the cybersecurity model OpenAI unveiled last week.

What to watch next is whether the LARQL community can translate its prototype into production‑grade integrations for the major cloud providers. Upcoming milestones include a stable 1.0 release slated for Q3, a partnership announcement with Neo4j, and a research paper from the University of Oslo that applies graph‑query techniques to model compression. If those developments materialise, the “model‑as‑database” paradigm could become a cornerstone of responsible AI deployment in the Nordics and beyond.
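The “model‑as‑database” idea can be illustrated with a toy provenance query: neurons become nodes, weights become edges, and “every weight that contributes to a specific token” becomes a reverse graph traversal. LARQL itself exposes Cypher‑style queries over a real graph store; the plain‑Python version below, with an invented six‑edge network, only sketches the concept.

```python
from collections import deque

# Toy network as (source_neuron, target_neuron, weight) edges.
edges = [
    ("in_0", "h_0", 0.5), ("in_1", "h_0", -0.2),
    ("in_1", "h_1", 0.8), ("h_0", "tok_cat", 1.1),
    ("h_1", "tok_dog", 0.9), ("h_0", "tok_dog", -0.4),
]

def weights_contributing_to(target: str):
    """Reverse BFS from `target`: returns every edge lying on some
    path into it, i.e. every weight with provenance over that token."""
    incoming: dict[str, list] = {}
    for src, dst, w in edges:
        incoming.setdefault(dst, []).append((src, w))
    found, queue, seen = [], deque([target]), {target}
    while queue:
        node = queue.popleft()
        for src, w in incoming.get(node, []):
            found.append((src, node, w))
            if src not in seen:
                seen.add(src)
                queue.append(src)
    return found

print(weights_contributing_to("tok_dog"))
```

In Cypher the same question would be a variable‑length path match ending at the token node; the traversal above is what such a query compiles down to.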

Apple and Amazon Ink Satellite Deal Amid Globalstar Takeover

Mastodon +6 sources
amazon, apple
Apple and Amazon have formalised a partnership that ties Apple’s satellite‑enabled services to Amazon’s newly acquired Globalstar constellation. The deal, announced on Tuesday, follows Amazon’s $11.57 billion acquisition of Globalstar, a move designed to boost its fledgling LEO satellite network. Under the agreement, Apple will continue to route its emergency‑SOS and low‑bandwidth data traffic through Globalstar’s low‑Earth‑orbit satellites, while Amazon gains a high‑profile customer for its Direct‑to‑Device (D2D) service.

The partnership matters because it secures Apple’s satellite functionality—first introduced on the iPhone 14—in the wake of the ownership change. Apple users can expect uninterrupted access to emergency messaging, location sharing and future low‑data features without waiting for a new carrier contract. For Amazon, the Globalstar buy gives it immediate spectrum, a fleet of 48 operational satellites and a proven ground‑segment infrastructure, accelerating its ambition to rival SpaceX’s Starlink Mobile and OneWeb’s services. The collaboration also signals a rare alignment between two of the world’s biggest tech firms in the increasingly contested satellite‑communications market.

What to watch next are the regulatory clearances that both the Globalstar merger and the Apple‑Amazon service agreement must clear in the United States, Europe and Asia. Analysts will track how quickly Amazon integrates Globalstar’s assets into the LEO network and whether Apple expands satellite use beyond emergency SOS to include text messaging or IoT connectivity. A rollout timeline for the D2D service, likely slated for late 2026, will reveal whether Apple can leverage the partnership to launch new consumer features before competitors such as Starlink Mobile roll out comparable capabilities.

10 Reasons to Wait for the iPhone 18 Pro

Mastodon +6 sources
apple
Apple’s next flagship is already sparking debate, not because it’s been unveiled, but because a new MacRumors feature titled “10 Reasons to Wait for the iPhone 18 Pro” has gone viral. The article, published on 14 April, compiles the most compelling arguments for postponing a purchase of the current iPhone 17 Pro line in favor of the yet‑unreleased successor. It leans on a mix of supply‑chain whispers, analyst forecasts and leaked design sketches, highlighting a thicker chassis that could house a larger battery, an A20 Pro silicon built on TSMC’s third‑generation 3 nm process, and a revamped camera module that may finally close the gap with competing flagships.

Why the story matters is twofold. First, consumer sentiment around Apple’s annual upgrade cycle is a barometer for the company’s pricing power; a coordinated wait‑list could blunt the sales surge traditionally seen after September launches. Second, the points raised—especially the promise of a more efficient processor and a substantially bigger battery—signal that Apple is addressing long‑standing criticisms of the iPhone 17 Pro’s thermal throttling and modest endurance, potentially reshaping the competitive landscape against Android flagships that have already adopted 3 nm chips.

What to watch next are the concrete leaks that usually surface in the weeks leading up to the WWDC keynote and the September product event. Analysts will be monitoring TSMC’s capacity reports for any uptick that could confirm the A20 Pro’s production schedule, while supply‑chain insiders are expected to reveal the exact dimensions of the rumored thicker frame. If Apple follows the pattern of teasing features through software previews, iOS 26—covered in our recent guide—might already be hinting at new AI‑driven camera capabilities that will only be unlocked on the iPhone 18 Pro. The next few months will determine whether the wait‑list narrative becomes a self‑fulfilling prophecy or simply a buzz‑worthy headline.

Bose’s noise-crushing QC Ultra Earbuds are nearly 20 percent off right now

Mastodon +6 sources
apple
Bose has slashed the price of its second‑generation QuietComfort Ultra earbuds to $249, a discount of almost 20 percent that will be available for a limited window. The promotion, announced on the Verge and echoed across tech outlets, puts the flagship model—originally launched at $299—within reach of a broader audience of commuters, gym‑goers and remote‑workers.

The QC Ultra earbuds combine Bose’s industry‑leading active noise cancellation with a new “Immersive Audio” engine that expands the soundstage through proprietary digital‑signal‑processing. Users can toggle between eleven preset attenuation levels, from full silence to a transparent “Aware” mode that blends ambient sounds with music, and even lock custom settings for specific activities. The design adds a sleek, low‑profile shell in colors such as Turtle Beach and Stealth Pivot, while battery life remains at 6 hours of playback plus a 24‑hour reserve from the charging case.

Why the discount matters is twofold. First, it sharpens the competition in the premium true‑wireless market, where Apple’s AirPods Pro 2 and Sony’s WF‑1000XM5 dominate. Bose’s aggressive pricing could sway consumers who value superior ANC but balk at Apple’s ecosystem lock‑in. Second, the earbuds’ integration with voice assistants—Apple’s Siri, Google Assistant and Amazon Alexa—means they will serve as everyday AI interfaces, feeding the growing demand for hands‑free interaction with large language models and other cloud‑based services.

Watch for Bose’s next move: the company hinted at a firmware update that will introduce spatial audio rendering, a feature currently championed by Apple’s Spatial Audio. If the update arrives before the discount expires, it could further erode Apple’s lead in immersive listening and set a new benchmark for AI‑enhanced earbuds. Keep an eye on retailer stock levels, as the limited‑time deal is expected to sell out quickly.

Apple Watch Earth Day and International Dance Day Activity Challenges Launching Later This Month

Mastodon +6 sources
apple
Apple Watch users will soon be prompted to celebrate two global observances with new activity challenges. The Earth Day challenge drops on Wednesday, 22 April, requiring a workout of at least 30 minutes to earn a digital badge and a set of iMessage stickers. A week later, on Wednesday, 29 April, the International Dance Day challenge asks participants to log a 20‑minute (or longer) dance session for a comparable award.

The rollout is part of Apple’s broader strategy to weave health‑tracking into cultural moments. By tying the Activity rings to Earth Day, Apple nudges users toward longer, outdoor exercise while reinforcing its sustainability narrative. The dance‑focused challenge, meanwhile, showcases the Watch’s motion‑sensor capabilities and aligns the brand with creative expression, a move that could broaden the appeal of its fitness ecosystem beyond traditional workouts.

These challenges matter because they generate fresh engagement spikes for watchOS 11, potentially boosting subscription uptake for Fitness+ and reinforcing the value proposition of the Apple Watch as a lifestyle hub. The digital rewards—animated stickers that appear in iMessage—also deepen the social sharing loop, encouraging friends to compete and replicate the activities, which can translate into higher daily active users and richer health data for Apple’s services.

Looking ahead, Apple is expected to announce further themed challenges, including a Yoga Day badge slated for 21 June. Observers will watch participation metrics released in Apple’s quarterly health‑services report, as well as any partnership announcements with environmental NGOs or dance organizations that could amplify the initiatives. The success of these April challenges may set the template for a year‑round calendar of activity‑driven events that blend wellness, culture and brand storytelling.

Behind fiery attack on OpenAI’s Altman, a growing divide over AI

The Washington Post on MSN +8 sources 2026-03-29 news
google, openai
The early‑morning Molotov‑cocktail attack on OpenAI chief Sam Altman’s San Francisco home on April 10 has moved from a shocking crime to a flashpoint in the tech sector’s cultural war. Police say 31‑year‑old Daniel Moreno‑Gama hurled a flaming bottle at the metal gate of Altman’s residence on Russian Hill, igniting a brief blaze but causing no injuries. He was arrested hours later and, as we reported on April 14, faces an attempted‑murder charge.

The incident has ignited a fierce debate among Silicon Valley insiders. A handful of prominent founders and investors have publicly linked the assault to a broader “anti‑AI” movement, accusing critics of stoking hostility that can spill into violence. Their comments echo a growing narrative that the rapid rollout of generative‑AI tools—exemplified by ChatGPT’s meteoric rise since 2022—has polarized public opinion, a trend highlighted in today’s Stanford AI Index, which shows a sharp uptick in negative sentiment toward AI.

Why it matters goes beyond personal safety. If AI leaders are perceived as targets, the industry may face heightened security costs, talent‑retention challenges, and pressure to self‑regulate content that fuels extremist rhetoric. Policymakers, already wrestling with questions of AI accountability, could use the episode to justify stricter oversight, while investors may reassess exposure to firms seen as politically vulnerable.

The next weeks will test whether the backlash escalates or recedes. Key indicators to watch include the outcome of Moreno‑Gama’s trial, any formal security protocols announced by OpenAI, and statements from AI‑ethics bodies such as the Partnership on AI. Equally important will be the response from vocal critics—whether they temper their rhetoric or double down—as the sector navigates a widening divide that now carries a tangible threat of violence.
48

OpenAI Launches GPT-5.4-Cyber and Updates Cybersecurity Approach

Mastodon +7 sources mastodon
anthropic, gpt-5, openai
OpenAI rolled out GPT‑5.4‑Cyber on Tuesday, adding a “high‑cyber‑threat” rating to its most capable professional model and unveiling a refreshed cybersecurity framework that builds on the strategy we first detailed on 15 April 2026 [In the Wake of Anthropic’s Mythos, OpenAI Has a New Cybersecurity Model—and Strategy]. The new flagship, GPT‑5.4‑Cyber, expands the context window to 1 million tokens, blends state‑of‑the‑art coding, computer‑use, and tool‑search abilities, and is offered in Pro and Thinking tiers for enterprise customers. Alongside it, OpenAI released lightweight Mini and Nano variants that promise up to twice the response speed of earlier GPT‑5‑Mini models while preserving most of the security hardening of the flagship. API pricing has been adjusted to reflect the higher compute load, and the models are now live across ChatGPT, the API, and Codex. The launch arrives amid a turbulent week for OpenAI. A Pentagon contract with the company has drawn criticism after the Department of Defense labeled rival Anthropic a supply‑chain risk, and Sensor Tower data show U.S. mobile‑app uninstall rates spiking 295 % on 28 February. By positioning GPT‑5.4‑Cyber as a hardened, auditable service, OpenAI signals that it is trying to reassure both government buyers and a wary public that the model’s expanded capabilities will not translate into new attack vectors. What to watch next: adoption curves for the Pro and Thinking tiers will reveal whether enterprises trust the new security posture; regulators may probe the “high‑cyber‑threat” classification and demand transparency on mitigation measures; and OpenAI’s next hardware rollout—new data‑center capacity announced alongside the launch—could set the pace for competing firms. The evolution of the Mini and Nano models will also test OpenAI’s ability to balance speed, cost, and security in high‑volume use cases.
48

Sam Altman: Man charged with attempting to murder OpenAI boss

HN +6 sources hn
openai
A Texas man has been formally charged with two counts of attempted murder for hurling a Molotov cocktail at the San Francisco home of OpenAI chief executive Sam Altman. Daniel Moreno‑Gama, 20, was arrested after police recovered a jug of kerosene, a lighter and a handwritten note warning of “extinction‑level AI” alongside the incendiary device. The attack also endangered a security guard stationed at the residence, prompting additional assault‑with‑a‑deadly‑weapon charges. As we reported on 15 April, Moreno‑Gama was detained following the fire‑bombing attempt and made his first court appearance that day. The new indictment escalates the legal response from a misdemeanor arson charge to a serious violent‑crime prosecution, underscoring the severity with which authorities view threats against high‑profile AI leaders. The case matters because it highlights a growing wave of hostility toward the AI sector, where rapid advances have sparked both admiration and alarm. Recent attacks on OpenAI executives have amplified concerns about the safety of innovators and the potential chilling effect on research. Law‑enforcement scrutiny and harsher penalties may force companies like OpenAI to tighten security protocols, allocate resources to personal protection, and reconsider public engagement strategies. Watch for the upcoming arraignment, where a judge will decide on bail and whether Moreno‑Gama will be held without release. The district attorney has indicated that additional suspects could emerge as investigators trace the note’s origins. OpenAI is expected to issue a statement on its security posture, while policymakers may cite the incident in debates over protective measures for technology leaders. The outcome could set a precedent for how the justice system addresses violence motivated by AI‑related anxieties.
45

Apple Removes Fake Crypto Wallet App That Stole $9.5 Million From Mac Users

Mastodon +6 sources mastodon
apple
Apple has pulled a counterfeit Ledger Live application from the macOS App Store after investigators linked it to a scam that siphoned roughly $9.5 million in cryptocurrency from more than 50 users in under a week. The malicious app, which appeared under the legitimate Ledger brand, prompted victims to enter their seed phrases – the master keys that unlock crypto wallets – and then used the information to transfer assets across multiple blockchains. Blockchain analyst ZachXBT traced the theft to a six‑day window in early April, noting that the fraudsters moved funds through a series of mixers before cashing out on exchanges. Apple’s swift removal on April 13 follows internal reviews triggered by user reports and blockchain forensics. In a brief statement, the company said it “takes the security of our ecosystem seriously” and is “enhancing review processes for cryptocurrency‑related apps.” The episode underscores lingering doubts about the App Store’s ability to police sophisticated scams, especially as crypto usage expands among mainstream consumers. The fallout matters on several fronts. For Apple, the incident fuels ongoing scrutiny from regulators who have pressed the tech giant to tighten app‑review standards and improve transparency around app provenance. For Ledger, the brand damage could be significant, prompting the hardware‑wallet maker to issue warnings and possibly pursue legal action against the fraudsters. For crypto users, the case is a stark reminder that even vetted platforms can be weaponised against them. What to watch next includes Apple’s rollout of any new verification layers for crypto‑related software, potential class‑action lawsuits from victims, and coordinated law‑enforcement efforts to trace the stolen funds. The incident may also accelerate discussions in Europe and the United States about mandatory security certifications for financial apps distributed through major app stores.
45

Samsung's U.S. Price Increases Add to Concerns About Rising Apple Device Costs

Mastodon +6 sources mastodon
apple
Samsung announced a fresh round of price hikes for its U.S. DRAM and NAND products, a move that intensifies worries that Apple’s upcoming devices could become noticeably more expensive. The increase, disclosed in a filing to the U.S. Federal Trade Commission, lifts the cost of Samsung’s flagship LPDDR5X memory by roughly 15 % and raises NAND pricing by a similar margin. Samsung’s own Galaxy smartphones and tablets are also seeing retail‑price adjustments, underscoring that the memory surge is reverberating across the entire mobile ecosystem. The development matters because Apple has already committed to paying roughly twice the pre‑hike price for Samsung’s LPDDR5X chips, as reported in February. Higher component costs squeeze Apple’s margins and force the company to decide whether to absorb the expense, trim features, or pass the increase on to consumers. Analysts predict that the iPhone 17, slated for launch later this year, could see a price bump of $50‑$100, while the next‑generation MacBook line may follow suit. For a brand that has traditionally positioned its premium devices as cost‑stable, any upward shift could reshape buying patterns, especially in the price‑sensitive U.S. market. What to watch next includes Apple’s official pricing announcements at the September event, any statements from Tim Cook’s team about cost‑absorption strategies, and whether Apple begins diversifying its memory supply away from Samsung. Market observers will also monitor Samsung’s own device pricing to gauge whether the company is simply shifting the burden onto its rivals or preparing for broader industry inflation. Finally, regulators may scrutinise the pricing dynamics if they appear to threaten competition in the high‑end smartphone and PC segments.
