AI News

174

Zero Waste, Zero Stress: Building ReMake with AI-Assisted GitHub Copilot Development

Dev.to +5 sources dev.to
agents, copilot
A developer has just unveiled **ReMake**, an AI‑driven upcycling platform that turns everyday waste—think cardboard boxes—into functional products such as laptop stands. The app was built in just a few weeks using GitHub Copilot’s new Chat and Agent features, which the author credits for handling everything from API scaffolding to automated pull‑requests. By prompting Copilot to research a public waste‑catalog repository, generate a React front‑end, and stitch together a serverless backend on Azure, the creator reduced what would normally be months of work to a matter of days. The launch matters because it showcases a concrete, sustainability‑focused use case for AI‑assisted development. Copilot’s agents can now act as autonomous assistants, fetching data, planning code changes, and even testing implementations without manual intervention. This lowers the technical barrier for small teams and social entrepreneurs who lack deep engineering resources, potentially accelerating a wave of green‑tech startups. It also validates the Model Context Protocol (MCP) that allows Copilot to tap external data sources—a capability highlighted in our earlier coverage of AI‑enhanced inbox triage on March 17, 2026, and the agent experiments described on April 20, 2026. What to watch next is whether ReMake’s rapid prototype will evolve into a full‑scale service and attract community contributions. GitHub has signaled further enhancements to Copilot agents, including tighter integration with Microsoft 365 and broader language support, which could make similar sustainability projects even more accessible. Industry observers will also be tracking how regulators respond to AI‑generated code that interfaces with consumer hardware, and whether open‑source versions of the ReMake stack emerge. If the early momentum holds, AI‑augmented development may become a cornerstone of the circular economy, turning “zero waste” from a slogan into a programmable reality.
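The Model Context Protocol mentioned above is, at its core, JSON-RPC 2.0 between an agent and a tool server. As a rough illustration of the message shape (the tool name `search_waste_catalog` and its argument are invented for this sketch and are not part of ReMake or the protocol itself), a tool invocation looks roughly like this:

```python
import json

# Sketch of an MCP-style "tools/call" request, the method an agent uses
# to run a tool exposed by a server. Tool name and arguments are
# hypothetical; a real client also performs an initialize handshake first.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_waste_catalog",          # hypothetical tool
        "arguments": {"material": "cardboard"},  # hypothetical argument
    },
}

# Serialise for the wire; the server replies with a matching "id".
wire = json.dumps(request)
```

The point is that "tapping external data sources" reduces to exchanging small, well-typed JSON messages, which is what makes agents like Copilot's easy to wire up to arbitrary catalogues.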
117

OpenAI-designated ad network sells ChatGPT ads based on query relevance

HN +6 sources hn
openai
OpenAI's advertising push took a tangible step on Tuesday when a designated ad network announced a new inventory model that sells ChatGPT ads based on "query relevance." The service, which places sponsored content beside the bot's answer pane, matches ads to the semantic intent of a user's question rather than to demographic data or browsing history. The network, a subsidiary of a large programming platform, will auction ad slots in real time, with pricing tuned to how closely an ad's keyword profile matches the user's query. The move builds on policy OpenAI set out in late April, when the company announced plans to place selected ads in ChatGPT and stressed that they would be visually distinct and would never influence the model's answers. By tying ads to query relevance, OpenAI hopes to monetise its enormous user base without damaging its image as an impartial assistant. Advertisers now receive a context-rich signal rather than conventional keyword-based audience targeting, while OpenAI taps an estimated $25 billion ad market that insiders have long viewed as a growth engine. The approach nonetheless raises questions about data handling and editorial integrity. Query-level targeting requires processing user input in a way that could be perceived as commercial profiling, which may clash with OpenAI's privacy commitments. Critics also warn that relevance-based ads could inadvertently steer conversations toward particular topics, even if the model's core answers remain unchanged. What to watch next is the rollout schedule, which OpenAI says will begin with a limited beta in English-speaking markets, and how the company enforces its "ads will not influence answers" policy. Regulators in the EU and Norway are likely to scrutinise the practice in light of new AI-specific advertising rules.
Competitors such as Anthropic and Google Gemini could respond with revenue models of their own, shaping the next frontier of AI-driven advertising technology.
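The query-relevance signal described in this item can be sketched in miniature: embed the user's question, embed each ad's keyword profile, and rank by similarity. Everything below is hypothetical, and the toy bag-of-words vectors stand in for the learned semantic encoder a real system would use:

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a production system would use a
    # learned semantic encoder instead of raw token counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Standard cosine similarity over sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_ads(query: str, ads: dict) -> list:
    """Rank ad keyword profiles by relevance to a single user query."""
    q = embed(query)
    scored = [(ad_id, cosine(q, embed(profile))) for ad_id, profile in ads.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical ad inventory keyed by ad ID.
ads = {
    "hiking-boots": "waterproof hiking boots trail gear",
    "tax-software": "tax filing deduction software",
}
ranking = rank_ads("best boots for muddy hiking trails", ads)
```

Pricing "tuned to how closely the profile matches" would then be some monotone function of the score, which is what makes this different from keyword auctions keyed to exact terms.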
109

RAG vs. Lucene: Architecting AI Knowledge Bases for On-Premises Customer Support Systems

Dev.to +5 sources dev.to
rag
ShenDesk, a fledgling startup founded by a veteran of enterprise support software, unveiled its first on‑premises AI knowledge‑base platform this week, positioning it as a middle ground between fully managed RAG services and classic Lucene‑based search. The system lets operators choose either a Retrieval‑Augmented Generation (RAG) pipeline—where an LLM queries a vector store built from ingested documents—or a traditional Lucene index that returns deterministic hits before a lightweight language model formats the answer. The announcement matters because Nordic enterprises are increasingly required to keep customer data behind firewalls while still offering instant, AI‑driven assistance. Cloud‑only RAG offerings from the big AI providers promise ease of use but raise compliance concerns; pure Lucene stacks, on the other hand, lack the contextual depth that LLMs provide. ShenDesk’s hybrid approach claims to deliver “the best of both worlds”: the speed and auditability of Lucene for exact matches, combined with the nuanced reasoning of a RAG layer for ambiguous queries. The platform ships with a Dify‑compatible orchestration layer, enabling teams to plug in any on‑prem LLM, and includes a visual ingestion pipeline that extracts text, creates embeddings, and syncs them with a Lucene index in a single step. As we reported on 19 April 2026, treating a vector database as a search engine can cripple RAG performance. ShenDesk’s design explicitly separates deterministic retrieval (Lucene) from semantic augmentation (RAG), sidestepping that pitfall. If the architecture lives up to its promises, it could set a template for privacy‑first AI support across regulated sectors such as finance and healthcare. Watch for early benchmark releases from ShenDesk, partner integrations with Nordic telecoms, and any regulatory feedback on on‑prem LLM deployments. 
The next few months will reveal whether the hybrid model can scale beyond pilot projects and become a viable alternative to the cloud‑centric AI services that dominate the market today.
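The hybrid routing idea behind that separation is easy to sketch. The following is a minimal illustration of the pattern, not ShenDesk's implementation: exact keyword matches take the deterministic (Lucene-style) path, and only queries with no exact hit fall through to a semantic layer, here approximated by token overlap instead of embeddings:

```python
from collections import Counter

# Hypothetical knowledge-base documents keyed by ID.
DOCS = {
    "reset-password": "How to reset your account password step by step",
    "billing-cycle": "Billing cycle dates and invoice schedule explained",
}

def keyword_hits(query: str, docs: dict) -> list:
    """Deterministic retrieval: docs containing every query term (auditable path)."""
    terms = set(query.lower().split())
    return [doc_id for doc_id, text in docs.items()
            if terms <= set(text.lower().split())]

def semantic_fallback(query: str, docs: dict) -> str:
    """Stand-in for the RAG layer: rank by token overlap instead of embeddings."""
    q = Counter(query.lower().split())
    def score(text: str) -> int:
        d = Counter(text.lower().split())
        return sum(min(q[t], d[t]) for t in q)
    return max(docs, key=lambda doc_id: score(docs[doc_id]))

def answer(query: str, docs: dict = DOCS) -> tuple:
    hits = keyword_hits(query, docs)
    if hits:                                       # exact, deterministic path
        return ("lucene", hits[0])
    return ("rag", semantic_fallback(query, docs))  # ambiguous-query path
```

In the real architecture the second branch would feed the retrieved document into an LLM for answer formatting; the key property preserved here is that the deterministic index is consulted first and the semantic layer never overrides an exact hit.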
105

Tim Cook sends a farewell letter on his career at Apple

Mastodon +7 sources mastodon
apple
Apple's board announced on Tuesday that Tim Cook will step down as chief executive after fourteen years at the helm, and the departing executive released a reflective farewell letter to employees and shareholders. In the 2,300-word message, Cook thanked "the extraordinary talent that powers Apple" and cited milestones such as the launch of the Vision Pro mixed-reality headset, the company's climb to a $4 trillion market capitalisation and its growing services business. He also reflected on the social commitments he took on, from data privacy to climate responsibility, and expressed confidence in his successor, chief operating officer Jeff Williams, who takes over on 1 May. The announcement marks the end of the longest unbroken tenure in Apple's modern history. Cook's steady leadership turned the hardware-bound company into a services-driven powerhouse while he navigated policy pressure, supply-chain disruptions and a string of high-profile legal disputes, including a recent court settlement that averted another import ban on updated Apple Watch models (see our coverage from 20 April). His departure raises questions about strategic direction, especially as Apple pushes into new categories such as augmented reality and generative AI, areas where Cook's cautious, privacy-first approach shaped the product line. Stakeholders will watch how Williams balances continuity with innovation. Early indications will come…
99

As Tim Cook steps down from Apple, data shows how much it beat the stock market by under his leadership

Mastodon +8 sources mastodon
apple
Apple’s board confirmed that Tim Cook will hand over the reins after almost 15 years at the helm, and a fresh analysis shows just how far the company’s shares have outpaced the broader market under his stewardship. From the day Cook succeeded Steve Jobs in August 2011 until his announced departure this week, Apple stock has risen roughly 260 percent, while the S&P 500 has logged a gain of about 70 percent over the same span. The outperformance stems from a relentless expansion of services, wearables and health‑tech, alongside a steady stream of flagship iPhone launches that kept profit margins robust even as the smartphone market matured. The figures matter because they quantify the value Cook added beyond product cycles, reinforcing why investors have rewarded Apple with a market capitalisation that now exceeds $3 trillion. For a company whose brand is synonymous with premium hardware, the shift toward recurring revenue and ecosystem lock‑in has proved a durable growth engine. That track record will set a high bar for the incoming chief executive, who must sustain both the financial momentum and the cultural emphasis on privacy, sustainability and, increasingly, artificial‑intelligence integration. What to watch next are the board’s choice of successor and the strategic priorities they will articulate in the first earnings season without Cook. Analysts will be keen on whether the new CEO will double down on AI‑driven services such as on‑device language models, or pivot toward fresh hardware categories. The rollout of Apple’s next‑generation silicon and the company’s expanding footprint in health data will also be litmus tests for continuity. As we reported on Cook’s farewell letter on 21 April, his exit marks a pivotal moment for Apple’s next growth chapter.
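Those cumulative figures translate into annualized rates. A quick back-of-envelope calculation, assuming roughly 14.7 years between August 2011 and the announcement, puts Apple's implied compound annual growth near 9 percent against under 4 percent for the index:

```python
def cagr(total_gain: float, years: float) -> float:
    """Compound annual growth rate implied by a cumulative fractional gain."""
    return (1.0 + total_gain) ** (1.0 / years) - 1.0

YEARS = 14.7                        # Aug 2011 to the announcement, approximately
apple_annual = cagr(2.60, YEARS)    # ~260% cumulative gain over the span
sp500_annual = cagr(0.70, YEARS)    # ~70% cumulative gain over the span
```

The gap of roughly five percentage points per year, compounded over nearly a decade and a half, is what produces the wide cumulative spread the analysis reports.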
91

Stop Being So Dependent on Your iPhone: Turn It Into a Dumb Phone Instead

Mastodon +7 sources mastodon
apple
Apple users looking to curb their screen‑time now have a step‑by‑step roadmap that turns the flagship iPhone into a functional “dumb phone.” A CNET feature published today outlines how to lock down iOS using native tools—Screen Time limits, Focus modes, App Store restrictions and Guided Access—so the device can make calls, send texts and run a handful of essential utilities while all social‑media, AI assistants and most third‑party apps stay dormant. The guide arrives as the conversation around digital wellbeing intensifies across the Nordics, where average daily smartphone use tops eight hours. By stripping the iPhone of its constant notification stream, users can reclaim attention, reduce data harvested by apps, and lower the risk of inadvertent privacy leaks. The move also sidesteps the growing dependence on large language model (LLM) chatbots embedded in iOS, a concern highlighted in our recent coverage of Apple’s AI integration. Why it matters goes beyond personal habit. A mass shift toward “dumb‑phone” configurations could dent app‑store revenue, pressure advertisers, and force developers to rethink engagement models that rely on push notifications. For Apple, the trend tests the limits of its ecosystem lock‑in: the more users disable services, the less friction they have when switching to alternative hardware. What to watch next is whether Apple formalises this DIY approach. iOS 26.4.2, slated for release next week—a version we previewed on April 20—adds finer‑grained privacy toggles that could make a one‑click “Dumb Mode” feasible. Regulators in the EU and Norway are also probing mandatory wellbeing settings, and early‑stage prototypes from rival manufacturers suggest a broader industry pivot. Keep an eye on Apple’s upcoming WWDC keynote for any official “digital‑detox” features that could turn the concept from a user‑led hack into a built‑in option.
90

Tim Cook to become Apple Executive Chairman; John Ternus to become Apple CEO

Mastodon +7 sources mastodon
apple
Apple announced that longtime chief executive Tim Cook will step down on 1 September 2026 to become executive chairman of the board, while senior vice‑president of hardware engineering John Ternus will assume the CEO role the same day. The transition, detailed in a newsroom release, marks the end of Cook’s 15‑year tenure that lifted Apple’s market value by more than $3.6 trillion and saw the iPhone, Services and Vision Pro reshape the tech landscape. The move matters because it signals a shift from Cook’s operational, supply‑chain‑driven stewardship to a leader whose pedigree is rooted in hardware design. Ternus, who oversaw the development of the latest MacBook Pro, iPad Pro and Apple Watch Series 9, is expected to steer Apple deeper into custom silicon and augmented‑reality hardware, while preserving the services growth that has become a profit engine. Cook’s new position as executive chairman gives him a strategic voice on the board without day‑to‑day management, a structure that could accelerate Apple’s AI ambitions, including the rollout of Apple Intelligence and tighter integration of Vision Pro with its ecosystem. As we reported on 21 April, Cook’s departure was already anticipated, but today’s formal appointment clarifies the succession timeline and the company’s leadership architecture. Investors will watch Apple’s stock reaction and any immediate guidance on product pipelines, especially the next generation of M‑series chips and AI‑focused features. Analysts will also monitor how Ternus balances hardware innovation with the growing services and AI portfolio, and whether Cook’s chairmanship will influence Apple’s stance on regulatory scrutiny in Europe and the United States. The first major test will come at the September product event, where the new CEO is likely to outline his vision for the next era of Apple.
87

Kimi vendor verifier – verify accuracy of inference providers

HN +5 sources hn
inference, open-source
Moonshot AI has released the Kimi Vendor Verifier (KVV) alongside its new K2.5 large‑language model, opening the code on GitHub to let developers check that an inference provider is delivering the model’s advertised accuracy. The verifier runs a suite of reference prompts and compares the outputs against the baseline results published by Moonshot, flagging any deviation that could stem from quantisation, pruning, or mismatched tokenisation in third‑party deployments. The tool arrives at a moment when the open‑source LLM market is fragmenting across dozens of cloud and edge providers that compete on latency and price. While cheaper or faster endpoints are tempting, subtle shifts in model behaviour can undermine downstream applications—from code generation to tool‑calling agents— and skew benchmark scores that vendors use for marketing. By automating precision checks, KVV gives users a “chain of trust” from model download to production inference, echoing recent efforts such as the llmfit command‑line utility that maps models to compatible hardware. For developers, the verifier reduces the risk of silent performance regressions when switching providers or scaling workloads, and it supplies a common yardstick for the community to audit new inference services. For providers, transparent accuracy reporting could become a differentiator, especially as European regulators push for verifiable AI performance in the EU’s sovereign‑cloud contracts awarded earlier this month. What to watch next: Moonshot plans to integrate KVV into its K2.5 API dashboard, allowing real‑time health checks for customers. Industry observers will be looking for adoption signals from major cloud players and for the emergence of similar verification frameworks for other open‑source models. If KVV gains traction, it could set a new baseline for reliability in the rapidly expanding inference‑as‑a‑service ecosystem.
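The verification loop KVV automates can be sketched in a few lines. This is an illustrative sketch under assumed data shapes, not Moonshot's actual code, prompt suite, or baseline format:

```python
# Hypothetical published reference outputs for a small prompt suite.
BASELINE = {
    "2+2=": "4",
    "Capital of France?": "Paris",
}

def run_provider(prompt: str) -> str:
    """Stand-in for a call to a third-party inference endpoint."""
    canned = {"2+2=": "4", "Capital of France?": "paris"}  # simulated drift
    return canned[prompt]

def verify(baseline: dict, provider) -> list:
    """Return prompts whose provider output deviates from the baseline.

    A real verifier would use fuzzier comparisons (logprobs, token-level
    diffs) to separate benign formatting drift from quantisation or
    tokenisation damage.
    """
    deviations = []
    for prompt, expected in baseline.items():
        got = provider(prompt)
        if got.strip() != expected.strip():
            deviations.append(prompt)
    return deviations

flagged = verify(BASELINE, run_provider)
```

Even this toy version shows why exact-match checking alone is too blunt: the simulated provider's lowercase "paris" is flagged alongside genuine regressions, which is precisely the kind of deviation a production verifier must classify rather than merely detect.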
84

I wrote a novel using AI. Writers must accept artificial intelligence – but we are as valuable as ever

Mastodon +6 sources mastodon
Stephen Marche, the veteran columnist and author, has taken his latest experiment public: a full‑length novel drafted with the help of generative‑AI tools. In a Guardian opinion piece titled “I wrote a novel using AI. Writers must accept artificial intelligence – but we are as valuable as ever,” Marche details how he fed plot outlines, character sketches and chapter drafts into large‑language models, then edited the output to imprint his voice. The resulting manuscript, he says, is “readable, coherent and surprisingly nuanced,” and he plans to submit it to a traditional publisher later this year. The essay arrives amid a wave of data showing AI’s rapid penetration into academia. A recent survey cited by Marche found that 86 % of college students use AI writing assistants regularly, suggesting that a sizable minority may be under‑reporting their reliance. For the literary establishment, the piece is a wake‑up call: if students can produce essays and stories with a few prompts, the same technology can scale to full‑length fiction, potentially reshaping how books are conceived, marketed and edited. Industry observers see three immediate implications. First, publishing houses will need to revise acquisition pipelines to evaluate AI‑augmented manuscripts without bias. Second, writers’ unions and copyright bodies are likely to grapple with questions of authorship, royalty splits and moral rights when a machine contributes substantive text. Third, educational institutions may tighten policies on AI disclosure, echoing the broader debate over academic integrity. What to watch next includes reactions from the Authors’ Guild, which is expected to issue guidance on AI‑assisted writing, and any pilot programmes by major publishers experimenting with AI‑driven editorial tools. The next few months could also bring legal challenges over who owns the output of a model trained on copyrighted works. 
As Marche’s experiment shows, the conversation has moved from “if” to “how” AI will coexist with human creativity.
77

Hundreds of Fake Pro-Trump Avatars Emerge on Social Media

Mastodon +6 sources mastodon
Hundreds of AI‑generated avatars posing as pro‑Trump influencers have flooded TikTok, Instagram, Facebook and YouTube in the weeks leading up to the U.S. midterm elections. The accounts, which feature polished, conventionally attractive men and women delivering rapid‑fire commentary on “radical left” policies, the war in Iran, abortion and other hot‑button issues, are indistinguishable from real creators at first glance. Researchers who traced the phenomenon say the avatars are produced by off‑the‑shelf text‑to‑image and voice‑synthesis tools, then scripted with large‑language‑model prompts that mimic the rhetorical style of former President Donald Trump and his supporters. The surge matters because synthetic political personas can amplify partisan messaging, inflate perceived support and manipulate algorithmic recommendation engines. Early surveys cited by the New York Times indicate a measurable share of viewers believe the accounts are genuine, raising the risk of misinformation spreading unchecked. Platforms have responded with mixed speed: TikTok announced a review of “synthetic political content,” while Meta’s policy team is still drafting guidelines for AI‑generated political media. The episode also revives calls in Europe and the United States for clearer disclosure rules on synthetic media, especially ahead of high‑stakes elections. What to watch next includes whether the Federal Election Commission will treat AI‑driven influencer campaigns as coordinated political advertising, and how quickly social‑media firms can deploy detection tools that flag deep‑fake avatars in real time. Researchers expect a wave of similar synthetic accounts targeting other candidates and issues, suggesting the current flood may be the first of a broader, AI‑powered playbook for political persuasion. Monitoring platform policy updates and any legal actions will be crucial to gauge how the digital battlefield evolves before voters head to the polls.
59

The American oligarchy goes hyperscale

Mastodon +6 sources mastodon
meta, openai
American tech giants are racing to build a new wave of hyperscale data centres, a development Mother Jones examines in its latest investigation, "How the American oligarchy went hyperscale." The article shines a light on a secretive contest among Meta, OpenAI, Oracle and other AI powers to out-build one another with ever larger mega-campuses, each site designed to host the petaflops of compute required for next-generation models. The race is fuelled by exploding demand for generative AI, sharp declines in hardware costs and a policy vacuum that leaves regional planning, energy-use standards and antitrust oversight on the sidelines. The consequences reach far beyond corporate rivalry. Hyperscale sites consume megawatts of electricity, often from fossil fuels, swelling the industry's carbon footprint at a time when climate policymakers are pushing for cuts. Concentrating enormous compute capacity in the hands of a few owners deepens market concentration and raises concerns about data sovereignty, algorithmic control and the bargaining position of smaller startups. The article notes that Elon Musk's net worth has climbed past $800 billion, and that a new Tesla pay package could make him the world's first trillionaire, illustrating how personal fortunes and corporate AI ambitions are intertwined. What comes next: the Federal Energy Regulatory Commission is expected to issue new guidance on electricity procurement for data centres, while the European Union's Digital Services Act could spur transatlantic pressure for stricter oversight of AI infrastructure. Congressional committees have signalled interest in hearings on "AI-driven energy demand," and antitrust regulators are examining whether the accumulation of compute assets constitutes a barrier to competition. The coming months will show whether policy can keep pace with the hyperscale push, or whether the AI oligarchy settles into unregulated dominance.
59

Tech (Reviews)

Mastodon +6 sources mastodon
apple
Business Insider has published its annual roundup of the 18 products that resonated most with its readership in 2023, a list that places Samsung at the top of the conversation and underscores Apple’s continued dominance in the premium segment. The compilation, drawn from purchase data across the outlet’s tech guides, shows Samsung smartphones, wearables and home appliances accounting for a third of the selections, while Apple’s iPhone 15 series, MacBook Air and AirPods Pro round out the high‑end tier. Notably, several entries are tied to generative‑AI tools and large‑language‑model (LLM)‑enabled devices, reflecting a surge in consumer interest for AI‑augmented gadgets. The ranking matters because it offers a real‑time barometer of Nordic and global buying patterns, informing retailers, manufacturers and investors about where demand is consolidating. Samsung’s strong showing signals that its aggressive pricing and expanded ecosystem are paying off against Apple’s premium lock‑in, a dynamic that could reshape market share in Scandinavia where price sensitivity coexists with a taste for cutting‑edge features. The presence of AI‑centric products hints that LLM‑driven assistants and smart‑home hubs are moving from niche to mainstream, a trend already echoed in recent coverage of self‑healing neural networks and simulation‑based training tools. Looking ahead, analysts will watch Q1 2024 product launches from both giants for clues on how they will address the AI wave—Samsung’s Galaxy AI suite and Apple’s rumored on‑device LLM chips. Nordic e‑commerce platforms are expected to roll out more sophisticated recommendation engines powered by the same LLM technology highlighted in the list, potentially amplifying the feedback loop between consumer preferences and product development. The next few months should reveal whether Samsung can sustain its momentum or if Apple’s ecosystem will reassert its premium pull.
54

In the US, more and more people are using ChatGPT for their tax returns… the consultations centred on three main themes | Business Insider Japan https://www.yayafa.com/2785

Mastodon +7 sources mastodon
agents, openai
ChatGPT in heavy use for US tax filing – three main questions. OpenAI reported that the number of Americans turning to ChatGPT for help with their tax returns has exploded this filing season, with queries about the process up roughly 400% on the previous year. The rapid growth was identified in internal usage data showing sharp rises in questions about "deductions," "filing status" and "audit risk." Users ask the chatbot to explain which expenses are deductible, how to choose between filing jointly and separately, and whether particular transactions might trigger an IRS audit. The trend matters because it points to a rapid shift in how ordinary filers seek professional advice. By lowering the barrier to entry, AI can make basic tax knowledge more accessible, but it also raises concerns about accuracy and liability. OpenAI has added a disclaimer that ChatGPT is not a certified tax professional and that its answers should be checked against official guidance. The IRS has already issued a statement urging filers to verify AI-generated information, noting that the agency does not endorse any particular tool. Lawyers warn that relying on non-human advice could complicate disputes over miscalculations, while tax-software companies race to build AI into their products to stay competitive. What to watch next includes possible regulatory action from the Treasury Department or the Federal Trade Commission, which could demand clearer disclosures or performance standards for AI-assisted tax help. Analysts will also watch whether major tax-preparation firms such as Intuit and H&R Block introduce conversational agents of their own, and whether the IRS publishes an official API for vetted AI services. The coming months may shape the balance between convenience and compliance in the era of AI-assisted finance.
54

Changes to GitHub Copilot Individual Plans

HN +6 sources hn
copilot
GitHub has rolled out a reshaped pricing structure for its Copilot individual subscriptions, splitting the service into two tiers – Copilot Pro and the newly introduced Copilot Pro+. The change, announced on the company’s blog on 20 April, raises the base price of Copilot Pro from $10 to $12 per month and adds a $20‑per‑month Pro+ option that bundles Copilot Chat, priority access to the latest AI models and an expanded context window of up to 64 k tokens. The move reflects GitHub’s strategy to monetize the rapid evolution of generative‑code assistants while rewarding power users with features that were previously limited to enterprise customers. Pro+ subscribers will be the first to receive updates from the Claude Opus 4.7 model, which promises more accurate completions and better handling of multi‑file refactorings – a capability highlighted in our recent coverage of Copilot’s integration with Claude‑based code generation (see 21 April). For developers in the Nordics, where remote and distributed teams rely heavily on AI‑driven productivity tools, the tiered pricing could influence budgeting decisions and adoption curves. The new plans also tighten licensing rules: individual accounts must now link a verified payment method and can no longer share a single license across multiple machines without purchasing additional seats. Existing users are given a 30‑day window to migrate, after which the legacy “Copilot for Individuals” plan will be retired. What to watch next: GitHub has hinted at a forthcoming “Copilot Studio” beta that will let users run a local LLM with persistent memory, echoing the community‑driven localmind project that surfaced in mid‑April. Additionally, the upcoming GitHub Universe conference in September is expected to reveal whether the Pro+ tier will expand to include fine‑tuning capabilities or tighter integration with Azure AI services. 
Developers should monitor the rollout for any regional pricing adjustments and the impact on open‑source contribution workflows.
51

OpenAI CEO Sam Altman says AI in Hollywood will increase respect for human creators

Mastodon +7 sources mastodon
openai
OpenAI chief executive Sam Altman told Variety that the growing array of generative-AI tools in Hollywood will make audiences "value human creators more, not less." At a media and technology summit in Los Angeles, Altman positioned Sora, the company's video-generation model, as a collaborator rather than a replacement for writers, directors and actors. He argued that AI-assisted storyboarding, visual effects and previsualisation would highlight the human hand in a project, giving audiences clearer recognition of the people behind the idea. The remarks come a year after OpenAI released Sora, which drew protests from the Writers Guild of America, the Directors Guild and several major studios that feared the technology could diminish the value of creative work and muddy questions of copyright. Altman sought to calm those concerns by framing AI as a productivity booster that frees creators from repetitive tasks so they can concentrate on storytelling and performance. If the industry adopts the tools, production timelines could shrink substantially, budgets could be reallocated toward people, and smaller independent studios could gain access to visual-effects capabilities once reserved for big-budget blockbusters. Three key threads bear watching. First, OpenAI's announced pilot programme with Disney-owned studios, due to begin in the third quarter, will test Sora in early concept work and could set standards for wider adoption. Second, ongoing negotiations between the Writers Guild and the major studios may include AI-use provisions defining credit, payment and data rights. Third, regulators in the EU and the United States are drafting guidance on AI-generated content, and legal rulings on attribution could reshape how a "human creator…
50

Third beta of macOS Tahoe 26.5 now available to developers

Mastodon +6 sources mastodon
apple
Apple has released the third beta of macOS Tahoe 26.5, just a week after the second build reached testers. The update can be downloaded via System Settings → General → Software Update, provided beta updates are enabled and a free Apple developer account is linked. Tahoe, Apple's 22nd major operating system and the successor to macOS Sequoia, was unveiled at WWDC 2025. The 26.5 release is the first to include the latest AI-focused toolkit Apple introduced with the 26.2 update: tighter Core ML integration, on-device APIs for query handling and an improved privacy sandbox for generative-AI apps. Early testers report a substantial speed-up in Vision Pro-style image processing and a new "Synced AI Settings" pane that lets users toggle model access app by app. For developers, the beta ships with Xcode 15.3, which adds support for the forthcoming M4 silicon and a new Swift AI library that simplifies model loading and tokenisation. The timing matters because Apple intends macOS Tahoe to be the default platform for local AI processing, as discussed in our recent coverage of Retrieval-Augmented Generation versus Lucene in enterprise support systems. By offering low-latency on-device computation, Apple hopes to keep data-sensitive AI workloads inside the Mac ecosystem, sidestepping cloud-centric alternatives and reinforcing its privacy stance. What to watch: Apple has pointed to a public beta in early May, with a full release ahead of the holiday season. Developers should follow compatibility reports for third-party AI frameworks such as PyTorch Mobile and the upcoming Core ML 9 updates. It will also be worth watching how quickly the major IDE assistants, notably GitHub Copilot and Claude-backed tools, adopt the new AI APIs, since that could shape the next wave of Mac-centric productivity tooling.
44

Apple Sports App Receives Two New Features Across iPhone and CarPlay

Mastodon +6 sources mastodon
apple
Apple has rolled out version 3.10 of its Sports app, adding live‑weather data for Formula 1 Grand Prix events and a new, compact widget that works on both iPhone home screens and CarPlay dashboards. The update, released alongside iOS 18.4, lets users enable a dedicated “Sports mode” in CarPlay settings, placing live scores beside navigation and music without leaving the road. The F1 weather feed pulls real‑time temperature, wind and precipitation forecasts for each circuit, while the smaller widget displays up‑to‑the‑minute scores for a range of leagues, including the upcoming World Cup. The move matters because Apple’s sports offering has long lagged behind third‑party rivals that already provide in‑car scoreboards and race‑specific data. By integrating directly into CarPlay, Apple is tightening the feedback loop between its mobile ecosystem and the vehicle cockpit, a step that could pay dividends as the company eyes broader automotive ambitions. For F1 enthusiasts, the weather overlay offers a practical edge, turning the app into a mini‑pit‑board that can influence strategy discussions even for casual fans. The widget’s reduced footprint also aligns with Apple’s recent push for more flexible home‑screen designs, catering to users who want glanceable information without clutter. What to watch next is whether Apple expands the CarPlay sports experience beyond scores and weather. Analysts expect deeper data feeds—such as live timing, driver telemetry or betting odds—to appear in future releases, and a possible overhaul of the Sports app’s UI to match the new “Sports mode” aesthetic. Adoption metrics will be key: if drivers embrace the feature, Apple could leverage it as a selling point for iOS 18.4‑compatible vehicles and for any future Apple‑branded car projects. Keep an eye on upcoming iOS patches and the next major sports season, when the app’s relevance will be tested in real‑time.
44

Johny Srouji Taking Over as Apple's Chief Hardware Officer as John Ternus Transitions to CEO

Mastodon +6 sources mastodon
applechips
Apple has appointed senior vice‑president Johny Srouji as its new Chief Hardware Officer, a move that coincides with John Ternus’s elevation to chief executive officer. The change was confirmed by a brief statement from Apple’s leadership team and reported by MacRumors on 20 April. Srouji, who has overseen the Apple Silicon program since its inception and shepherded the M‑series chips that now power iPhones, Macs and iPads, will now sit atop all hardware divisions, from the iPhone and Mac to emerging platforms such as CarPlay Ultra and Apple’s augmented‑reality headset. The promotion matters because Apple’s hardware roadmap is the engine of its competitive edge in a market where custom silicon and on‑device AI are becoming decisive differentiators. Srouji’s deep expertise in chip design and his recent focus on AI accelerators suggest the company will double down on integrating advanced machine‑learning capabilities across its product line. The appointment also provides continuity after a week of high‑profile reshuffling: Tim Cook moved to executive chairman and Ternus took the helm as CEO, as we reported on 21 April. By placing the silicon veteran in charge of all hardware, Apple signals that it intends to keep its in‑house chip strategy intact despite growing pressure from rivals and supply‑chain volatility. What to watch next is how quickly Srouji’s expanded remit translates into tangible product updates. Analysts will be looking for announcements on the next generation of M‑series chips, a possible AI‑focused “Neural Engine” upgrade for iPhone, and the rollout of CarPlay Ultra features that could leverage new on‑device processing power. Further executive moves—particularly within Apple’s AI research teams—could also indicate whether the company is preparing a broader push into generative‑AI services. The next Apple hardware event, slated for the fall, will be the first real test of Srouji’s influence on the company’s silicon‑driven future.
42

Soul Player C64 – A real transformer running on a 1 MHz Commodore 64

HN +5 sources hn
A small team of hobbyist developers has pushed the limits of retro hardware by getting a genuine transformer‑style language model to run on a 1 MHz Commodore 64. The project, dubbed **Soul Player C64**, ships as a 25 k‑parameter transformer that can be loaded onto a .d64 disk image and executed either in the VICE emulator or on a real C64 equipped with a 1541 floppy drive. The code relies on aggressive quantisation, 8‑bit integer arithmetic and hand‑optimised 6502 assembly loops to squeeze inference into the machine’s meagre 64 KB of RAM and 1 MHz clock speed. Why it matters goes beyond novelty. As we reported on 20 April in “The Trouble with Transformers”, the energy and compute appetite of modern LLMs is a growing concern. Soul Player C64 shows that, with extreme model pruning and hardware‑aware design, useful neural inference can be squeezed onto devices that consume a fraction of the power of today’s GPUs. It also validates the claim from the same day’s “Local LLMs are actually good now” blog that small, locally‑run models can be practical, opening a pathway for ultra‑low‑power AI in embedded or off‑grid scenarios. The demonstration is a proof‑of‑concept rather than a production‑ready tool, but it raises several questions for the community. Will the approach scale to larger vocabularies or multimodal tasks, or is 25 k the practical ceiling for a 6502‑class CPU? Can similar tricks be applied to other vintage platforms, turning them into educational sandboxes for AI fundamentals? The developers plan to publish a benchmark suite comparing inference latency on the C64, a modern laptop and a low‑end ARM board, and they have opened the source repository for contributors to experiment with pruning strategies and custom kernels. The next few weeks should reveal whether this retro‑AI stunt sparks a broader movement toward “micro‑transformers” on ultra‑constrained hardware.
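The aggressive quantisation described above, collapsing floating‑point weights into 8‑bit integers so that inference needs only integer arithmetic, can be sketched in a few lines. This is a generic illustration of symmetric int8 quantisation, not code from the Soul Player C64 project; the function names and the toy weight values are hypothetical.

```python
# Minimal sketch of symmetric 8-bit weight quantisation, the kind of
# trick that lets a tiny transformer run on integer-only hardware.
# Illustrative only -- not the actual Soul Player C64 implementation.

def quantize_int8(values):
    """Map floats onto int8 range [-127, 127] with one shared scale."""
    scale = max(abs(v) for v in values) / 127.0 or 1.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def int8_dot(qa, qb, scale_a, scale_b):
    """Dot product done entirely in integers, rescaled once at the end.
    The accumulator stays small enough for a 32-bit register."""
    acc = sum(a * b for a, b in zip(qa, qb))
    return acc * scale_a * scale_b

weights = [0.5, -1.27, 0.03, 0.9]   # toy layer weights
acts = [1.0, 0.5, -0.25, 2.0]       # toy activations
qw, sw = quantize_int8(weights)
qa, sa = quantize_int8(acts)
approx = int8_dot(qw, qa, sw, sa)
exact = sum(w * a for w, a in zip(weights, acts))
print(f"int8 result: {approx:.4f}  float result: {exact:.4f}")
```

On a 6502 the same idea would be expressed as hand‑written multiply‑accumulate loops, but the principle is identical: all per‑element math happens in 8‑bit integers, and the floating‑point scales are folded in only once per dot product.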
41

Business Insider

Mastodon +6 sources mastodon
apple
Samsung’s next flagship, the Galaxy S26 Ultra, has sparked a quiet debate among early‑leakers, while Business Insider’s latest hands‑on review has put the iPhone 13’s “Ceramic Shield” under the microscope. The outlet’s test‑run shows the iPhone’s glass can shrug off everyday scratches that would mar most smartphones, yet a sharp key or a sand‑laden pocket still leaves a mark. The report, published on Business Insider’s tech guide, concludes the screen is “surprisingly scratch‑resistant, but not invincible.” The Galaxy S26 Ultra rumor, circulating on Japanese tech forums, suggests the device may ship without a built‑in protective layer, prompting some to advise “keeping it hidden from the captain” – a tongue‑in‑cheek warning that the phone could benefit from a screen protector despite Samsung’s usual emphasis on durability. If true, the contrast with Apple’s reinforced glass could shift consumer expectations in the premium segment, where many users now forgo protectors to preserve a pristine look. Why it matters is twofold. First, durability directly influences purchase decisions in a market where flagship prices hover above €1,200. A proven scratch‑resistant surface can justify a premium, while perceived fragility may drive buyers toward competitors or add accessory spend. Second, the narrative feeds a broader industry trend: manufacturers are betting on advanced glass technologies—Apple’s ceramic‑infused sapphire blend and Samsung’s rumored “Ultra‑Shield” polymer—to differentiate products without inflating thickness. What to watch next includes Samsung’s official launch details, which should confirm whether a protective coating will be standard or optional. Independent drop‑and‑scratch tests from consumer labs will likely follow, offering a side‑by‑side comparison with Apple’s claims. Finally, the accessory market will gauge demand for third‑party protectors, especially if the S26 Ultra’s screen proves less resilient than its predecessor. The coming weeks could reshape how durability is marketed and priced across the flagship arena.

All dates