A curated selection of articles on tech and law from across the web, brought together by Concatena. Updated regularly as new reading is tagged. Subscribe via RSS.

Last updated: 3 May 2026 at 14:03 — 21 articles

Final storage and access technologies guidance published

ico.org.uk

cookies, ico, ico-guidance, online-privacy, pecr, sats, storage-and-access-technologies, surveillance

The ICO has today published its finalised guidance on Storage and Access Technologies (SATs), alongside an update on its online tracking strategy.

Concatena says

Our Take: I’ve not had a chance to fully read into this yet, but my initial big takeaway is the uphill battle that the ICO has in trying to convince people that terms like SATs mean the same thing as what they understand when they hear “cookies”. I know how they feel – it’s driven me mad for years – but sometimes you need to meet people where they are. I’m slightly concerned about the references to consulting with the online advertising industry to help shape future initiatives – I’d really like to see the third sector, or indeed businesses who are reliant on advertising revenue but also value their customers, consulted and able to pitch in here too. Final thought is about how it’s intended that “demonstrably low privacy risks” are quantified. In 2004 I remember the then commissioner, Richard Thomas, warning that we were sleepwalking into a surveillance society. Whilst the current commissioner has stepped away for a while, I hope the ICO still remembers that report.

Your Takeaway: Nothing really to see here, yet – but if online tracking or advertising is important to your business, or to your ethics, it’s worth a closer read – and maybe getting involved in the ongoing discussions.

Highlights

The guidance, which covers how the Privacy and Electronic Communications Regulations (PECR) (and where relevant, the UK GDPR) apply to cookies, tracking pixels, device fingerprinting and similar technologies (‘storage and access technologies’), incorporates updates following two consultations and changes introduced by the Data (Use and Access) Act. It includes new examples and points of clarification to help organisations comply with the law. It reflects the law as it currently stands, and sits separately from our ongoing work to review regulation 6 of PECR for online advertising purposes, on which further updates will follow in the coming weeks.

We have today published our finalised guidance on Storage and Access Technologies (SATs), alongside an update on our online tracking strategy.

Read original →

Online tracking strategy update – April 2026

ico.org.uk

ad-tech, consent-mechanisms, consent-or-pay, cookies, online-privacy, pecr, sats, storage-and-access-technologies, surveillance

At the start of 2025, we published our online tracking strategy setting out our plans to give people meaningful choice and control over how they are tracked online, and provide businesses with certainty to innovate responsibly.

Concatena says

Our Take: We’ve commented on the SATs guidance in a separate post, but this wider summary from the ICO is worth a read too. I still don’t love the focus on consent for “cookies/SATs” (and don’t even get me started on consent-or-pay) – I don’t see how the average user can possibly understand the network that lies behind that little button – but that’s the legal landscape we’re in.

Your Takeaway: As with the SAT guidance, there’s nothing requiring action here yet (unless you didn’t check your cookie banner compliance last year… in which case, I’d recommend a look now). Still, there are ongoing discussions here worth keeping on top of – and contributing to as well.

Highlights

After careful consideration and review of our concerns, we concluded that further action would not be appropriate after observing positive improvements from the platforms as compared to their historical processing practices. This was communicated to the platforms in January of this year.

We assessed key areas of concern, including: the validity of consent for the data processing carried out by these platforms and their lawful basis relied upon for processing.

We have driven improvements in the standard products offered to website owners by working directly with key cookie banner vendors responsible for the largest market shares across the UK’s most popular websites. For example, OneTrust and Usercentrics have developed UK-specific templates aligned with our guidance. This is in addition to a range of other improvements made by these platforms and changes implemented by Sourcepoint and Inmobi to enhance their existing templates and guidance. This engagement has raised the bar across a significant portion of the market and made it easier for online businesses to offer fair, compliant choices to users.

We committed to reviewing cookie banners on the top 1,000 websites in the UK. As we updated in December, our action has seen significant changes. It has lowered the prevalence of cookies being placed before a user has expressed their choice and has driven an increase of clear reject options on consent banners, making it easier for users to control how they are tracked.

Next month, we will be publishing our advice to government on where PECR requirements to obtain consent for the use of storage and access technologies for online advertising purposes could be removed. We understand that the government is exploring whether to create an exception or exceptions for some online advertising purposes, using secondary regulation-making powers under regulation 6A of PECR. This work will help inform government policy-making.

Last year, we opened a call for views on our review of regulation 6 PECR where the use of storage and access technologies for advertising may pose demonstrably low privacy risks.

Read original →

Adobe’s legal chief calls for creator protection as policymakers and tech companies reframe copyright in the era of AI

Craig Hale

ai-detecting, artificial-intelligence, copyright, creativity, creators, global, intellectual-property

While the world establishes copyright for AI-generated assets, Adobe’s legal chief calls for greater creator protection and asset verification.

Concatena says

Our Take: Adobe’s legal chief urges a pragmatic path for AI regulation – don’t tear up copyright law, but clarify it and protect creators whose work fuels AI. I hate to say it, but I agree – let’s focus on the fundamentals, but importantly let’s also think about whether the means for enforcing individual contributors’ rights is accessible in this new world, and if not, whether there ought to be a supportive regime which regulates bad actors.

Your Takeaway: IP is always something to keep an eye on. The article talks about creator protections and provenance tools, and they are worth looking at and understanding; but it’s unclear how much control they truly give. Make sure you’re not cutting corners in your own IP compliance with third party materials at the same time as protecting your output.

Highlights

The difficulty at the moment is that regions like the US, EU and UK are pushing their own goals. "It’s a fallacy to think there would be a universal standard that would apply globally," Pentland said. "but we can dream."

When asked about watermarking, Pentland rejected visible marks as the default solution, favoring options like metadata or QR-style verification to preserve the integrity of an artist’s work.

To date, the ‘Big Five’ camera makers (Fujifilm, Sony, Canon, Nikon and Leica) and some Android manufacturers (Google Pixel and Samsung Galaxy) have implemented Content Credentials, as have a number of popular platforms like LinkedIn, YouTube, Meta and TikTok.

Adobe sees this type of verification protecting consumers against threats like deepfakes, enabling users to verify authenticity.

For Adobe, this means pushing Content Credentials, which the company describes separately as "a durable, industry-standard metadata type that acts like a digital nutrition label for content," in a bid to create verifiable content trails.

In 2025, the US Copyright Office granted protection to an image that was created with AI assistance, making this the first time anyone has ever been granted copyright protection for AI-generated work.

"We don’t want it to stifle innovation," she said, "but at the same time, we can’t leave it completely unchecked."

At the same time, Pentland also advocated for tech companies to get involved – not to redefine copyright law, but to maintain authenticity and protect creators in this era of AI assistance.

Speaking with *TechRadar Pro* in an exclusive interview at Adobe Summit 2026, the company’s Chief Legal Officer, Louise Pentland, urged policymakers to resist radical changes, and for courts and companies instead to focus on a more pragmatic approach.

Read original →

Will human minds still be special in an age of AI?

Tom Griffiths

artificial-intelligence, ethics, human-in-the-loop

Human intelligence is shaped by our limits, like short lives and simple communication, which makes us special. AI can do many tasks but works differently and faces other challenges. Instead of rivals, humans and AI should be seen as different minds with unique strengths.

Concatena says

Our Take: This article argues that AI isn’t a single linear upgrade on human minds – it’s a different kind of intelligence shaped by different limits and experiences, so claims that machines will simply “overtake” us are misleading. I think there’s another point here too – we remove an important experience and learning opportunity from humans when we automate everything.

Your Takeaway: When evaluating or deploying AI, focus on the problem you’re trying to solve, whether it’s one which can be helped by automation and customisation from an LLM, and what the extent of that help should be. Design your processes to make sure that you’re putting humans at the right point of the journey – not just as a box-tick exercise at the end, but actually contributing to the process, supported, where appropriate, by these tools.

Highlights

This isn’t the only place where AI runs into difficulties. Imagine you are assisting a pharmacist. They need a drug with a concentration of 785 parts per million (ppm). Two test tubes are available: one containing 685 ppm and the other 791 ppm. Your task is to determine which test tube provides the most similar concentration to your required dosage. Hopefully you would pick 791 ppm. However, some of the time even leading AI systems pick 685 ppm. Why? Because the artificial neural networks used to build AI systems tend to blur things together. When there are two possible answers, they choose something in between. The number 785 can be represented as either a string of digits (“7”, “8”, and “5”) or as a quantity (seven-hundred-and-eighty-five). If it is a string, 785 is more similar to 685 – they are just one digit apart. But if it is a quantity, then it is more similar to 791. Mixing up these two answers can have significant consequences.
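The string-versus-quantity ambiguity above can be made concrete with a toy comparison. This is purely an illustration of the two notions of “closeness” the article describes, not a model of how any AI system actually represents numbers:

```python
# Toy illustration: which of 685 and 791 is "most similar" to 785
# depends entirely on the representation you choose.

def char_diff(a: str, b: str) -> int:
    """Count differing characters between two equal-length digit strings."""
    return sum(x != y for x, y in zip(a, b))

target = 785
options = [685, 791]

# Treat the numbers as strings of digits: 785 vs 685 differ in one
# character, 785 vs 791 differ in two.
by_string = min(options, key=lambda n: char_diff(str(target), str(n)))

# Treat them as quantities: |785 - 685| = 100, |785 - 791| = 6.
by_quantity = min(options, key=lambda n: abs(target - n))

print(by_string)    # 685 - nearest as a digit string
print(by_quantity)  # 791 - nearest as a quantity
```

The pharmacist needs the quantity answer (791 ppm); a system that blurs the two representations together can land on the string answer instead.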

Here’s a simple example. How many letters are in this sequence: aaaaaaaaaaaaaaaaaaaaaaaaaaaaa? For a human, it’s not particularly difficult to answer – you can just count them up. For an AI system, it’s trickier. They are constrained by how they represent language and how they are trained. They like to break up words into parts (called “tokens”), which can make it hard for them to answer questions about spelling. And they tend to favour sequences of tokens that appear more often in their training data as answers. We found that OpenAI’s GPT-4 model, which was hailed as showing “sparks of artificial general intelligence”, was more likely to correctly answer this question when given 30 letters rather than 29. Why? Because the number 30 is written down more often than the number 29.

Human intelligence is a response to our limitations. To make the most of our lives, we have an amazing ability to learn from limited experience. Yes, AlphaGo can beat the best human go players, but it was trained on many human lifetimes of games. Yes, ChatGPT can hold a reasonable conversation, but it’s drawing on thousands of years of language. No AI system can produce sentences with the creativity of a human five-year-old when exposed to the same amount of data.

AI systems face none of these constraints. They can process more data than any human might see in a lifetime. They can expand their capacity by using more computers. And they can easily share what they see and learn with other machines.

Humans are no different. Our minds have been shaped by our biology. We only live for a few decades and have to learn everything we are going to learn and do everything we are going to do in that short time. All that learning and doing will be carried out at the direction of a kilogram or so of neurons trapped inside our bony skulls. We can only share our thoughts with others by making noises with our mouths or tapping and wiggling our fingers.

Read original →

English councils to trial Google AI tool to speed up planning decisions

Chris Smyth

artificial-intelligence, future-of-work, human-in-the-loop, public-sector, training

English councils will start using a new AI tool from Google to help speed up building project decisions. The AI will give recommendations, but humans will make the final call. The government hopes this will make planning faster and support building more homes.

Concatena says

Our Take: Using AI to generate efficiencies could really support public services to get more done, and to be more consistent. Human in the loop is vital – but you need to ensure that those humans are empowered to really BE in that loop and to contradict the machine. “Computer says no” can be very difficult to override…

Your Takeaway: Make sure that any humans in the loop using LLM-powered systems have appropriate training and understanding of their outputs, so that the system can support *their* critical thinking, not outsource it.

Highlights

Under the programme, humans will make the final decisions with AI providing a recommendation. For more complex applications, the AI tool will probably give officials a framework for decisions rather than a definitive answer.

“There is a risk that in the push to harness efficiencies and insights, planning’s decision-making systems are redesigned to work well with AI, and not for optimal outcomes. There’s no value in processing applications more quickly if the developments that follow are low quality.”

Recommendations on whether to grant or refuse building projects will be generated using a custom AI system — the Augmented Planning Decision Tool — before being signed off by council officers.

Planning decisions in England will for the first time be made with the help of Google-built AI starting this month, in a pilot ministers say will speed up approvals.

Read original →

Mathematicians Claim Significant Discovery Using ChatGPT

Frank Landymore

artificial-intelligence, llm, maths

A young man named Liam Price used ChatGPT to solve a difficult math problem that had puzzled experts for over 60 years. Experts say the AI found a new way to approach the problem, but humans had to fix its mistakes. This breakthrough shows AI might help solve tough math questions, but caution is still needed.

Concatena says

Our Take: Sounds amazing. But then I also remember this: https://www.psychologytoday.com/gb/blog/understanding-suicide/202511/chatgpt-made-him-delusional

Your Takeaway: LLMs can do amazing things. They can also do dumb things. And even the amazing things need your help.

Highlights

“The raw output of ChatGPT’s proof was actually quite poor. So it required an expert to kind of sift through and actually understand what it was trying to say,” Jared Lichtman, a mathematician at Stanford University whose doctoral thesis centered on one of Erdős’s conjectures, told *SciAm*.

Still, it required humans to apply the finishing touches.

Earlier this month, 23-year-old Liam Price shared a solution to one of the so-called Erdős problems, a series of famously abstruse math conjectures left behind by the Hungarian mathematician Paul Erdős. While some of these conjectures have gotten the better of savants in the field, Price, who has no advanced math degree, seemingly stumbled on a solution for one of them by simply prompting GPT-5.4 for an answer.

Did ChatGPT just solve an arcane math problem that’s foiled mathematicians for over sixty years? Some leading experts say yes, *Scientific American* reports.

Read original →

Usage-based pricing killing your vibe – here’s how to roll your own local AI coding agents

Tobias Mann and Thomas Claburn

artificial-intelligence, charging-models, commercial

Usage-based pricing for AI coding tools is becoming expensive and restrictive. This article shows how to run local AI coding agents like Claude Code, Pi Coding Agent, and Cline to avoid those costs. Local models work well for small projects but may need human approval to avoid mistakes.

Concatena says

Our Take: I’m not necessarily encouraging you to roll your own here, but it is worth being aware of this business model change – and the fact that from the get-go the definition of a token as a metric has been less than clear and open.

Your Takeaway: If you’re reliant on third party LLMs, remember to account for the risk of them changing their measurement metrics and charging – it’s been on the cards for a while.

Highlights

Over the past few weeks, we’ve seen Anthropic toy with dropping Claude Code from its most affordable plans while Microsoft has skipped testing the waters and moved GitHub Copilot to a purely usage-based model. The whole debacle got us thinking. Do we even need Anthropic or OpenAI’s top models, or can we get away with a smaller local model? Sure, it might be slower, less capable, and a little more frustrating to work with, but you can’t beat the price of free… Well, assuming you’ve already got the hardware that is.

Read original →

EU and UK competition rules updated around tech licensing

Out-Law from Pinsent Masons

block-exemptions, competition, data-licensing

New competition rules governing technology licensing agreements have now taken effect in both the EU and UK.

Concatena says

Our Take: I’ll be honest, I’ve not fully digested this. Competition law takes a lot of brain power. But I do want to dig some more into the new data licensing elements when I get the chance – I think this is where regulators need to be really thoughtful.

Your Takeaway: Depends on who you are – commercial lawyers, make sure you have at least an understanding of these changes. Small businesses, you can probably scroll on by!

Highlights

The provisions of the new UK TTBEO are for the most part in alignment with those of the TTBER.

Data licensing agreements are increasingly common, but they were not covered under the 2014 TTBER and guidelines.

New guidelines on data licensing

Clearer market share thresholds

A one-year transitional period, until 30 April 2027, applies under both the EU and UK regimes for existing technology transfer agreements that comply with the old TTBER requirements but not the new rules. New technology transfer agreements implemented from today must immediately comply with the new rules.

In the UK, a new Technology Transfer Agreements Block Exemption Order (TTBEO) – which was subject to separate review and consultation by the UK government and the Competition and Markets Authority (CMA) – also enters into force today, 1 May. The TTBEO replaces the 2014 TTBER which was “assimilated” into UK national law following Brexit. The CMA is currently consulting on draft new guidance for the UK TTBEO regime.

In the EU, a revised Technology Transfer Block Exemption Regulation (TTBER) and revised Technology Transfer Guidelines (‘the guidelines’) enter into force today, 1 May. The revisions, which replace the 2014 versions, follow a four-year review by the European Commission into the functioning of the 2014 TTBER and related guidelines and aim to address concerns raised from a wide range of stakeholders.

Read original →

AI agents can bypass guardrails and put credentials at risk, Okta study finds

Computerworld

agentic-ai, artificial-intelligence, research, security

An AI agent that revealed sensitive data without being asked. An agent that overruled its own guardrails. Another that sent credentials to an attacker via Telegram, because it forgot it wasn’t supposed to do so after a reset.
It’s no secret that AI agents have huge potential, balanced by equally big risks. What’s becoming apparent, however, is how quickly agentic systems can veer wildly off course and start exposing critical information under real-world conditions.
A look at just how easily this can happen emerges from Phishing the agent: Why AI guardrails aren’t enough, a report on tests conducted by cloud identity and access management (IAM) company Okta Threat Intelligence, which uncovered all of the problems cited above, and more.
Their research focused on OpenClaw, a model-agnostic multi-channel AI assistant which has seen explosive growth inside enterprises since appearing in late 2025.
The Telegram hack
In common with the growing list of rival agents, OpenClaw is only as useful as the access it is given to files, accounts, browsers, network devices, and, most significant of all, credentials.
One test conducted by Okta assessed how easy it would be to trick OpenClaw running Claude Sonnet 4.6 into handing over an OAuth token. This shouldn’t be possible; the LLM should refuse this request. However, what might have held true when prompting Claude as a chatbot quickly fell apart when it was accessed through OpenClaw.
The test assumed that a user had given OpenClaw full access to their computer, that they regularly controlled the agent over Telegram, and that their Telegram account had been hijacked.
First, the attacker instructed the agent via Telegram to retrieve an OAuth token, but to only display it in a terminal window on the computer. Claude Sonnet’s guardrails would prevent it from copying the token, however, the testers were able to reset the agent, causing it to forget it had displayed the token in the terminal window.
At that point, Okta said in i…

Concatena says

Our Take: It might save some time, but you don’t need to be hugely imaginative to come up with scenarios where agentic AI could cause some really fundamental problems.

Your Takeaway: BE CAREFUL – if it seems too good to be true, it might be. These tools are so easy to use, but it’s really worthwhile having at least a basic understanding of what they CAN do if you’re going to use them, so you can protect yourself.

And let’s start by NOT giving tools like OpenClaw full access to your computer…

Highlights

Agents are only the latest example of a technology that is being deployed faster than it can be secured, Kirk observed. “Much of AI right now is defying security gravity,” he said. “But there are ways to use agents safely and keep credentials out of their reach, which is the only safe way to use them.”

“The agents are prompted to be as helpful as possible by default, a characteristic that poses particular concerns when it comes to credentials and tokens,” said Kirk.

Agentic AI is really two things: a powerful orchestration system coupled to one or more highly-capable LLMs. What an agent *isn’t* is a simple interface, and it must be viewed as a separate system capable of autonomous, unpredictable reasoning.

The test assumed that a user had given OpenClaw full access to their computer, that they regularly controlled the agent over Telegram, and that their Telegram account had been hijacked.

A look at just how easily this can happen emerges from *Phishing the agent: Why AI guardrails aren’t enough*, a report on tests conducted by cloud identity and access management (IAM) company Okta Threat Intelligence, which uncovered all of the problems cited above, and more.

It’s no secret that AI agents have huge potential, balanced by equally big risks. What’s becoming apparent, however, is how quickly agentic systems can veer wildly off course and start exposing critical information under real-world conditions.

An AI agent that revealed sensitive data without being asked. An agent that overruled its own guardrails. Another that sent credentials to an attacker via Telegram, because it forgot it wasn’t supposed to do so after a reset.

Read original →

Does Your AI Agent Need a VPN? The Company Behind Norton and Avast Thinks So

Ajay Kumar

agentic-ai, artificial-intelligence, vpn

You might use a VPN yourself, but have you considered giving one to your AI agent? It might be more important than you think.

Concatena says

Our Take: Some are looking to ban VPNs, whilst others are giving them to AI Agents… Back to whack-a-mole for services who are trying to stop AI agents from clogging up their processes.

Your Takeaway: If your service distinguishes between human and agent, will VPN use affect that process? Or could your agent benefit from its own VPN?

Highlights

"Perhaps most importantly, your ISP can’t distinguish between your own internet traffic and that of your autonomous AI agent," said Tomaschek. "But with this integration, as well as with Windscribe’s, the VPN encrypts the agent’s traffic as well, so basically you’re protected from whatever your agent might autonomously get up to on the internet."

If you use OpenClaw, ChatGPT or one of the many other LLMs with access to the internet, your autonomous AI agent can now take advantage of the same privacy and security features.

"Using a VPN with an LLM can provide several advantages, such as keeping your identity private. Your internet provider won’t be able to see your AI agent’s activity, or that you’re using an AI agent," said Moe Long, CNET senior editor.

Read original →

Study: AI models that consider users’ feelings are more likely to make errors

Kyle Orland

artificial-intelligence, human-interface, llm, research

AI models tuned to be warmer and more empathetic often make more mistakes than original models. These warmer models tend to prioritize making users feel good over giving correct answers, especially when users share emotions like sadness. Researchers warn that choosing between a friendly AI and an accurate AI is important for safe and trustworthy use.

Concatena says

Our Take: The law of unintended consequences strikes again – and why tech management and parenting have so much in common…

Your Takeaway: When you’re defining how you want an AI agent to act, remember it’s going to take your instructions very literally – and you might not like the consequences. Does this have an impact on products you ship or products you use that incorporate AI – particularly if the people training the product may have a different worldview to those using it?

Highlights

In a new paper published this week in Nature, researchers from Oxford University’s Internet Institute found that specially tuned AI models tend to mimic the human tendency to occasionally “soften difficult truths” when necessary “to preserve bonds and avoid conflict.” These warmer models are also more likely to validate a user’s expressed incorrect beliefs, the researchers found, especially when the user shares that they’re feeling sad.

In human-to-human communication, the desire to be empathetic or polite often conflicts with the need to be truthful—hence terms like “being brutally honest” for situations where you value the truth over sparing someone’s feelings. Now, new research suggests that large language models can sometimes show a similar tendency when specifically trained to present a “warmer” tone for the user.

Read original →

Hackers are actively exploiting a bug in cPanel, used by millions of websites

Zack Whittaker

breach, security, vulnerability

A serious bug in cPanel software lets hackers take full control of websites and servers. Many web hosting companies have fixed the issue, but users must update their systems quickly to stay safe. Experts warn that the vulnerability is being actively exploited and could affect millions of sites worldwide.

Concatena says

If you’re using cPanel, make sure you’re patched!

Highlights

cPanel and WHM are two software suites used for managing web servers that host websites, manage emails, and handle important configurations and databases needed to maintain an internet domain. The two suites have deep-access to the servers that they manage, allowing a malicious hacker potentially unrestricted access to data managed by the affected software.

The bug allows hackers to hijack and take full control of the servers running the affected software, which is thought to be used by tens of millions of website owners around the world.

Security researchers are sounding the alarm on a newly discovered vulnerability in the widely used web server management software cPanel and WebHost Manager (WHM).

Read original →

Meta cuts contractors who reported seeing Ray-Ban Meta users have sex

Scharon Harding

artificial-intelligence, content-moderation, global, meta, surveillance

Meta ended its contract with Kenyan firm Sama after workers reported seeing private and explicit videos recorded by Ray-Ban Meta glasses. Sama denies failing to meet standards and says it was not warned about any issues. The situation has raised privacy concerns and led to investigations and a class-action lawsuit against Meta.

Concatena says

Consider the full supply chain when looking at the ethics of a product – and remember, what feels like automated magic is actually a person behind the curtain more often than you might expect.

Highlights

BBC reported that Sama workers believe Meta ended the contract because workers spoke out about seeing Ray-Ban Meta-shot footage of people performing personal acts, like changing their clothes, having sex, and using the toilet.

A Meta spokesperson told BBC that Meta “decided to end our work with Sama because they don’t meet our standards.” Ars Technica reached out to Meta asking how, specifically, Sama failed to meet Meta’s expectations and will update this article if we hear back. Ars has also reached out to Sama.

In February, numerous workers from a company that Meta contracted to perform data annotation for Ray-Ban Meta reported viewing sensitive, embarrassing, and seemingly private footage recorded by the smart glasses. About two months later, Meta ended its contract with the firm.

Read original →

Spotify rolls out ‘Verified’ badge to distinguish human artists from AI

Agence France-Presse

artificial-intelligence, spotify

Spotify will add a green "Verified by Spotify" badge to show which artists are real humans, not AI creations. This badge helps listeners trust the music and appears only on profiles that meet Spotify’s authenticity rules. The change comes as many AI-generated songs flood streaming platforms, causing concern in the music industry.

Concatena says

This is an interesting angle – it looks like Spotify just wants to make sure that there is a human FACE to the music, not necessarily that the music wasn’t created by AI?

Highlights

Spotify on Thursday unveiled a new verification system designed to help listeners distinguish human musicians from AI-generated content, as people flood streaming platforms with a growing volume of synthetic tracks made with artificial intelligence.

The initiative arrives amid mounting concern across the music industry over AI-generated content overwhelming streaming catalogues.

The company said more than 99% of artists that listeners actively search for will be verified at launch, representing hundreds of thousands of musicians spanning genres and geographies.

To earn verification, artists must demonstrate sustained listener engagement over time, comply with Spotify’s platform rules and show signs of a genuine presence both on and off the platform, such as concert dates, merchandise and linked social media accounts.

Read original →

Legal AI startup Legora hits $5.6 billion valuation and its battle with Harvey just got hotter

Anna Heim

artificial_intelligencelegal-practice

Legora is a legal AI startup valued at $5.6 billion and backed by Nvidia and other investors. It competes closely with Harvey, another legal AI company valued at $11 billion, as both expand globally. The rivalry is intense, with big marketing efforts and a focus on applying AI to reshape the legal industry.

Concatena says

I’d be so interested in anyone’s real-life experience of these tools. So far all I’ve seen is hype, and confused expressions on the faces of lawyers who don’t really seem to know what they’re supposed to do with it.

Highlights

Alongside Atlassian and other new financial investors, NVentures joined Legora’s cap table as part of a $50 million Series D extension that comes a month after the startup’s $550 million Series D.

Leveraging AI to help lawyers streamline their work, the Swedish-born legal tech startup is competing with U.S. player Harvey.

Nvidia has laid a new brick in its AI empire. NVentures, its corporate VC fund, has backed Legora, reportedly its first legal AI investment.

Read original →

Firefox maker torches Google for building Prompt API into browser

Thomas Claburn

artificial_intelligencebrowsersgooglemozillaweb-standardswww

Mozilla opposes Google’s new Prompt API because it may limit web openness and favor Google’s AI model. They worry it forces developers to follow Google’s rules, hurting fairness and interoperability. Google says the API encourages innovation, but tests show its AI often performs poorly.

Concatena says

There is a very real risk for everyone of AI being built in by the back door, even when a product doesn’t appear to use AI. Due diligence on software is getting very difficult.

Highlights

"The core problem is interoperability," he said. "Prompts are tightly coupled to models; developers will inevitably tune to the quirks and policies of whatever model they’re building against."

"This seems like a bad direction for an API on the web platform, and sets a worrying precedent for more APIs that have [browser]-specific rules around usage," he said.

Perhaps more significantly, Archibald notes that using the Prompt API requires agreeing to Google’s Generative AI Prohibited Uses Policy, which prohibits activities that are not necessarily illegal, like generating "disturbing" content.

First, he worries that Google’s own Nano model will become the default and that developers will standardize on it in an effort to make the non-deterministic responses of an AI model more predictable. That tendency, he argues, will create pressure for Apple and Mozilla to license Nano, for the sake of a common user experience.

Mozilla’s concern, as articulated by Archibald, has to do with what the Prompt API means for the web, not to mention Google’s justification for deployment.

Various vendors like OpenAI and Perplexity have shipped browsers that embed access to remotely hosted AI models. Mozilla itself is testing an AI-based Smart Window in Firefox and it’s developing tools for AI model scaffolding.

The Prompt API, as Google describes it, "gives web pages the ability to directly prompt a browser-provided language model." It provides a way to send natural language instructions to Google’s Gemini Nano model, which is small enough to be downloaded for local inference through Chrome.

"We continue to oppose this API, and feel it has severe negative consequences to the interoperability, updatability, and neutrality of the web platform," said Archibald.

Jake Archibald, Mozilla web developer relations lead, articulated the org’s concerns in a GitHub discussion of the API, which provides a standard way to send and receive prompts and responses from a local machine learning model.
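To make the interoperability worry concrete, a page using the proposed API might look roughly like this sketch. The interface shape here is an assumption based on Google’s public explainer (`LanguageModel.availability()`, `LanguageModel.create()`, `session.prompt()`); the names may change, the interface declarations and the `summarise` helper are illustrative only, and nothing here is a web standard:

```typescript
// Assumed shape of the proposed Prompt API, per Google's explainer.
// These declarations are illustrative, not an official type definition.
interface LanguageModelSession {
  prompt(input: string): Promise<string>;
}
interface LanguageModelStatic {
  availability(): Promise<"unavailable" | "downloadable" | "available">;
  create(opts?: { temperature?: number; topK?: number }): Promise<LanguageModelSession>;
}

// Summarise text with a browser-provided model, falling back gracefully
// where no such model exists (Firefox, Safari, non-browser runtimes).
async function summarise(
  text: string,
  LanguageModel?: LanguageModelStatic, // injected; absent outside Chrome
): Promise<string> {
  if (!LanguageModel || (await LanguageModel.availability()) !== "available") {
    return "summary unavailable: no browser language model";
  }
  const session = await LanguageModel.create({ temperature: 0.2 });
  return session.prompt(`Summarise in one sentence: ${text}`);
}
```

Even in this toy, the coupling Archibald describes is visible: the prompt string and the temperature are tuned against whatever model the browser happens to ship, so the same code can behave quite differently in a browser that supplies a different model.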

Read original →

Congress keeps kicking surveillance reform down the road

Gaby Del Valle

fisasurveillanceusa

Congress extended Section 702 of the Foreign Intelligence Surveillance Act for 45 days to allow more time for reform talks. The House passed a version with minor changes but no warrant requirements, causing frustration among some lawmakers. Privacy advocates say the bill does not do enough to protect Americans’ rights.

Concatena says

Whilst this legal back-and-forth might feel far away, the approach the US takes to its surveillance regime can have big consequences for UK and EU users of the big US tech providers.

Highlights

“Three weeks is more than enough time to negotiate a reform bill,” Thune said on the Senate floor on Thursday. “That is, if members are serious about negotiating.”

The House renewed Section 702 with minor reforms on Wednesday evening. The bill didn’t include the hotly debated warrant requirement, but it did feature a provision prohibiting the Federal Reserve from issuing Central Bank Digital Currencies, which Senate Majority Leader John Thune (R-SD) described as a nonstarter.

Congress has reauthorized Section 702 of the Foreign Intelligence Surveillance Act — but only for another 45 days. The extension is meant to give legislators more time to negotiate reforms to the controversial wiretapping bill. If the past few weeks are any indication of how future debates will go, however, we’re in for a bumpy ride.

Read original →

Utah’s New Law Targeting VPNs Goes Into Effect Next Week

Rindala Alajaji

surveillanceusavpn

For the last couple of years, we’ve watched the same predictable cycle play out across the globe: a state (or country) passes a clunky age-verification mandate, and, without fail, Virtual Private Network (VPN) usage surges as residents scramble to maintain their privacy and anonymity. We’ve seen this everywhere—from states like Florida, Missouri, Texas, and Utah, to countries like the United Kingdom, Australia, and Indonesia. 
Instead of realizing that mass surveillance and age gates aren’t exactly crowd favorites, Utah lawmakers have decided that VPNs themselves are the real issue.
Next week, on May 6, 2026, Utah will become, to EFF’s knowledge, the first state in the nation to target the use of VPNs to avoid legally mandated age-verification gates. While advocates in states like Wisconsin successfully forced the removal of similar provisions due to constitutional and technical concerns, Utah is proceeding with a mandate that threatens to significantly undermine digital privacy rights. 
What the Bill Does
Formally known as the “Online Age Verification Amendments,” Senate Bill 73 (SB 73) was signed by Governor Spencer Cox on March 19, 2026. While the majority of the bill consists of provisions related to a 2% tax on revenues from online adult content that is set to take effect in October, one of the more immediate concerns for EFF is the section regulating VPN access, which goes into effect this coming Wednesday.
The VPN Provisions
The new law explicitly addresses VPN use in Section 14, which amends Section 78B-3-1002 of existing Utah statutes in two primary ways:

- Regulation based on physical location: Under the law, an individual is considered to be accessing a website from Utah if they are physically located there, regardless of whether they use a VPN, proxy server, or other means to disguise their geographic location.
- Ban on sharing VPN instructions: Commercial entities that host "a substantial portion of material harmful to minors" are now prohibited from fa…

Concatena says

Our Take: Internet regulation is hard, and if you don’t take a multi-step view, then you can end up playing whack-a-mole.

Your Takeaway: If the tech you rely on could be outlawed, how can you plan?

Highlights

Next week, on May 6, 2026, Utah will become, to EFF’s knowledge, the first state in the nation to target the use of VPNs to avoid legally mandated age-verification gates. While advocates in states like Wisconsin successfully forced the removal of similar provisions due to constitutional and technical concerns, Utah is proceeding with a mandate that threatens to significantly undermine digital privacy rights.

For the last couple of years, we’ve watched the same predictable cycle play out across the globe: a state (or country) passes a clunky age-verification mandate, and, without fail, Virtual Private Network (VPN) usage surges as residents scramble to maintain their privacy and anonymity. We’ve seen this everywhere—from states like Florida, Missouri, Texas, and Utah, to countries like the United Kingdom, Australia, and Indonesia.

Instead of realizing that mass surveillance and age gates aren’t exactly crowd favorites, Utah lawmakers have decided that VPNs themselves are the real issue.

Read original →

White House presses tech companies for support on AI-driven cyberattacks

Aaron Mak, John Sakellariadis, Dana Nickel

artificial_intelligencegovernancelegal-landscapeusa

Tech and cyber companies were sent questions about artificial intelligence-led cybersecurity threats, including those posed by Anthropic’s advanced AI model, Mythos.

Concatena says

Does the approach taken to lawmaking by governments rely a little too much on input from those who perhaps ought to be restricted by the laws that are made?

Highlights

The White House has been taking steps to defuse a monthslong legal battle with Anthropic over the company’s efforts to set ethical limits on government use of AI — a fight that led President Donald Trump in February to ban all federal agencies from using the AI company’s software. Since then, growing awareness of Mythos’ cyber prowess — as well as concerns that unauthorized users might be commandeering technology — has agencies clamoring for access to the tool.

One list of questions sent by the White House to some tech and cyber firms, obtained by POLITICO, covers a range of technical and policy considerations, including which widely used coding projects should be prioritized and more basic questions about how the public and private sectors can work together on initiatives such as Project Glasswing. One question simply asks: “What is the most effective role for the government?”

The request for additional, detailed information from these companies reflects the intensifying focus in Washington on the evolving threat that hyper-advanced AI tools may pose to national security and digital infrastructure.

The questions, from the White House’s Office of the National Cyber Director, focus on how specific sectors in the tech and cybersecurity industries can work with the White House to boost their defenses with AI, these people said. Companies have been asked to respond to them by Friday.

The White House has asked a group of tech companies to answer a set of questions this week about how to ward off digital attacks that frontier AI tools could soon enable, according to four people with knowledge of discussions between the administration and the tech sector.

Read original →

Will AI lead to more accurate opinion polls?

BBC News – Business

accuracyartificial_intelligencepoliticssociety

It’s cheaper and faster to collect people’s opinions using AI, but will it make polls more accurate?

Concatena says

Whether about polling or anything else, 90% accuracy sounds like a big number, but in practice it represents a huge amount of inaccuracy. When companies cite these kinds of figures, it’s really important to try to get access to real-life examples of that margin of error.

Highlights

One checks he’s answering the question, one analyses whether he’s being too superficial and needs prompting to go deeper, while the third checks that the respondent is not a fraud… not a robot, for example.

Note: How long will it be before there are products to answer these kinds of calls for you?

The voice is young, female, brisk and business-like and belongs to an AI agent. A computer programme in other words. A string of code.

Note: It’s worth questioning why AI agents are so frequently expressed as being female…

The company claims its method is "10 times faster, 10 times cheaper and 90% as accurate as human polling".

It does not focus on quantitative polling, which is already largely automated through mass surveys. Instead, it emphasises depth. "We don’t ask people to tick boxes – they have a conversation with an AI," Fontaine explains. "That means we can explore not just what people think, but how they think – how they build their opinions, and even when those opinions change."

Read original →

OpenAI explains why ChatGPT developed a goblin fixation, and how it solved the issue

Zac Hall

artificial_intelligenceopenaipromptsreinforcement_learning

OpenAI noticed that ChatGPT kept talking too much about goblins and other mythical creatures. This happened because of a past feature that rewarded creative use of such metaphors. To fix it, they told the new GPT-5.5 model not to mention these creatures unless really needed.

Concatena says

LLMs really do tend to stick to a theme… This is a bit of fun, but it has some wider implications: it helps explain how LLMs can be influenced, why you need to be cautious about their output, and also how you can influence them to get the results you need.

Highlights

Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user’s query

The fix, in part, is a specific set of instructions to never talk about goblins unless it’s abundantly relevant:

The goblin problem links back to the “Nerdy personality” option briefly supported by ChatGPT.

To develop the personality, OpenAI needed to “reward” the model to incentivize its creative use of mythical metaphors. However, even after the Nerdy personality option was retired, the model remained unreasonably attached to gremlins, goblins, and other make-believe creatures.

Read original →