North Korean hackers are using GenAI to hold jobs in western firms
New research from Okta reveals AI written CVs and messages
This is an escalation from an existing fake interview campaign
New research from Okta has revealed that hackers from the Democratic People’s Republic of Korea (DPRK), are using generative AI in its malicious interview campaign - a series of tactics that involve gaining employment in remote technical roles in western firms, usually in industries with sensitive security data like defense, aerospace, or engineering.
The AI models are used to “create compelling personas at numerous stages of the job application and interview process” and then, once hired, GenAI is again used to assist in maintaining multiple roles, all earning revenue for the state.
AI was used by these hackers in a number of ways, including generating CVs and cover letters, conducting mock interviews via chat and webcam, translating, translating, and summarising messages, as well as managing communications for multiple jobs from different accounts and services.
To assist, the hackers have a sophisticated network of ‘facilitators’ that provide in-country support, technical infrastructure, and “legitimate business cover” - helping the North Koreans with domestic addresses, legitimate documents, and support during the recruitment process.
The campaign is growing ever more sophisticated, especially given that hackers are now using both sides of the job seeking process, targeting job seekers with fake interviews, in which they deliver malware and infostealers.
These elaborate schemes often start on legitimate platforms like LinkedIn or Upwork - with the attackers reaching out to victims to discuss potential opportunities. Anyone on the job hunt or in the hiring process should be extra vigilant about who they are speaking to, and should be careful not to download any unfamiliar software.
WhatsApp has commented on its controversial new Meta AI assistant
The messaging app says it's a "good thing" despite a mixed reception
WhatsApp has separately rolled out a new 'Advanced Chat Privacy' tool
WhatsApp has defended the wider rollout of its Meta AI assistant inside the popular messaging app, despite some significant pushback from users.
Earlier this month, Meta rolled out the AI assistant – represented by a blue ring in the bottom-right corner of your WhatsApp chats – across several new countries in the EU, the UK, and Australia.
Because WhatsApp is very popular in those regions – more so than the likes of Apple's iMessage – there was a vocal backlash to its arrival on platforms like Reddit, particularly as it isn't possible to turn the feature off. But WhatsApp has now commented on those concerns for the first time.
In a statement to the BBC, WhatsApp said: "We think giving people these options is a good thing and we're always listening to feedback from our users". It added that it considers the feature to be similar to other permanent features in the app, like 'channels'.
Although the Meta AI circle hovers permanently in your chats section, it doesn't actually have access to your chats. Meta's Help pages state that "your personal messages with friends and family are off limits", while the Meta AI chat window states that "it can only read messages people share with it".
Still, some privacy concerns remain, so this week WhatsApp introduced a new feature called "Advanced Chat Privacy" to help soothe any remaining concerns.
A privacy peace offering
(Image credit: WhatsApp)
While it isn't possible to turn off Meta AI in WhatsApp (it's also now integrated into the app's search bar), you will soon be able to use "Advanced Chat Privacy" to prevent others from using your chats in other AI apps.
The new setting, which is "rolling out to everyone on the latest version of WhatsApp", is designed to stop people from taking anything you share in WhatsApp outside of chats and groups. When it's turned on, your friends and contacts are blocked from "exporting chats, auto-downloading media to their phone, and using messages for AI features".
We haven't yet seen the feature in action, but you'll be able to turn it on by tapping on a chat name, then tapping the new "Advanced Chat Privacy" option. WhatsApp says this is also just the first version of the feature, with more protections en route to help you avoid a personal Signalgate fiasco.
That's likely to be a more popular move than baking Meta AI in WhatsApp, although a recent poll on the TechRadar WhatsApp channel shows the latter hasn't been universally condemned.
While the biggest chunk of our poll respondents (42%) said they would "never" use the Meta AI assistant in WhatsApp, a significant number (41%) said they would "maybe, sometimes" tap the blue ring, while 17% said they planned to use Meta's ChatGPT equivalent "regularly". Perhaps, like the prison walls in The Shawshank Redemption, we'll one day grow to depend on it.
Our CNET shopping experts search around the clock for the best deals, coupon codes and more on the internet, to help you maximize your savings with minimal effort.
Roku has launched two new weather-resistant home security cameras
The larger of the two has a battery life of up to two years on a single charge
Roku hasn't revealed prices, but the cameras will go on sale later this year
Streaming specialist Roku has launched a pair of new wireless security cameras that can send video footage straight to your phone or TV, letting you watch your yard without leaving the couch.
The Roku Battery Camera can run for up to six months on a single charge, while the Battery Camera Plus runs up to two years. Both cameras are weather-resistant, and can be set up indoors or out in a few seconds.
You can use the Roku Smart Home app or Roku Web View to customize your camera's settings, set up schedules, and receive notifications. The cameras can also be used as motion-detectors to activate some of the best smart lights or other connected devices.
(Image credit: Future)
Blink and you'll miss it
Real-world battery life will depend on which settings you choose and the weather (lithium-ion batteries tend to drain faster in cold conditions), but the Battery Camera Plus should be a serious rival to the Blink Outdoor 4, which also runs for up to two years before it needs recharging.
Both the Blink Outdoor 4 and Roku Battery Camera Plus boast 1080p resolution with motion detection and notifications, but the Roku camera also offers color night vision rather than black and white, which could give it the edge over the Blink model if the price is right.
You could also extend the Roku cameras' battery life even further by connecting an optional solar panel – something that's not possible with the Blink camera.
Roku has yet to announce official pricing for the two cameras, but it says they will be available "in the coming months". We're hoping to test both ourselves so we can see whether they deserve a place in our roundup of the best home security cameras to secure your smart home.
When ChatGPT uses the GPT-4.5 model, it can pass the Turing Test by fooling most people into thinking it's human
Nearly three-quarters of people in a study believed the AI was human during a five-minute conversation
ChatGPT isn't conscious or self-aware, though it raises questions around how to define intelligence
Artificial intelligence sounds pretty human to a lot of people, but usually, you can tell pretty quickly when you're engaging with an AI model. However, that may change as OpenAI's new GPT-4.5 model passed the Turing Test by fooling people into thinking it was a human over the course of a five-minute conversation. Not just a few people, but 73% of those participating in a University of California, San Diego study.
In fact, GPT-4.5 outperformed some of the actual human participants, who were accused of being AI in the blind test. Still, the fact that the AI did such a good impression of a human being that it seemed more human than actual humans says a lot about the brilliance of the machine or just how awkward humans can be.
Participants sat down for two back-to-back conversations with a human and a chatbot, not knowing which was which, and had to identify the AI afterward. To help GPT-4.5 succeed, the model had been given a detailed personality to mimic in a series of prompts. It was told to act like a young, slightly awkward, but internet-savvy introvert with a streak of dry humor.
With that little nudge toward humanity, GPT-4.5 became surprisingly convincing. Of course, as soon as the prompts were stripped away and the AI went back to a blank slate personality and history, the illusion collapsed. Suddenly, GPT-4.5 could only fool 36% of those studied. That sudden nosedive tells us something critical: this isn't a mind waking up. It’s a language model playing a part. And when it forgets its character sheet, it’s just another autocomplete.
Cleverness is not consciousness
The result is historic, no doubt. Alan Turing's proposal that a machine capable of conversing well enough to be mistaken for a human might therefore have human intelligence has been debated since he introduced it in 1950. Philosophers and engineers have grappled with the Turing Test and its implications, but suddenly, theory is a lot more real.
Turing didn't equate passing his test with proof of consciousness or self-awareness. That's not what the Turing Test really measures. Nailing the vibes of human conversation is huge, and the way GPT-4.5 evoked actual human interaction is impressive, right down to how it offered mildly embarrassing anecdotes. But if you think intelligence should include actual self-reflection and emotional connections, then you're probably not worried about the AI infiltration of humanity just yet.
GPT-4.5 doesn’t feel nervous before it speaks. It doesn’t care if it fooled you. The model is not proud of passing the test, since it doesn’t even know what a test is. It only "knows" things the way a dictionary knows the definition of words. The model is simply a black box of probabilities wrapped in a cozy linguistic sweater that makes you feel at ease.
The researchers made the same point about GPT-4.5 not being conscious. It’s performing, not perceiving. But performances, as we all know, can be powerful. We cry at movies. We fall in love with fictional characters. If a chatbot delivers a convincing enough act, our brains are more than happy to fill in the rest. No wonder 25% of Gen Z now believe AI is already self-aware.
There's a place for debate around this, of course. If a machine talks like a person, does it matter if it isn’t one? And regardless of the deeper philosophical implications, an AI that can fool that many people could be a menace in unethical hands. What happens when the smooth-talking customer support rep isn’t a harried intern in Tulsa, but an AI trained to sound disarmingly helpful, specifically to people like you, so that you'll pay for a subscription upgrade?
Maybe the best way to think of it for now is like a dog in a suit walking on its hind legs. Sure, it might look like a little entrepreneur on the way to the office, but it's only human training and perception that gives that impression. It's not a natural look or behavior, and doesn't mean banks will be handing out business loans to canines any time soon. The trick is impressive, but it's still just a trick.
The FBI is warning about an ongoing scheme targeting victims of online fraud
The victims are encouraged to reach out to a person on Telegram, posing as the chief of IC3
The person would try to gain access to the victims' financial accounts
Cybercriminals are preying on victims of online fraud, using their state of emotional distress to cause even more harm, the FBI has said, revealing it received more than a hundred reports of such attacks in the last two years.
In the campaign, cybercriminals would create fake social media profiles and join groups with other victims of online fraud. They would then claim to have recovered their money with the help of the FBI's Internet Complaint Center (IC3). This makes the ruse credible, since IC3 is an actual division of the FBI and serves as a central hub for reporting cybercrime.
Those who believe the claim are then advised to contact a person named Jaime Quin on Telegram. This person, claiming to be the Chief Director of IC3, is actually just part of the scheme. Quin will tell people who reach out that he recovered their funds and would then ask for access to their financial information, to steal even more money.
Keeper is a cybersecurity platform primarily known for its password manager and digital vault, designed to help individuals, families, and businesses securely store and manage passwords, sensitive files, and other private data.
It uses zero-knowledge encryption and offers features like two-factor authentication, dark web monitoring, secure file storage, and breach alerts to protect against cyber threats.
This is just one example of how the scam works. The FBI says that initial contact from the scammers can vary.
“Some individuals received an email or a phone call, while others were approached via social media or forums," it said. "Almost all complainants indicated the scammers claimed to have recovered the victim's lost funds or offered to assist in recovering funds. However, the claim is a ruse to revictimize those who have already lost money to scams."
To minimize the risk of falling victim to these scams, you should only reach out to law enforcement through official channels. Furthermore, you should keep in mind that law enforcement (especially those in executive positions) will never reach out to you this way, especially not to initiate contact.
Finally, the police will never ask for your password, financial information, or access to private services.
Looking Glass 27 offers 16 inches of depth in a one-inch thick frame
It can project up to 100 views across a 53-degree cone, perfect for shared use
Built for developers: create in Unity and deploy across platforms using an iPad
Looking Glass has announced a 27-inch 5K light field display which shows 3D content without any need for headsets or glasses.
Looking Glass 27 is designed for shared use, projecting 45 to 100 perspectives across a 53-degree view cone. At just one inch thick and capable of displaying 16 inches of virtual depth, it offers shared 3D experiences that were previously only possible with specialized gear.
Designed for plug-and-play deployment in offices or exhibitions, the display supports flexible VESA mounting and can even run entirely off an iPad. This alone reduces system-level costs by roughly 35%, while shrinking the overall hardware footprint.
A breakthrough moment for 3D?
Developers can build content in Unity on a PC and deploy it to iPads across multiple platforms via TestFlight or the App Store, streamlining workflows. It has broad support for web-based 3D pipelines and simplified cross-device compatibility.
"This is a breakthrough moment for 3D. With the new 27-inch display, we’ve combined major hardware and software advances to cut system costs and dramatically reduce compute requirements," said Shawn Frayne, CEO of Looking Glass. "It’s never been easier for developers and enterprises to build, test, and then deploy applications for their audiences in 3D."
With a pre-order price of $8,000 (currently 20% off), significantly lower than many would expect, Looking Glass 27 sets a new standard for professional-grade 3D displays. The pre-order window lasts until April 30th. You can see it in action in the video below.
Lenovo’s ThinkPad P14s Gen 6 supports up to 96GB DDR5 RAM, but only with Krackan Point CPU models
Rapid Charge delivers 80% power in 60 minutes using a 65W USB-C adapter
However its battery may struggle under heavy performance loads
Lenovo has announced its most powerful AMD laptop yet: the ThinkPad P14s Gen 6, which is set to launch with the 12-core AMD Ryzen AI 9 HX Pro 370, making it the company's first AMD-powered model to break past the eight-core ceiling.
Aimed at creative professionals and mobile users who need both AI processing and core-heavy performance, the ThinkPad P14s Gen 6 supports up to 96GB of DDR5-5600 RAM - but only in configurations using the Krackan Point CPUs, namely the Ryzen AI 5 Pro 340 and Ryzen AI 7 Pro 350.
That means the 12-core Strix Point model may be capped at 64GB of soldered memory. While it's a limitation, it still offers enough for demanding workloads like 3D rendering or Photoshop, making it a strong candidate for users searching for the best laptop for photo editing.
Poor choice of battery
While the processing capacity could place it among the best workstation contenders in terms of raw power, there’s a drawback: the model’s battery may struggle to match the chip’s power demands.
Weighing 1.39 kg (3.06 lbs) and measuring 10.9–16.3 mm thick, the device uses either a 57Whr or 52.5Whr battery, depending on the CPU.
Although both batteries are larger than the weedy 39.3Whr battery on the previous ThinkPad P14s Gen 5, they may still struggle under the load of the new, more powerful processors. However, the laptop supports Rapid Charge with a 65W adapter, capable of reaching 80% battery in 60 minutes.
It includes TÜV certifications for Eyesafe and Low Blue Light, a touchscreen, integrated PrivacyGuard, and will be available in different IPS variants offering up to 500-nit brightness.
Graphics are handled by an integrated AMD Radeon 890M, built on RDNA 3.5 architecture, delivering up to 32 TOPS and supported by AMD’s PRO Graphics Driver.
For connectivity, the device offers WiFi 7, Bluetooth 5.3, optional 5G or CAT16 WWAN with eSIM, and optional NFC.
Physical ports include two USB-C (Thunderbolt 4) ports, two USB-A (5 Gbps) ports, HDMI 2.1, RJ45 Ethernet, a headphone/mic combo jack, and optional Nano SIM and smart card readers.
Price and availability remain unclear, as the listing simply states “available soon.” Given that the T14 Gen 6 AMD models are unlikely to ship before May or June 2025, the P14s variant likely won’t hit shelves before summer either.
ChatGPT’s memory used to be simple. You told it what to remember, and it listened.
Since 2024, ChatGPT has had a memory feature that lets users store helpful context. From your tone of voice and writing style to your goals, interests, and ongoing projects. You could go into settings to view, update, or delete these memories. Occasionally, it would note something important on its own. But largely, it remembered what you asked it to. Now, that’s changing.
OpenAI, the company behind ChatGPT, is rolling out a major upgrade to its memory. Beyond the handful of facts you manually saved, ChatGPT will now draw from all of your past conversations to inform future responses by itself.
According to OpenAI, memory now works in two ways: “saved memories,” added directly by the user, and insights from “chat history,” which are the ones that ChatGPT will gather automatically.
This feature, called long-term or persistent memory, is rolling out to ChatGPT Plus and Pro users. However, at the time of writing, it’s not available in the UK, EU, Iceland, Liechtenstein, Norway, or Switzerland due to regional regulations.
The idea here is simple: the more ChatGPT remembers, the more helpful it becomes. It’s a big leap for personalization. But it’s also a good moment to pause and ask what we might be giving up in return.
A memory that gets personal
(Image credit: Shutterstock)
It’s easy to see the appeal here. A more personalized experience from ChatGPT means you explain yourself less and get more relevant answers. It’s helpful, efficient, and familiar.
“Personalization has always been about memory,” says Rohan Sarin, Product Manager at Speechmatics, an AI speech tech company. “Knowing someone for longer means you don’t need to explain everything to them anymore.”
He gives an example: ask ChatGPT to recommend a pizza place, and it might gently steer you toward something more aligned with your fitness goals – a subtle nudge based on what it knows about you. It's not just following instructions, it’s reading between the lines.
“That’s how we get close to someone,” Sarin says. “It’s also how we trust them.” That emotional resonance is what makes these tools feel so useful – maybe even comforting. But it also raises the risk of emotional dependence. Which, arguably, is the whole point.
“From a product perspective, storage has always been about stickiness,” Sarin tells me. “It keeps users coming back. With each interaction, the switching cost increases.”
OpenAI doesn’t hide this. The company's CEO,. Sam Altman, tweeted that memory enables “AI systems that get to know you over your life, and become extremely useful and personalized.”
That usefulness is clear. But so is the risk of depending on them not just to help us, but to know us.
Does it remember like we do?
(Image credit: Getty Images)
A challenge with long-term memory in AI is its inability to understand context in the same way humans do.
We instinctively compartmentalize, separating what’s private from what’s professional, what’s important from what’s fleeting. ChatGPT may struggle with that sort of context switching.
Sarin points out that because people use ChatGPT for so many different things, those lines may blur. “IRL, we rely on non-verbal cues to prioritize. AI doesn’t have those. So memory without context could bring up uncomfortable triggers.”
He gives the example of ChatGPT referencing magic and fantasy in every story or creative suggestion just because you mentioned liking Harry Potter once. Will it draw from past memories even if they're no longer relevant? “Our ability to forget is part of how we grow,” he says. “If AI only reflects who we were, it might limit who we become.”
Without a way to rank, the model may surface things that feel random, outdated, or even inappropriate for the moment.
Bringing AI memory into the workplace
Persistent memory could be hugely useful for work. Julian Wiffen, Chief of AI and Data Science at Matillion, a data integration platform with AI built in, sees strong use cases: “It could improve continuity for long-term projects, reduce repeated prompts, and offer a more tailored assistant experience," he says.
But he’s also wary. “In practice, there are serious nuances that users, and especially companies, need to consider.” His biggest concerns here are privacy, control, and data security.
“I often experiment or think out loud in prompts. I wouldn’t want that retained – or worse, surfaced again in another context,” Wiffen says. He also flags risks in technical environments, where fragments of code or sensitive data might carry over between projects, raising IP or compliance concerns. “These issues are magnified in regulated industries or collaborative settings.”
Whose memory is it anyway?
OpenAI stresses that users can still manage memory – delete individual memories that aren't relevant anymore, turn it off entirely, or use the new “Temporary Chat” button. This now appears at the top of the chat screen for conversations that are not informed by past memories and won't be used to build new ones either.
However, Wiffen says that might not be enough. “What worries me is the lack of fine-grained control and transparency,” he says. “It's often unclear what the model remembers, how long it retains information, and whether it can be truly forgotten.”
He’s also concerned about compliance with data protection laws, like GDPR: “Even well-meaning memory features could accidentally retain sensitive personal data or internal information from projects. And from a security standpoint, persistent memory expands the attack surface.” This is likely why the new update hasn't rolled out globally yet.
What’s the answer? “We need clearer guardrails, more transparent memory indicators, and the ability to fully control what’s remembered and what’s not," Wiffen explains.
Not all AI remembers the same
(Image credit: OpenAI & Google & Microsoft)
Other AI tools are taking different approaches to memory. For example, AI assistant Claude doesn’t store persistent memory outside your current conversation. That means fewer personalization features, but more control and privacy.
Perplexity, an AI search engine, doesn’t focus on memory at all – it retrieves real-time web information instead. Whereas Replika, AI designed for emotional companionship, goes the other way, storing long-term emotional context to deepen relationships with users.
So, each system handles memory differently based on its goals. And the more they know about us, the better they fulfill those goals – whether that’s helping us write, connect, search, or feel understood.
The question isn’t whether memory is useful; I think it clearly is. The question is whether we want AI to become this good at fulfilling these roles.
It’s easy to say yes because these tools are designed to be helpful, efficient, even indispensable. But that usefulness isn’t neutral, it’s intentional. These systems are built by companies that benefit when we rely on them more.
You wouldn’t willingly give up a second brain that remembers everything about you, possibly better than you do. And that’s the point. That’s what the companies behind your favorite AI tools are counting on.