Tuesday, March 25, 2025

WWDC 2025: Apple Confirms June 9 Date for Next Major Event

The tech giant is expected to reveal iOS 19 and other major software updates at its annual developer conference.

from CNET https://ift.tt/W2vPYe3

Latest Tech News

Apple might want to put a camera or two on your next Apple Watch, ostensibly to assist its AI in interpreting your environment and, perhaps, acting on your behalf: "There's a hill up ahead! You might want to accelerate your running pace, but watch out for that puddle; it might be a pothole!"

That sounds useful, but do we need a smartwatch to do a job best left to our eyes? You'll see the hill, note the puddle, and subconsciously plan a route around it. Why would you need a camera on your wrist?

Forgive me if I'm a bit against the whole concept of a wearable camera. Unless you're a police officer who has to record all their interactions with the public (see The Rookie for details), a chest-mounted camera is a bad idea, as most Humane AI Pin wearers (and Humane itself) quickly discovered.

Cameras on glasses aren't as bad, perhaps because they sit so close to your eyes, which are already looking at and making mental notes about everything around you. There are privacy concerns, though, and when I've worn Ray-Ban Meta Smart Glasses, I've had a few people ask if I'm recording them. There's a little light on the frame that tells them as much, but I get the concern. No one wants to be recorded or have their picture taken without their explicit permission.

Never a good idea

We've seen cameras on smartwatches before. Back in 2013, Samsung unveiled the beefy Galaxy Gear, which I wore and reviewed. Samsung's idea for an on-wrist camera was, shall I say, unusual.

Instead of integrating the camera into the smartwatch's body, Samsung stuffed it into the wristband. This was one bad idea on top of another. Placing the camera on the wristband forced you to position your wrist just right to capture a photo, using the smartwatch display as a viewfinder. Moreover, damaging the wristband could mean ruining the 2MP camera, which took, by the way, just passable photos.

Apple's apparent idea for a smartwatch camera is less about capturing a decent photo and more about ambient awareness. Information that one or more cameras can glean about your environment could inform Apple Intelligence – assuming Apple Intelligence is, by then, what Apple's been promising all along.

Powerful AI works best with data: training data to build the models, and real-time data for those same models to analyze. Our best iPhones and best smartwatches are full of sensors that tell these devices where they are, where they're going, how fast they're moving, and whether you've taken a fall or been in a car crash while carrying or wearing them. The watch has no camera, though, and your phone does not use its camera to build a data picture unless you ask it to.

Currently, you can press the Camera Control button on an iPhone 16 to enable Visual Intelligence, which lets you take a picture and ask ChatGPT or Google Search to analyze it.
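
For a sense of what that round trip looks like under the hood, here is a minimal sketch that loads a photo and asks a multimodal model to describe it. It uses the OpenAI Python SDK as a stand-in; the model name and "photo.jpg" path are illustrative placeholders, not Apple's actual pipeline.

```python
# Minimal sketch of a Visual Intelligence-style round trip: load a
# photo and ask a multimodal model what it shows. Uses the OpenAI
# Python SDK; the model name and file path are illustrative.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("photo.jpg", "rb") as f:  # hypothetical photo path
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What am I looking at here?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```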

An eye on your wrist

A camera on your smartwatch, though, might always be on and trying, even as you pump your arms during a brisk run, to tell you about what's around and in front of you.

It might be looking at the people running toward you and could possibly identify them on the fly, assuming it can get a clear enough shot. The watch could then connect to your phone or AirPods and fill you in: "That's Bob Smith. According to his LinkedIn, he works in real estate." I'm not sure how those other people would feel about that, though.

I get that some of this sounds very cool and futuristic, but are we really meant to know that much about everything around us? Wouldn't it be better to explore what we want with our eyes and ignore the rest? Exactly how much information can a human take in?

It needs this but...

There are no guarantees that this will happen. It's just a rumor from Bloomberg News, but it makes sense.

It's high time for Apple to do the first truly significant Apple Watch redesign in a decade. Apple also needs some exciting new technology to remind people it can still innovate. Plus, more hardware sensors open the door to more powerful Apple Intelligence, and with all the recent missteps in that space, Apple is in dire need of an AI win.

I'm fine with all of that, as long as it does not involve putting cameras on my Apple Watch.

from Latest from TechRadar US in News,opinion https://ift.tt/j514KU0

Monday, March 24, 2025

Best Internet Providers in Staten Island, New York

CNET's connectivity experts have found the best ISPs in Staten Island -- top plans for speed, price and reliable coverage.

from CNET https://ift.tt/fg5vbj6

Latest Tech News


  • Nvidia’s DGX Station is powered by the GB300 Grace Blackwell Ultra
  • OEMs are making their own versions – Dell’s is the Pro Max with GB300
  • HP’s upcoming GB300 workstation will be the ZGX Fury AI Station G1n

Nvidia has unveiled two DGX personal AI supercomputers powered by its Grace Blackwell platform.

The first of these is DGX Spark (previously called Project Digits), a compact AI supercomputer that runs on Nvidia’s GB10 Grace Blackwell Superchip.

The second is DGX Station, a supercomputer-class workstation that resembles a traditional tower and is built with the Nvidia GB300 Grace Blackwell Ultra Desktop Superchip.

Dell and HP reveal their versions

The GB300 features the latest-generation Tensor Cores and FP4 precision, and the DGX Station includes 784GB of coherent memory space for large-scale training and inferencing workloads, connected to a Grace CPU via NVLink-C2C.

The DGX Station also features the ConnectX-8 SuperNIC, designed to supercharge hyperscale AI computing workloads.

Nvidia’s OEM partners - Asus, HP, and Dell - are producing DGX Spark rivals powered by the same GB10 Superchip. HP and Dell are also preparing competitors to the DGX Station using the GB300.

Dell has shared new details about its upcoming AI workstation, the Pro Max with GB300 (its DGX Spark version is called Pro Max with GB10).

The specs for its supercomputer-class workstation include 784GB of unified memory, up to 288GB of HBM3e GPU memory, and 496GB of LPDDR5X memory for the CPU.

The system delivers up to 20,000 TOPS of FP4 compute performance, making it well suited for training and inferencing LLMs with hundreds of billions of parameters.
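
Those numbers add up, literally: the 784GB coherent space is the 288GB of HBM3e and the 496GB of LPDDR5X presented as a single pool. One way to sanity-check the "hundreds of billions of parameters" claim is memory capacity; here's a back-of-the-envelope sketch using the article's figures, where the FP4-weights assumption (half a byte per parameter, weights only) and the model sizes are illustrative:

```python
# Back-of-the-envelope check on the Pro Max with GB300 figures.
# The unified pool is the GPU and CPU memory presented as one
# coherent space; FP4 weights take half a byte per parameter.
# Weights only - KV cache and activations need headroom on top.
hbm3e_gb = 288     # GPU memory
lpddr5x_gb = 496   # CPU memory
unified_gb = hbm3e_gb + lpddr5x_gb
assert unified_gb == 784  # matches the quoted coherent memory space

def fp4_weights_gb(params_billions: float) -> float:
    """GB needed for weights alone at FP4 (0.5 bytes per parameter)."""
    return params_billions * 0.5

for params in (200, 400, 600):  # illustrative model sizes
    need = fp4_weights_gb(params)
    verdict = "fits" if need <= unified_gb else "exceeds"
    print(f"{params}B params -> {need:.0f}GB FP4 weights ({verdict} {unified_gb}GB)")
```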

HP’s version of the DGX Station is called the ZGX Fury AI Station G1n. Z by HP is the company’s workstation brand, and the “n” at the end of the name signifies that it’s powered by an Nvidia processor - in this case, the GB300.

HP says the ZGX Fury AI Station G1n “provides everything needed for AI teams to build, optimize, and scale models while maintaining security and flexibility,” noting that it will integrate into HP’s broader AI Station ecosystem, alongside the previously announced ZGX Nano AI Station G1n (its DGX Spark alternative).

HP is also expanding its AI software tools and support offerings, providing resources designed to streamline workflow productivity and enhance local model development.

Pricing for the DGX Station and the Dell and HP workstations isn’t known yet, but they obviously aren’t going to be cheap. Pricing for the tiny DGX Spark starts at $3,999, and the larger machines will cost significantly more.

from Latest from TechRadar US in News,opinion https://ift.tt/suEya50

Sunday, March 23, 2025

Today's NYT Connections Hints, Answers and Help for March 24, #652

Hints and answers for Connections for March 24, No. 652.

from CNET https://ift.tt/6zfHPhc

Frankenstein Fraud: How to Protect Yourself Against Synthetic Identity Fraud

Criminals can stitch together pieces of your personal data to create an entirely new identity. Here's how to stop them.

from CNET https://ift.tt/R4Moub1

Latest Tech News


  • Asus' new Ascent GX10 brings AI supercomputing power directly to developers
  • Promises 1000 TOPS of AI processing and can handle models up to 200 billion parameters
  • It's cheaper than Nvidia's DGX Spark, with less storage but similar performance

AI development is getting ever more demanding, and Asus wants to bring high-performance computing straight to the desks of developers, researchers, and data scientists with the Ascent GX10, a compact AI supercomputer powered by Nvidia’s Grace Blackwell GB10 Superchip.

Asus’s rival to Nvidia’s DGX Spark (previously Project Digits) is designed to handle local AI workloads, making it easier to prototype, fine-tune, and run impressively large models without relying entirely on cloud or data center resources.

The Ascent GX10 comes with 128GB of unified memory, and its Blackwell GPU, with fifth-generation Tensor Cores and FP4 precision support, can deliver up to 1000 TOPS of AI processing power. It also includes a 20-core Grace Arm CPU, which speeds up data processing and orchestration for AI inferencing and model tuning. Asus says this will let developers work with AI models of up to 200 billion parameters without running into major bottlenecks.

Powerful yet compact

“AI is transforming every industry, and the Asus Ascent GX10 is designed to bring this transformative power to every developer’s fingertips,” said KuoWei Chao, General Manager of Asus IoT and NUC Business Group.

“By integrating the Nvidia Grace Blackwell Superchip, we are providing a powerful yet compact tool that enables developers, data scientists, and AI researchers to innovate and push the boundaries of AI right from their desks.”

Asus has built the GX10 with NVLink-C2C, which provides more than five times the bandwidth of PCIe 5.0, allowing the CPU and GPU to share memory efficiently, improving performance across AI workloads.

The system also comes with an integrated ConnectX network interface, so two GX10 units can be linked together to handle even larger models, such as Llama 3.1 with 405 billion parameters.
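
Those parameter counts track with simple arithmetic if you assume FP4 weights at half a byte per parameter; here's a rough, weights-only sketch (ignoring KV cache and activation overhead) using the capacities above:

```python
# Rough capacity check for the parameter counts quoted above,
# assuming FP4 weights (0.5 bytes per parameter). Weights only;
# KV cache and activation memory are ignored.
def fp4_weights_gb(params_billions: float) -> float:
    return params_billions * 0.5

single_gx10_gb = 128                 # one Ascent GX10
linked_pair_gb = 2 * single_gx10_gb  # two units linked over ConnectX

print(f"200B params: {fp4_weights_gb(200):.1f}GB vs {single_gx10_gb}GB on one unit")
print(f"405B params: {fp4_weights_gb(405):.1f}GB vs {linked_pair_gb}GB on a linked pair")
# -> 100.0GB fits in 128GB; 202.5GB fits in 256GB
```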

Asus says the Ascent GX10 will be available for pre-order in Q2 2025. Pricing details have not yet been confirmed by Asus, but Nvidia says it will cost $2,999 and come with 1TB of storage.

In comparison, Nvidia’s own DGX Spark is a thousand dollars more ($3,999) and comes with 4TB of storage.

from Latest from TechRadar US in News,opinion https://ift.tt/qQOodPw

Saturday, March 22, 2025

Best Internet Providers in Pensacola, Florida

If you are looking for fast and reliable internet in Pensacola, consider these options.

from CNET https://ift.tt/lQfhuG6

What's the Future of FAFSA and Financial Aid if the Department of Education Closes?

President Trump wants to shift federal student aid to the Small Business Administration, but experts say it's not that simple.

from CNET https://ift.tt/PiWVt3l

Best Facial Sunscreens of 2025, Tested and Chosen From 50 Top Brands

Your skin is an important organ, and you need to protect it from the sun's harsh UV rays if you want to avoid wrinkles. Here are the best sunscreens, picked by our experts.

from CNET https://ift.tt/BK8ZFNY

Latest Tech News


  • HP ZBook Fury G1i is a powerful 18-inch mobile workstation
  • It's powered by up to an Intel Core Ultra 9 285HX and next-gen Nvidia RTX graphics
  • There's also a 16-inch model available with the same high-end specs and features

It’s a personal preference, but I’ve always liked laptops with bigger screens. That means 16 inches for me, but HP thinks 18-inch laptops are what professionals should be aiming for if they’re looking to replace their desktop PCs and get a solid productivity boost.

Billed as the world’s most powerful 18-inch mobile workstation, the HP ZBook Fury G1i 18” still manages to fit into a 17-inch backpack.

Those extra two inches give you roughly 30% more screen area to work with, which can come in handy when handling complex datasets, editing high-resolution media, or working across multiple windows.

Three-fan cooling

HP is pitching the laptop at developers and data scientists who need to train and run LLMs directly on the machine.

The Fury G1i 18” runs on Intel’s latest Core Ultra processors, up to the top-end Core Ultra 9 285HX, with peak speeds of 5.5GHz. These chips also include an NPU with up to 13 TOPS of AI performance. HP says the machine will support next-gen Nvidia RTX GPUs.

There’s support for up to 192GB of DDR5 memory and up to 16TB of PCIe Gen5 NVMe storage. Connectivity includes Thunderbolt 5, HDMI 2.1, USB-A ports, an SD card slot, and Ethernet.

The 18-inch display has a WQXGA (2560x1600) resolution, coupled with a fast 165Hz refresh rate, trading pixel density for smoother motion. Thermal performance is handled by a redesigned three-fan cooling system, along with HP’s Vaporforce tech, allowing up to 200W TDP without throttling under sustained load.
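
To put that trade-off in numbers, here's a quick pixel-density check; the 16-inch comparison at the same resolution is illustrative, not an HP spec:

```python
# Pixels per inch for the 18-inch WQXGA panel above, next to an
# illustrative 16-inch panel at the same resolution.
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixel density: diagonal pixel count over diagonal inches."""
    return math.hypot(width_px, height_px) / diagonal_in

print(f"18-inch WQXGA: {ppi(2560, 1600, 18):.0f} PPI")  # ~168
print(f"16-inch WQXGA: {ppi(2560, 1600, 16):.0f} PPI")  # ~189
```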

Other features include a spill-resistant RGB-backlit keyboard, four Poly Studio speakers, dual-array microphones, and an optional IR camera for facial login.

The Fury G1i is also available in a 16-inch model for anyone who feels 18 inches is too big to lug around. Pricing and availability details for both models are expected shortly.

from Latest from TechRadar US in News,opinion https://ift.tt/VN4dClF

Friday, March 21, 2025

Best Internet Providers in St. Paul, Minnesota

If you're looking for fiber in St. Paul, you've got it. There are also lots of other great budget and speed options.

from CNET https://ift.tt/B4wpPad

Latest Tech News


  • AMD targets Nvidia’s Blackwell with upcoming Instinct MI355X accelerator
  • Oracle plans massive 30,000-unit MI355X cluster for high-performance AI workloads
  • That’s in addition to Stargate, Oracle’s 64,000-GPU Nvidia GB200 cluster

While AI darling Nvidia continues to dominate the AI accelerator market, with a share of over 90%, its closest rival, AMD, is hoping to challenge the Blackwell lineup with its new Instinct MI355X series of GPUs.

The MI355X, now expected to arrive by mid-2025, is manufactured on TSMC’s 3nm node and built on AMD's new CDNA 4 architecture. It will feature 288GB of HBM3E memory, bandwidth of up to 8TB/sec, and support for FP6 and FP4 low-precision computing, positioning it as a strong rival to Nvidia’s Blackwell B100 and B200.
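
That bandwidth figure matters for inference: at small batch sizes, a decoder streams its weights from memory for every generated token, so peak bandwidth caps tokens per second. Here's a hedged, weights-only estimate from the quoted specs, with illustrative model sizes:

```python
# Upper bound on single-stream decode speed: one full read of the
# weights per generated token at peak bandwidth. Capacity and
# bandwidth figures are from the article; model sizes below are
# illustrative, and the estimate covers weights only.
BANDWIDTH_GBS = 8_000   # up to 8TB/sec of HBM3E bandwidth
HBM_GB = 288            # HBM3E capacity

def tokens_per_sec_ceiling(weights_gb: float) -> float:
    """Best case: peak bandwidth divided by bytes read per token."""
    return BANDWIDTH_GBS / weights_gb

for weights_gb in (70, 144, HBM_GB):  # up to a model filling all of HBM
    print(f"{weights_gb}GB of weights -> "
          f"{tokens_per_sec_ceiling(weights_gb):.0f} tokens/sec ceiling")
```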

In 2024, we reported on a number of big wins for AMD, which included shipping thousands of its MI300X AI accelerators to Vultr, a leading privately held cloud computing platform, and to Oracle. Now, the latter has announced plans to build a cluster of 30,000 MI355X AI accelerators.

Stargate

This latest news was revealed during Oracle’s recent Q3 2025 earnings call, where Larry Ellison, Chairman and Chief Technology Officer, told investors, “In Q3, we signed a multi-billion dollar contract with AMD to build a cluster of 30,000 of their latest MI355X GPUs.”

Although he didn’t go into further detail beyond that, Ellison did talk about Project Stargate, saying, “We are in the process of building a gigantic 64,000 GPU liquid-cooled Nvidia GB200 cluster for AI training.”

He later added, “Stargate looks to be the biggest AI training project out there, and we expect that will allow us to grow our RPO [remaining performance obligations] even higher in the coming quarters. And we do expect our first large Stargate contract fairly soon.”

When questioned further about Stargate by a Deutsche Bank analyst, Ellison gave a reply that could just as easily apply to the cluster of MI355X AI accelerators Oracle is planning to build.

"The capability we have is to build these huge AI clusters with technology that actually runs faster and more economically than our competitors. So it really is a technology advantage we have over them. If you run faster and you pay by the hour, you cost less. So that technology advantage translates to an economic advantage which allows us to win a lot of these huge deals,” he said.

Ellison also touched on Oracle’s data center strategy, saying, “So, we can start our data centers smaller than our competitors and then we grow based on demand. Building these data centers is expensive, and they’re really expensive if they’re not full or at least half full. So we tend to start small and then add capacity as demand arises.”

from Latest from TechRadar US in News,opinion https://ift.tt/uAE2KWd

Thursday, March 20, 2025

Watch UEFA Nations League Soccer: Livestream Netherlands vs. Spain From Anywhere

La Roja look to maintain their unbeaten record in the tournament as they travel to Rotterdam for this quarterfinal clash.

from CNET https://ift.tt/q3mRnuT

Latest Tech News


  • Kioxia launches 122.88TB SSD with PCIe Gen5 and dual-port support
  • The LC9 Series NVMe SSD is designed for AI workloads and hyperscale storage
  • The new drive comes in a compact 2.5-inch form factor

After nearly seven years at the top, Nimbus Data’s massive ExaDrive 100TB 2.5-inch SSD has been dethroned by Kioxia, which has unveiled a new 122.88TB model that not only offers a higher storage capacity but also supports PCIe Gen5, a first for this category.

Several companies have previously announced 120TB-class SSDs, including Solidigm, but Kioxia's LC9 Series 122.88TB NVMe SSD stands out by pairing its ultra-high capacity with a compact 2.5-inch form factor and a next-gen interface with dual-port capability for fault tolerance or connectivity to multiple compute systems.

"AI workloads are stretching the capabilities of data storage, asking for larger capacities and swifter access to the extensive datasets found in today's data lakes, and Kioxia is ready to offer the necessary advanced technologies including 2 Tb QLC BiCS FLASH generation 8 of 3D flash memory, CBA and the complimenting AiSAQ," said Axel Störmann, VP & Chief Technology Officer for SSD and Embedded Memory products at Kioxia Europe GmbH.

Supporting AI system developers' needs

The 122.88TB SSD is aimed at hyperscale storage systems, AI workloads, and other data-intensive applications that rely on capacity and speed. There’s no word on availability or pricing yet, but the company does plan to showcase the new drive at "various upcoming conferences".

"This new LC9 Series NVMe SSD is an instrumental Kioxia product expansion that will support AI system developers' needs for high-capacity storage, high performance, and energy efficiency for applications such as AI model training, inference, and Retrieval-Augmented Generation on a vaster scale," Störmann said.

Reporting on the new SSD, ServeTheHome notes, “This is a hot segment of the market, and it is great to see Kioxia joining. As AI clusters get larger, the shared storage tier is usually measured in Exabytes. Teams have found that replacing hard drives with SSDs often reduces power, footprint, and TCO compared to running hybrid arrays. Moving from lower-capacity drives to the 122.88TB capacity in a 2.5-inch drive form factor really highlights the advantage of flash in these systems.”

from Latest from TechRadar US in News,opinion https://ift.tt/bLdt9Ye

Heat Domes and Surging Grid Demand Threaten US Power Grids with Blackouts

A new report shows a sharp increase in peak electricity demand, leading to blackout concerns in multiple states. Here's how experts say ...