Wednesday, October 30, 2024

Latest Tech News

Bandwidth has become a significant bottleneck in AI and high-performance computing (HPC): GPUs spend much of their time stalled waiting for data, with nearly half of their computational power going to waste.

Nvidia is not expected to release optical interconnects for its NVLink protocol until the "Rubin Ultra" GPU compute engine launches in 2027.

This delay has led hyperscalers and cloud builders to explore ways to leapfrog Nvidia’s technology by adopting optical interconnects earlier.

Introducing ChromX

Xscape Photonics, an optical interconnect company spun out of research at Columbia University, is using photonics to deliver scalable, high-bandwidth, energy-efficient, and cost-effective interconnects for the next generation of AI, ML, and simulation hardware.

This could help the AI industry save billions of dollars in wasted GPU capacity while also offering a path to greener, more sustainable AI infrastructures.

The Next Platform recently took a closer look at Xscape Photonics and spoke with the team behind it, including CEO Vivek Raghunathan, a former MIT researcher and Intel engineer.

Raghunathan highlighted the inefficiencies of current GPU systems, explaining that as scaling continues, the problem shifts "from GPU device-level performance to a system-level networking problem."

This is where Xscape’s technology comes into play. By converting electrical signals into optical ones directly within the GPU, Xscape can dramatically increase bandwidth while simultaneously reducing power consumption.

The startup’s solution, called the "ChromX" platform, uses a laser that can transmit multiple wavelengths of light simultaneously through a single optical fiber - up to 128 different wavelengths (or "colors"). This enables a 32-fold increase in bandwidth compared to lasers that use only four wavelengths.

The ChromX platform also relies on simpler modulation schemes like NRZ (Non-Return-to-Zero), which reduce latency compared to higher-order schemes like PAM-4 used in other systems such as InfiniBand and Ethernet. The ChromX platform is programmable, allowing it to adjust the number of wavelengths to match the specific needs of an AI workload, whether for training or inference tasks.
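The claimed multiplier is straightforward to sanity-check: under wavelength-division multiplexing, aggregate fiber bandwidth scales linearly with the number of wavelengths carried. A minimal sketch in Python, using a hypothetical per-wavelength data rate (Xscape has not published per-channel figures):

```python
# Illustrative arithmetic only: aggregate bandwidth under wavelength-division
# multiplexing scales with the number of "colors" carried on one fiber.
# The 100 Gb/s per-channel rate below is a hypothetical placeholder.

def aggregate_bandwidth_gbps(per_channel_gbps: float, num_wavelengths: int) -> float:
    """Total fiber bandwidth = per-wavelength data rate x wavelength count."""
    return per_channel_gbps * num_wavelengths

baseline = aggregate_bandwidth_gbps(100, 4)    # a conventional four-color laser
chromx = aggregate_bandwidth_gbps(100, 128)    # ChromX's claimed 128 colors

print(chromx / baseline)  # 32.0, the 32-fold increase cited above
```

Whatever the actual per-channel rate, the ratio between a 128-color and a 4-color system is fixed at 32, which is where the headline figure comes from.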

Raghunathan told The Next Platform’s Timothy Prickett Morgan, “The vision is to match in-package communication bandwidth to off-package communication escape bandwidth. And we think when we use our multicolor approach, we can match that so that giant datacenters - or multiple datacenters - behave as one big GPU.”

The potential impact of this technology is enormous. AI workloads consume vast amounts of energy, and with data center demand projected to triple by 2035, power grids may struggle to keep up. Xscape Photonics’ innovations could offer a vital solution, enabling AI systems to operate more efficiently and sustainably.

More from TechRadar Pro



from TechRadar - All the latest technology news https://ift.tt/wD9LrH2

Tuesday, October 29, 2024

TP-Link Deco X90 Mesh Router Review: Top Speeds at a Great Discount

I used to only recommend the Deco X90 as an upgrade pick, but it's come down significantly in price.

from CNET https://ift.tt/9vM0FwB

Latest Tech News

A leading expert has raised critical questions about the validity of claims surrounding "Zettascale" and "Exascale-class" AI supercomputers.

In an article that delves deep into the technical intricacies of these terms, Doug Eadline from HPCWire explains how terms like exascale, which traditionally denote computers achieving one quintillion floating-point operations per second (FLOPS), are often misused or misrepresented, especially in the context of AI workloads.

Eadline points out that many of the recent announcements touting "exascale" or even "zettascale" performance are based on speculative metrics, rather than tested results. He writes, "How do these 'snort your coffee' numbers arise from unbuilt systems?" - a question that highlights the gap between theoretical peak performance and actual measured results in the field of high-performance computing. The term exascale has historically been reserved for systems that achieve at least 10^18 FLOPS in sustained, double-precision (64-bit) calculations, a standard verified by benchmarks such as the High-Performance LINPACK (HPLinpack).

Car comparison

As Eadline explains, the distinction between FLOPS in AI and HPC is crucial. While AI workloads often rely on lower-precision floating-point formats such as FP16, FP8, or even FP4, traditional HPC systems demand higher precision for accurate results.

The use of these lower-precision numbers is what leads to inflated claims of exaFLOP or even zettaFLOP performance. According to Eadline, "calling it 'AI zetaFLOPS' is silly because no AI was run on this unfinished machine."
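The precision gap behind those inflated figures is easy to demonstrate. Python's standard struct module can round a value through IEEE half precision (FP16); a minimal sketch, noting that FP8 and FP4 are narrower still, so the effect shown here understates their error:

```python
# Why the precision format matters: round-trip a value through IEEE
# half precision (struct format 'e') and see how many digits survive.
import struct

def to_fp16(x: float) -> float:
    """Round a Python float (FP64) to the nearest FP16 value and back."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

print(to_fp16(0.1))        # 0.0999755859375: only ~3 decimal digits survive
print(0.1 - to_fp16(0.1))  # the rounding error that FP64 would not incur

# FP64 carries a 52-bit mantissa (~16 significant digits); FP16 has 10 bits
# (~3 digits). Counting operations on such narrow values as "FLOPS" inflates
# headline numbers without delivering HPC-grade accuracy.
```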

He further emphasizes the importance of using verified benchmarks like HPLinpack, which has been the standard for measuring HPC performance since 1993, and how using theoretical peak numbers can be misleading.

The two supercomputers that are currently part of the exascale club - Frontier at Oak Ridge National Laboratory and Aurora at Argonne National Laboratory - have been tested with real applications, unlike many of the AI systems making exascale claims.

To explain the difference between various floating-point formats, Eadline offers a car analogy: "The average double precision FP64 car weighs about 4,000 pounds (1814 Kilos). It is great at navigating terrain, holds four people comfortably, and gets 30 MPG. Now, consider the FP4 car, which has been stripped down to 250 pounds (113 Kilos) and gets an astounding 480 MPG. Great news. You have the best gas mileage ever! Except, you don’t mention a few features of your fantastic FP4 car. First, the car has been stripped down of everything except a small engine and maybe a seat. What’s more, the wheels are 16-sided (2^4) and provide a bumpy ride as compared to the smooth FP64 sedan ride with wheels that have somewhere around 2^64 sides. There may be places where your FP4 car works just fine, like cruising down Inference Lane, but it will not do well heading down the FP64 HPC highway."

Eadline’s article serves as a reminder that while AI and HPC are converging, the standards for measuring performance in these fields remain distinct. As he puts it, "Fuzzing things up with 'AI FLOPS' will not help either," pointing out that only verified systems that meet the stringent requirements for double-precision calculations should be considered true exascale or zettascale systems.



from TechRadar - All the latest technology news https://ift.tt/bKkBHQa

Monday, October 28, 2024

How to Turn Off the Most Annoying Apple Intelligence Feature on iOS 18.1

Everyone else is raving about this new AI feature on the iPhone, but it continues to vex me. And if I don't like it, maybe you don't either.

from CNET https://ift.tt/as9pP6Y

Latest Tech News

If you want to buy a MacBook Air but your budget won't quite stretch, Infinix has launched a lightweight, highly affordable Windows 11 alternative that brings an impressive array of features, promising to rival higher-end competitors.

The Inbook Air Pro+ weighs only 1kg, placing it firmly in the thin-and-light category - ideal for everyday use and multitasking.

Equipped with Intel’s 13th Gen Core i5 processor (1334U), featuring 10 cores, a 4.6GHz turbo boost, and integrated Iris Xe graphics, the laptop comes with 16GB of LPDDR4X RAM and 512GB of M.2 NVMe SSD storage - doubling the memory and storage capacity of the entry-level MacBook Air. An advanced cooling system with 79 precision-designed 0.2mm S-shaped fan blades prevents the device from overheating when under load.

Short battery life

One of the highlights of the Air Pro+ is its 14-inch OLED 2.8K (2880 x 1800) display. It’s rare to see an OLED panel at this price, so that alone is a great selling point. With a 16:10 aspect ratio, a peak brightness of 440 nits, and a 120Hz refresh rate, it promises vibrant, sharp visuals. The display also supports 100% of both the sRGB and DCI-P3 color gamuts, ensuring accurate color reproduction - ideal for creative professionals.

The Air Pro+ sports all the ports you expect to see on a modern laptop, such as USB-C, HDMI 1.4, and USB 3.2. It also comes with a Full HD+ IR webcam supporting face recognition and a backlit keyboard. Wireless connectivity is provided in the form of WiFi 6 and Bluetooth 5.2.

According to Infinix, the 57Wh battery lasts up to 8–10 hours, which should be just enough to get you through a full workday. In comparison, the Apple MacBook Air offers up to 18 hours. The Air Pro+ does at least support 65W Type-C fast charging.

Currently priced at 49,990 Indian rupees (approximately $600) on Flipkart, the Infinix Air Pro+ provides impressive specs for its price, offering a tempting option for budget-conscious buyers who need performance and portability without breaking the bank.



from TechRadar - All the latest technology news https://ift.tt/wUi0Mfy

Sunday, October 27, 2024

Best Pixel 9 Deals: Save on a New Phone When You Trade In

The Pixel 9 series features amazing upgrades over the 8 series. We've rounded up deals that can help you save when you trade in.

from CNET https://ift.tt/dr3cg1Z

Latest Tech News

Japan is often seen as a global leader in cutting-edge technology, known for innovations in robotics, electronics, and high-speed trains - however, the country is also known for its tendency to hold onto older technology long after it has been abandoned elsewhere.

Only recently has Japan begun to phase out floppy disks in government offices, and far too many of its laptops and devices still come equipped with legacy features like VGA connectors. Most recently, we covered a PCI Express adapter that adds a parallel port to modern PCs, allowing buyers to connect long-forgotten devices like HP LaserJet or dot matrix printers. For bonus nostalgia points, the driver for it comes on a CD, and it’s compatible with Windows XP and newer.

But now, Planex Communications has embraced Japan’s unwillingness to fully move on with the release of its PL-US56K2(A) USB-connected 56K modem, ideal for anyone who still needs to dial into the internet like it’s 1999.

BEEEEEE-DEEEE-DEEEEEE-KEEEEEE-SHHHHH-BRRRRR-DEEEEE-KRRRRRR-WEEEEEEEEE-SHHHHHHH

For around 5,980 yen (about $40) on Amazon, this device is designed for PCs without built-in modems, enabling access to analog public phone lines for internet connectivity, data transmission, and even faxing - all without needing to install any drivers.

The modem supports the ITU-T V.90 and V.92 protocols, offering a maximum theoretical data reception speed of 56Kbps and a transmission speed of up to 33.6Kbps. At those speeds, you won’t be streaming HD videos, but you can at least check your emails or send a fax while reflecting on how far technology has come - or hasn’t, depending on your perspective.
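To put those figures in perspective, a bit of back-of-the-envelope arithmetic, using the modem's theoretical maximum (real-world analog lines were typically slower):

```python
# Illustrative arithmetic: what a 56K connection means in practice.
# Rates are the V.90/V.92 theoretical maxima quoted by the vendor.

DOWNLOAD_BPS = 56_000   # maximum receive rate
UPLOAD_BPS = 33_600     # maximum transmit rate

def transfer_seconds(size_bytes: int, bits_per_second: int) -> float:
    """Time to move a payload at a given line rate, ignoring overhead."""
    return size_bytes * 8 / bits_per_second

# A single 5 MB email attachment at full theoretical download speed:
print(transfer_seconds(5 * 1024 * 1024, DOWNLOAD_BPS) / 60)  # about 12.5 minutes
```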

Powered via USB 2.0, the PL-US56K2(A) is small and light, weighing just 28 grams and measuring a compact 25mm x 75mm x 18mm, making it easy to tuck away next to your other relics from the late '90s like your floppy disks, PalmPilot, and that stack of AOL free trial CDs.

The PL-US56K2(A) isn’t likely to take the world by storm, but it’s a handy solution for anyone still navigating the world of dial-up or needing to send the occasional fax.



from TechRadar - All the latest technology news https://ift.tt/hQJuemT

Saturday, October 26, 2024

6 Wellness and Fitness TikTok Trends Experts Want to 'De-Influence'

If you’re confused about wellness and fitness trends on TikTok, you’re not alone. Here’s how to determine what’s true and what's not.

from CNET https://ift.tt/nVaSmot

La Liga Soccer Livestream: How to Watch Real Madrid vs. Barcelona El Clásico From Anywhere

Spanish rivals meet, with Los Blancos on the brink of history.

from CNET https://ift.tt/TcOGWeK

Best Internet Providers in Chandler, Arizona

While there aren't very many internet options in Chandler, there are high speeds. Here are your choices.

from CNET https://ift.tt/fhjro76

Friday, October 25, 2024

Amazon Drops Roomba Robot Vacuum by 39% to New Record Low Price

The $105 discount is even higher than the Labor Day price cut for the robot vacuum and mop.

from CNET https://ift.tt/etivC7u

Thursday, October 24, 2024

Try Out Online Mattresses in Store Before You Buy Them: How to Test Casper, Purple and More

Buying a mattress online doesn't mean you can't test it before you buy. Here's what you need to know.

from CNET https://ift.tt/xz1EKO0

Latest Tech News

A new leak claims AMD’s upcoming Ryzen 7 9800X3D processor will see an 8% performance boost over the Ryzen 7 7800X3D — in other words, the chip that is regarded as one of the best gaming CPUs on the market now looks set to be dethroned.

This could be a significant boost for PC gamers, especially given the improved 3D V-Cache - an area we flagged as a weakness in our AMD Ryzen 7 7800X3D review. The leak from VideoCardz contains a marketing description of the 9800X3D revealing ‘Next-Gen 3D V-Cache’, which points toward better thermal performance when operating at higher clock speeds.

VideoCardz also reports a 15% improvement over the 7800X3D in multi-threaded workloads - ideal for multitasking and video editing - using 8 cores and 16 threads, striking a balance for both content creators and gamers alongside the aforementioned 3D V-Cache improvements. With these details leaking ahead of the 9800X3D’s confirmed November 7 launch and AMD’s full spec reveal, gamers have some insight into what to expect.

Will the 9800X3D be worth the upgrade?

While we have yet to see the full scope of what the Ryzen 7 9800X3D will have to offer specification-wise, the leaked marketing description gives us a great idea of what is in store for PC gamers. Considering the aforementioned 8% boost in gaming performance and room for slightly higher clock speeds up to 5.2GHz compared to the previous 5GHz, the switch is certainly worth contemplating - and for gamers who have yet to upgrade to an AM5 chip, this performance boost could finally be the push they need.

Despite the improvements listed in the leak, it’s important to note that there is only so much that can be done when it comes to poor game optimization on PC — an upgrade can help specifically with reducing stuttering in certain games, but it’s not the silver bullet for achieving optimal performance. Besides, most modern games are far more dependent on your GPU and available VRAM.

If you’re using the best GPUs on the market, like the Nvidia RTX 4080 Super or RTX 4090, any kind of upgrade isn’t entirely urgent, but we’ll have to wait to see everything AMD’s new processor has to offer before we can pass judgment on the value of this new chip.

You might also like...



from TechRadar - All the latest technology news https://ift.tt/RwYsiFm

Wednesday, October 23, 2024

OnePlus 12 Deals: Enjoy Money Off With Trade-Ins

The OnePlus 12 phones are already pretty affordable, but you can still save money on them with these deals.

from CNET https://ift.tt/wGn6Hi4

Latest Tech News

Back in March 2024, we reported how British AI startup Literal Labs was working to make GPU-based training obsolete with its Tsetlin Machine, a machine learning model that uses logic-based learning to classify data.

It operates through Tsetlin automata, which establish logical connections between features in input data and classification rules. Based on whether decisions are correct or incorrect, the machine adjusts these connections using rewards or penalties.

Developed by Soviet mathematician Mikhail Tsetlin in the 1960s, this approach contrasts with neural networks by focusing on learning automata, rather than modeling biological neurons, to perform tasks like classification and pattern recognition.
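The reward/penalty mechanism described above is simple enough to sketch. Below is an illustrative toy two-action Tsetlin automaton in Python - not Literal Labs' implementation - where rewards push the state deeper into its current action's half and penalties push it toward the boundary, eventually flipping the action:

```python
# A minimal two-action Tsetlin automaton, the building block of a Tsetlin
# Machine. States 0..N-1 select action 0; states N..2N-1 select action 1.
import random

class TsetlinAutomaton:
    def __init__(self, n_states_per_action: int = 3):
        self.n = n_states_per_action
        self.state = random.choice([self.n - 1, self.n])  # start near the boundary

    def action(self) -> int:
        return 0 if self.state < self.n else 1

    def reward(self):
        # Reinforce: move away from the decision boundary (clamped at the ends).
        if self.action() == 0:
            self.state = max(0, self.state - 1)
        else:
            self.state = min(2 * self.n - 1, self.state + 1)

    def penalize(self):
        # Weaken: move toward the boundary, eventually flipping the action.
        if self.action() == 0:
            self.state += 1
        else:
            self.state -= 1

# Train the automaton to prefer action 1: reward it, penalize action 0.
ta = TsetlinAutomaton()
for _ in range(20):
    ta.reward() if ta.action() == 1 else ta.penalize()
print(ta.action())  # 1
```

A full Tsetlin Machine coordinates many such automata, one per feature literal, to vote on classification rules; this sketch only shows how a single automaton learns from feedback.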

Energy-efficient design

Now, Literal Labs, backed by Arm, has developed a model using Tsetlin Machines that, despite its compact size of just 7.29KB, delivers high accuracy and dramatically improves anomaly detection for edge AI and IoT deployments.

The model was benchmarked by Literal Labs using the MLPerf Inference: Tiny suite and tested on a $30 NUCLEO-H7A3ZI-Q development board, which features a 280MHz ARM Cortex-M7 processor and doesn’t include an AI accelerator. The results show Literal Labs’ model achieves inference speeds that are 54 times faster than traditional neural networks while consuming 52 times less energy.

Compared to the best-performing models in the industry, Literal Labs’ model demonstrates both latency improvements and an energy-efficient design, making it suitable for low-power devices like sensors. Its performance makes it viable for applications in industrial IoT, predictive maintenance, and health diagnostics, where detecting anomalies quickly and accurately is crucial.

The use of such a compact and low-energy model could help scale AI deployment across various sectors, reducing costs and increasing accessibility to AI technology.

Literal Labs says, “Smaller models are particularly advantageous in such deployments as they require less memory and processing power, allowing them to run on more affordable, lower-specification hardware. This not only reduces costs but also broadens the range of devices capable of supporting advanced AI functionality, making it feasible to deploy AI solutions at scale in resource-constrained settings.”



from TechRadar - All the latest technology news https://ift.tt/bp73Yy0

Heat Domes and Surging Grid Demand Threaten US Power Grids with Blackouts

A new report shows a sharp increase in peak electricity demand, leading to blackout concerns in multiple states. Here's how experts say ...