Tuesday, December 24, 2024

Latest Tech News


  • Intel's $250 Arc B580 is competitive with both the GeForce RTX 4060 and the Radeon RX 7600 across numerous benchmarks
  • However, both are set to be replaced by new models launching at CES 2025
  • Driver updates from Intel will hopefully drive the performance of the B580 even further

Over two years after its first discrete GPU release, Intel has launched the Arc B580 “Battlemage,” marking its second generation of dedicated graphics cards.

The B580, which will mostly be sold through add-in-board (AIB) partners like Maxsun, Sparkle, and ASRock, features Intel’s updated Xe2 architecture.

It offers efficiency improvements and second-generation Ray Tracing Units (RTUs) alongside enhanced XMX engines, Intel’s counterpart to Nvidia’s Tensor cores.

Unfortunate timing

Puget Systems recently put the $250 GPU card through its paces and found it competes effectively with Nvidia’s GeForce RTX 4060 and AMD’s Radeon RX 7600 across a range of benchmarks. With 12GB of VRAM, the B580 certainly stands out in the budget category, surpassing the RTX 4060’s 8GB at a lower price point.

This additional memory gives it an edge in workflows that demand more VRAM, such as GPU effects in Premiere Pro and Unreal Engine, but results across creative applications were mixed, and at times surprising.

In graphics-heavy tasks like GPU effects for DaVinci Resolve, Adobe After Effects, and Unreal Engine, the B580 impressed, often matching or exceeding more expensive GPUs. Puget Systems noted the B580 matched the RTX 4060 across resolutions in Unreal Engine while benefiting from its superior VRAM capacity.

Unfortunately, inconsistencies in media acceleration held it back in other areas. In Premiere Pro, for example, Intel’s hardware acceleration for HEVC codecs lagged behind expectations, with Puget Systems observing slower results compared to software-based processing. These issues appear to be driver-related, something Intel is likely to address in upcoming updates.

Shortly after it launched in 2022, Puget Systems tested the Arc A750 (8GB and 16GB models) and came away disappointed. The B580 shows clear improvements over its predecessor, and Intel’s continued driver development will no doubt extend the performance of the B580 even further. Intel's release timing is unfortunate, however.

While the B580 is a strong contender in the entry-level segment right now, Nvidia and AMD are expected to reveal replacements for the GeForce RTX 4060 and the Radeon RX 7600 at CES 2025, and those new models are likely to diminish the appeal, and competitiveness, of Intel's new GPU significantly.

You might also like



from Latest from TechRadar US in News,opinion https://ift.tt/6WrITmd

Monday, December 23, 2024

A Week Left to Spend Your 2024 FSA Money: How It Works and What You Can Buy

If you don't use your Flexible Spending Account funds, you could lose them at the end of the year.

from CNET https://ift.tt/U6F10uA

Latest Tech News


  • HBM is fundamental to the AI revolution as it allows ultra fast data transfer close to the GPU
  • Scaling HBM performance is difficult if it sticks to JEDEC protocols
  • Marvell and others want to develop a custom HBM architecture to accelerate its development

Marvell Technology has unveiled a custom HBM compute architecture designed to increase the efficiency and performance of XPUs, a key component in the rapidly evolving cloud infrastructure landscape.

The new architecture, developed in collaboration with memory giants Micron, Samsung, and SK Hynix, aims to address limitations in traditional memory integration by offering tailored solutions for next-generation data center needs.

The architecture focuses on improving how XPUs - used in advanced AI and cloud computing systems - handle memory. By optimizing the interfaces between AI compute silicon dies and High Bandwidth Memory stacks, Marvell claims the technology reduces power consumption by up to 70% compared to standard HBM implementations.

Moving away from JEDEC

Additionally, its redesign reportedly decreases silicon real estate requirements by as much as 25%, allowing cloud operators to expand compute capacity or include more memory. This could potentially allow XPUs to support up to 33% more HBM stacks, massively boosting memory density.
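
As a rough sanity check (our illustration, not a figure from Marvell's materials), the 25% and 33% numbers are consistent with each other: if the HBM interface area per stack shrinks to 75% of its original size, the same silicon budget fits roughly 1/0.75 ≈ 1.33 times as many stacks.

```python
# Back-of-the-envelope check of the claimed figures (illustrative only).
area_reduction = 0.25                      # claimed: up to 25% less silicon for the HBM interfaces

area_per_stack = 1 - area_reduction        # each stack's interface now occupies 75% of the baseline area
extra_stacks = 1 / area_per_stack - 1      # additional stacks that fit in the same silicon budget

print(f"Interface area per stack: {area_per_stack:.0%} of baseline")   # 75%
print(f"Additional HBM stacks in the same area: ~{extra_stacks:.0%}")  # ~33%
```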

“The leading cloud data center operators have scaled with custom infrastructure. Enhancing XPUs by tailoring HBM for specific performance, power, and total cost of ownership is the latest step in a new paradigm in the way AI accelerators are designed and delivered,” Will Chu, Senior Vice President and General Manager of the Custom, Compute and Storage Group at Marvell said.

“We’re very grateful to work with leading memory designers to accelerate this revolution and help cloud data center operators continue to scale their XPUs and infrastructure for the AI era.”

HBM plays a central role in XPUs, which use advanced packaging technology to integrate memory and processing power. Traditional architectures, however, limit scalability and energy efficiency.

Marvell’s new approach modifies the HBM stack itself and its integration, aiming to deliver better performance for less power and lower costs - key considerations for hyperscalers who are continually seeking to manage rising energy demands in data centers.

ServeTheHome’s Patrick Kennedy, who reported the news live from Marvell Analyst Day 2024, noted the cHBM (custom HBM) is not a JEDEC solution and so will not be standard off the shelf HBM.

“Moving memory away from JEDEC standards and into customization for hyperscalers is a monumental move in the industry,” he writes. “This shows Marvell has some big hyperscale XPU wins since this type of customization in the memory space does not happen for small orders.”

The collaboration with leading memory makers reflects a broader trend in the industry toward highly customized hardware.

“Increased memory capacity and bandwidth will help cloud operators efficiently scale their infrastructure for the AI era,” said Raj Narasimhan, senior vice president and general manager of Micron’s Compute and Networking Business Unit.

“Strategic collaborations focused on power efficiency, such as the one we have with Marvell, will build on Micron’s industry-leading HBM power specs, and provide hyperscalers with a robust platform to deliver the capabilities and optimal performance required to scale AI.”

More from TechRadar Pro



from Latest from TechRadar US in News,opinion https://ift.tt/9UONTXo

Sunday, December 22, 2024

Best Internet Providers in Las Vegas, Nevada

Las Vegas has a decent variety of options for good internet. This list will help you make your choice based on speed, value and availability.

from CNET https://ift.tt/DVQl7xY

Latest Tech News


  • First look at Dell Pro Max 18 Plus emerges in new images
  • Pictures show a completely redesigned mobile workstation laptop
  • Pro Max could either replace the popular Precision range or form a whole new lineup, offering up to 256GB of RAM and up to 16TB of SSD storage

Leaked details suggest Dell is developing a new addition to its workstation offerings, designed to deliver high-performance capabilities for professional workloads.

Available in two sizes, the Dell Pro Max 18 Plus is expected to debut officially at CES 2025 and could either replace the popular Precision range or form an entirely new lineup.

The device allegedly features an 18-inch display, while the Pro Max 16 Plus provides a smaller 16-inch alternative with similar specifications. According to information shared by Song1118 on Weibo, which includes Dell marketing slides, the laptops will be powered by Intel’s upcoming Core Ultra 200HX “Arrow Lake-HX” CPUs. For graphics, the series will reportedly feature Nvidia’s Ada-based RTX 5000-class workstation GPUs, though the exact model isn’t named in the leaked documents.

Triple-fan cooling system

The Pro Max series is set to offer up to 200 watts for the CPU/GPU combination in the 18-inch version and 170 watts in the 16-inch model. VideoCardz notes that while we have already seen much higher targets in ultra-high-end gaming machines, “this would be the first laptop confirmed to offer 200W for a next-gen Intel/Nvidia combo.”

The laptops will reportedly support up to 256GB of CAMM2 memory. The 18-inch model can accommodate up to 16TB of storage via four M.2 2280 SSD slots, while the 16-inch version supports 12TB with three slots. The heat generated by these high-power components will be managed by an “industry first” triple-fan cooling system.

Additional features look to include a magnesium alloy body to reduce weight, an 8MP camera, and a tandem OLED display option. Connectivity options include Thunderbolt 5 (80/120Gbps), WiFi 7, Bluetooth 5.4, and optional 5G WWAN. The two laptops also feature a quick-access bottom cover for easy serviceability and repairability of key components like batteries, memory, and storage.

The Dell Pro Max 16/18 Plus laptops are expected to be officially unveiled along with pricing at CES on January 7, 2025, with a mid-2025 release window.

You might also like



from Latest from TechRadar US in News,opinion https://ift.tt/IsbKH3r

Saturday, December 21, 2024

Latest Tech News

  • Shuttle XH610G2 offers compact design supporting Intel Core processors up to 24 cores
  • Exclusive heat pipe technology ensures reliable operation in demanding environments
  • Flexible storage options include M.2 slots and SATA interfaces

Shuttle has released its latest mini PC, aimed at meeting the diverse demands of modern commercial tasks.

With a small 5-liter chassis and a compact design measuring just 250mm x 200mm x 95mm, the Shuttle XH610G2 employs the Intel H610 chipset, making it compatible with a broad spectrum of Intel Core processors, from the latest 14th Gen models back to the 12th Gen series.

The company says the device is designed to handle applications that require significant computational power, such as image recognition, 3D video creation, and AI data processing.

Shuttle XH610G2

The Shuttle XH610G2 uses exclusive heat pipe cooling technology that allows the workstation to operate reliably even in demanding environments. It can withstand ambient temperatures from 0 to 50 degrees Celsius, making it suitable for continuous operation in a range of commercial settings.

The Shuttle XH610G2 can accommodate Intel Core models with up to 24 cores and a peak clock speed of 5.8GHz. This processing power allows the workstation to handle intensive tasks while staying within a 65W thermal design power (TDP) limit. The graphics are enhanced by the integrated Intel UHD graphics with Xe architecture, offering capabilities to manage demanding visual applications, from high-quality media playback to 4K triple-display setups. The inclusion of dual HDMI 2.0b ports and a DisplayPort output facilitates independent 4K display support.

The XH610G2 offers extensive customization and scalability with support for dual PCIe slots, one x16 and one x1, allowing users to install discrete graphics cards or other high-performance components like video capture cards.

For memory, the XH610G2 supports up to 64GB of DDR5-5600 SO-DIMM memory split across two slots, making it well suited to resource-intensive applications and complex computational tasks. Running at a low 1.1V, this memory configuration also minimizes energy consumption, which can be a significant advantage in power-conscious environments.

In terms of storage, the device features a SATA 6.0Gb/s interface for a 2.5-inch SSD or HDD, along with two M.2 slots for NVMe and SATA drives. Users are advised to choose a SATA SSD over a traditional HDD for faster performance.

The I/O options on the XH610G2 further enhance its flexibility: four USB 3.2 Gen 1 ports, two Ethernet ports (one 1GbE, one 2.5GbE), and an optional RS232 COM port for specialized peripherals, which can be particularly useful in industrial or legacy environments.

Furthermore, the compact chassis includes M.2 expansion slots for both WLAN and LTE adapters, providing options for wireless connectivity that can be critical in setups where wired connections are not feasible.

You might also like



from Latest from TechRadar US in News,opinion https://ift.tt/YuyqwPS

Latest Tech News

  • TeamGroup claims CAMM2 memory promises high-speed DDR5 performance
  • Revolutionary design offers dual-channel operation in a single module
  • Limited motherboard compatibility poses challenges for CAMM2 adoption

TeamGroup has introduced its Compression Attached Memory Module 2 (CAMM2), promising high-speed DDR5 performance with its new T-Create lineup.

The company says CAMM2 features a revolutionary design that offers significant advantages over traditional memory types like SO-DIMM, U-DIMM, and R-DIMM. It supports dual-channel operation with just one module, streamlining system architecture and lowering power consumption.

The built-in Client Clock Driver (CKD) boosts signal integrity, making CAMM2 well suited to slim notebooks, while its optimized thermal design enhances heat dissipation, allowing higher performance despite the smaller form factor.

CAMM2-compatible motherboards are very scarce

The T-Create CAMM2 modules are designed with DDR5-7200 specifications and a CAS latency of CL34-42-42-84, delivering remarkable read, write, and copy speeds of up to 117GB/s, 108GB/s, and 106GB/s, respectively.

This performance is achieved through manual overclocking, which has driven latency down to 55ns, a significant reduction compared to typical DDR5 JEDEC specifications. TeamGroup is now focused on pushing boundaries and the company says it is working to achieve even faster speeds, aiming to reach DDR5-8000 and even DDR5-9000 in future iterations.
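
For context, the quoted 55ns figure is a measured end-to-end memory latency, which is not the same as the first-word latency implied by the CL34 timing. A quick conversion of the CAS latency into nanoseconds (standard DDR arithmetic, our illustration rather than TeamGroup's data) looks like this:

```python
# Convert a DDR5 CAS latency (in clock cycles) into first-word latency in nanoseconds.
# Note: this is not the ~55ns end-to-end figure TeamGroup quotes, which is a measured
# system latency; it simply shows what the CL34 timing at DDR5-7200 implies.

def cas_latency_ns(cl_cycles: int, data_rate_mts: int) -> float:
    clock_mhz = data_rate_mts / 2          # DDR transfers data twice per clock cycle
    return cl_cycles / clock_mhz * 1_000   # cycles / MHz gives microseconds; scale to ns

print(f"DDR5-7200 CL34: {cas_latency_ns(34, 7200):.2f} ns")  # ~9.44 ns
print(f"DDR5-5600 CL46: {cas_latency_ns(46, 5600):.2f} ns")  # ~16.43 ns (a common JEDEC bin, for comparison)
```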

One major hurdle for TeamGroup is the limited availability of CAMM2-compatible motherboards. The T-Create CAMM2 memory was tested on MSI’s Z790 Project Zero, one of the few boards currently compatible with the new form factor.

Other brands, such as Gigabyte, hint at possible CAMM2-enabled designs, like an upcoming TACHYON board. However, the CAMM2 ecosystem is still emerging, and widespread adoption may depend on the release of more compatible boards and competitive pricing.

Nevertheless, TeamGroup expects to launch the first-generation T-Create CAMM2 modules by Q1 2025, with broader motherboard support potentially arriving as manufacturers introduce new CPU platforms. With AMD and Intel rumoured to announce budget-friendly CPUs at CES 2025, the rollout of mid-range boards compatible with CAMM2 could align with TeamGroup’s release plans, potentially helping CAMM2 secure a foothold in the market.

CAMM2 offers a couple of advantages over the widely used SO-DIMM, UDIMM, and RDIMM standards. Notably, CAMM2 modules operate in dual-channel mode while only occupying a single physical slot. Furthermore, they incorporate a Client Clock Driver (CKD), similar to CUDIMM memory, which bolsters signal integrity at high speeds, allowing for more reliable and faster memory performance.

These features make CAMM2 particularly appealing for laptops, which often face limitations with current SO-DIMM speeds or non-upgradeable LPDDR5/5X options.

Via Tom's Hardware

You might also like



from Latest from TechRadar US in News,opinion https://ift.tt/wjS3NFW

Latest Tech News


  • We might not see the OnePlus Open 2 until later in 2025
  • Previous leaks predicted a Q1 2025 launch
  • Major upgrades have been rumored for the foldable

A quick browse through our OnePlus Open review will tell you why we're very much looking forward to the foldable phone's successor – though if a new leak is to be believed, the wait for the OnePlus Open 2 might be longer than originally thought.

According to tipster Sanju Choudhary (via GSMArena), the handset is going to break cover during the second half of next year – anytime from July onwards. That contradicts an earlier rumor that it would be unveiled in the first three months of 2025.

There's no indication whether or not OnePlus has changed its plans, or if the launch date was originally set for the first quarter of next year and has since been pushed back (engineering foldable phones is a tricky challenge, after all).

It's also fair to say that none of these rumors can be confirmed until OnePlus actually makes its announcement. The original OnePlus Open was launched in October 2023, which doesn't really tell us much about a schedule for its successor.

Upgrades on the way

Whenever the next OnePlus folding phone shows up, it sounds like it's going to be worth the wait – which has lasted 14 months and counting. Rumors have pointed to major upgrades in terms of the rear camera and the internal components.

We've also heard that the OnePlus Open 2 will have the biggest battery ever seen in a foldable, as well as being thinner and more waterproof than the handset it's replacing. That's a significant number of improvements to look forward to.

In our OnePlus Open review, we described the phone as "the only foldable phone that doesn't compromise", and there was particular praise for the design and the camera setup – so the upcoming upgrade has a lot to live up to.

Before we see another foldable from OnePlus, we'll see the OnePlus 13 and the OnePlus 13R made available worldwide: OnePlus has confirmed this is happening on January 7, so we could also get a teaser for the OnePlus Open 2 at the same time.

You might also like



from Latest from TechRadar US in News,opinion https://ift.tt/bJIzVCo

Friday, December 20, 2024

Latest Tech News

  • 15% of Steam users' playtime dedicated to 2024 games
  • 47% of playtime on games up to eight years old
  • Many reasons for this, including more older games to play

Steam’s end-of-year review has always revealed some fascinating PC gaming trends, and this year’s is no exception. According to 2024’s stats, only 15% of Steam users’ total playtime went to games that launched in 2024.

Looking further at the data reported by PC Gamer, 47% of total playtime on Steam went to games released in the previous seven years, while 37% went to games that launched eight or more years ago. The question is: why, and what does this mean?

One possible explanation is that gamers could be focusing more on their backlogs rather than new releases. We do know that playtime for current releases is higher this year than in 2023, as there was an increase from 9% to 15%, which means players are buying new titles at least. There are other possibilities for this trend as well.

Other possibilities for this statistic

One reason could be that older games are easier to access due to their cheaper prices, especially due to the many Steam sales. There’s also the influence of the Steam Deck and what’s considered ‘Steam Deck playable,’ since many recent AAA games may be too demanding for a portable PC.

There’s also the fact that older live service games like Counter-Strike, Dota 2, and PUBG have made up Steam's Most Played charts, while newer titles have an incredibly difficult time breaking through and building a player base.

Another reason is that Steam has over 200,000 titles released over the course of decades, compared to the relatively paltry 18,000 games released in 2024 according to SteamDB. So naturally, more users will spend more time playing older games versus recent ones.
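
Put in rough numbers (using the figures cited above, purely as an illustration), 2024's releases make up a small slice of Steam's catalog yet drew a larger slice of playtime, which is part of why the 15% figure reads better than it first sounds:

```python
# Rough scale comparison using the figures cited above (illustrative only).
total_steam_titles = 200_000    # "over 200,000 titles" in Steam's catalog
titles_released_2024 = 18_000   # 2024 releases, per SteamDB as cited
playtime_share_2024 = 0.15      # share of 2024 Steam playtime spent on 2024 releases

catalog_share_2024 = titles_released_2024 / total_steam_titles
print(f"2024 releases as a share of the catalog: {catalog_share_2024:.0%}")  # 9%
print(f"2024 releases' share of playtime:        {playtime_share_2024:.0%}") # 15%
```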

Regardless, 15% of playtime dedicated to new games is rather impressive next to 2022’s 17%, and it means the numbers are recovering after the dip to 9% in 2023. Hopefully next year we’ll see another increase as gamers delve into more new titles.

You might also like...



from Latest from TechRadar US in News,opinion https://ift.tt/oy487SC

Latest Tech News


  • OpenAI announced upcoming o3 and o3-mini AI models.
  • The new models are enhanced "reasoning" AI models that build on the o1 and o1-mini models released this year.
  • Both models handily outperform existing AI models and will roll out in the next few months.

The final day of the 12 Days of OpenAI brought back OpenAI CEO Sam Altman to show off a brand-new set of AI models coming in the new year. The o3 and o3-mini models are enhanced versions of the relatively new o1 and o1-mini models. They're designed to think before they speak, reasoning out their answers. The mini version is smaller and aimed more at carrying out a limited set of specific tasks but with the same approach.

OpenAI is calling it a big step toward artificial general intelligence (AGI), which is a pretty bold claim for what is, in some ways, a mild improvement to an already powerful model. You might have noticed there's a number missing between the current o1 and the upcoming o3 model. According to Altman, that's because OpenAI wants to avoid any confusion with British telecom company O2.

So, what makes o3 special? Unlike regular AI models that spit out answers quickly, o3 takes a beat to reason things out. This “private chain of thought” lets the model fact-check itself before responding, which helps it avoid some of the classic AI pitfalls, like confidently spewing out wrong answers. This extra thinking time can make o3 slower, even if only a little bit, but the payoff is better accuracy, especially in areas like math, science, and coding.

One great aspect of the new models is that you can adjust that extra thinking time manually. If you’re in a hurry, you can set it to “low compute” for quick responses. But if you want top-notch reasoning, crank it up to “high compute” and give it a little more time to mull things over. In tests, o3 has easily outstripped its predecessor.
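
OpenAI hasn't published API details for o3 yet, so the snippet below is only a speculative sketch of what choosing a reasoning-effort level might look like through the OpenAI Python client; the model name and the `reasoning_effort` parameter are assumptions here, not confirmed API surface.

```python
# Speculative sketch only: o3 isn't publicly available, and the model name and
# "reasoning_effort" parameter below are assumptions, not confirmed API details.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="o3-mini",              # hypothetical model identifier
    reasoning_effort="high",      # assumed control: "low" for quick answers, "high" for deeper reasoning
    messages=[
        {"role": "user", "content": "Prove that the square root of 2 is irrational."},
    ],
)
print(response.choices[0].message.content)
```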

This is not quite AGI; o3 can't take over for humans in every way. It also does not reach OpenAI's definition of AGI, which describes models that outperform humans at most economically valuable work. Still, should OpenAI reach that goal, things get interesting for its partnership with Microsoft since that would end OpenAI's obligation to give Microsoft exclusive access to the most advanced AI models.

New year, new models

Right now, o3 and its mini counterpart aren’t available to everyone. OpenAI is giving safety researchers a sneak peek via Copilot Labs, and the rest of us can expect the o3-mini model to drop in late January, with the full o3 following soon after. It’s a careful, measured rollout, which makes sense given the kind of power and complexity we’re talking about here.

Still, o3 gives us a glimpse of where things are headed: AI that doesn’t just generate content but actually thinks through problems. Whether it gets us to AGI or not, it’s clear that smarter, reasoning-driven AI is the next frontier. For now, we’ll just have to wait and see if o3 lives up to the hype or if this last gift from OpenAI is just a disguised lump of coal.

You might also like



from Latest from TechRadar US in News,opinion https://ift.tt/AtjHbLM

Latest Tech News


  • Asus' latest monitor releases come with a kit to mount a mini PC at the back
  • There's also a groove to place your smartphone, plus an integrated USB hub
  • Sadly it is not a 4K display, merely a full HD+ one

Mini PCs are becoming increasingly powerful while retaining a compact design and a wealth of ports, making them a versatile option for users who need a capable setup but don’t have the workspace to dedicate to a traditional desktop PC.

Recognizing this trend, Asus has introduced two 24-inch monitors, the BE248CFN and BE248QF, which are designed to accommodate these miniature marvels. Each monitor includes a mounting kit to securely attach a mini PC at the back of the stand, positioned closer to the base for easier access.

The two monitors offer other practical features, including a groove at the base that you can use to stash a smartphone. There’s also an integrated USB hub for users managing multiple devices.

Not 4K, sadly

(Image: Asus BE248CFN with a mini PC mounted on the stand. Credit: Asus)

Both models offer ergonomic adjustments to suit various viewing preferences. The stands support tilt from -5 to 35 degrees, swivel 180 degrees left and right, pivot 90 degrees in either direction, and 130mm of height adjustment. The IPS panels deliver wide 178-degree viewing angles and 16.7 million colors, with a 5ms response time, 350cd/m² brightness, and a contrast ratio of 3,000:1.

Rather disappointingly, the resolution of both screens is Full HD+ (1,920 x 1,200) rather than 4K or higher, which may limit their appeal for users who need more detail or sharper visuals, such as content creators, or those who like to keep a lot of windows open on screen at the same time.

Connectivity varies slightly between the two models. The BE248CFN includes HDMI 1.4, DisplayPort 1.4, USB Type-C with a 96W power delivery function, a four-port USB 3.2 Gen 1 hub, and Gigabit Ethernet. The BE248QF adds a mini D-Sub 15-pin connector, catering to users with legacy hardware.

Both monitors incorporate 2W stereo speakers and Asus Eye Care technologies, such as Flicker-Free and Low Blue Light, which should make them comfortable to use during extended work sessions.

There’s no word on pricing or global availability as yet, but they should be on sale soon, starting in Japan, before hopefully heading to other countries.

You might also like




from Latest from TechRadar US in News,opinion https://ift.tt/AIZ1pNj

Thursday, December 19, 2024

I Set Up My Own ADT Home Security System. Here's How It Works

Commentary: I didn't need a technician to come to my home to set up ADT's smart security system. Here's what it includes and how I did my own DIY installation.

from CNET https://ift.tt/VB5lSek

Latest Tech News


  • Apple developing "Baltra" server chip for AI, targeting 2026 production
  • Israeli silicon team leading project; Mac chip canceled for focus
  • Broadcom collaboration and TSMC’s N3P tech to enhance development

Apple is reportedly developing its first server chip tailored specifically for artificial intelligence.

A paywalled report by Wayne Ma and Qianer Liu in The Information claims the project, codenamed “Baltra,” aims to address the growing computational demands of AI-driven features and is expected to enter mass production by 2026.

Apple’s silicon design team in Israel, which was responsible for designing the processors that replaced Intel chips in Macs in 2020, is now leading the development of the AI processor, according to sources. To support this effort, Apple has reportedly canceled the development of a high-performance Mac chip made up of four smaller chips stitched together.

Central to Apple’s efforts

The report notes this decision, made over the summer, is intended to free up engineers in Israel to focus on Baltra, signaling Apple’s shift in priorities toward AI hardware.

Apple is working with semiconductor giant Broadcom on this project, using the company’s advanced networking technologies needed for AI processing. While Apple usually designs its chips in-house, Broadcom’s role is expected to focus on networking solutions, marking a new direction in their partnership.

To make the AI chip, The Information says Apple plans to use TSMC’s advanced N3P process, an upgrade from the technology behind its latest processors, like the M4. This move highlights Apple’s focus on enhancing performance and efficiency in its chip designs.

The Baltra chip is expected to drive Apple’s efforts to integrate AI more deeply into its ecosystem. By leveraging Broadcom’s networking expertise and TSMC's advanced manufacturing techniques, Apple appears determined to catch up to rivals in the AI space and establish a stronger presence in the industry.

In November 2024, we reported that Apple approached its long-time manufacturing partner Foxconn to build AI servers in Taiwan. These servers, using Apple’s M-series chips, are intended to support Apple Intelligence features in iPhones, iPads, and MacBooks.

You might also like



from TechRadar - All the latest technology news https://ift.tt/7KHpRVC

Wednesday, December 18, 2024

Sony’s WF-1000XM5 Wireless Earbuds Make a Great Gift at This Record-Low Price

The Sony WF-1000XM5 wireless earbuds offer superb sound quality and you can now snag them at Amazon for $198, their lowest price ever.

from CNET https://ift.tt/CFWAean

Latest Tech News


  • Huawei may be adding HBM support to Kunpeng SoC
  • Clues hint at a replacement for the Kunpeng 920, launched in 2019
  • New SoC with HBM may target HPC, server market rivals

Huawei engineers have reportedly released new Linux patches to enable driver support for High Bandwidth Memory (HBM) management on the company’s ARM-based Kunpeng high-performance SoC.

The Kunpeng 920, which debuted in January 2019 as the company’s first server CPU, is a 7nm processor featuring up to 64 cores based on the Armv8.2 architecture. It supports eight DDR4 memory channels and has a thermal design power (TDP) of up to 180W. While these specifications were competitive when first introduced, things have moved on significantly since.

Introducing a new Kunpeng SoC with integrated HBM would align with industry trends as companies seek to boost memory bandwidth and performance in response to increasingly demanding workloads. It could also signal Huawei’s efforts to maintain competitiveness in the HPC and server markets dominated by Intel Xeon and AMD EPYC.

No official announcement... yet

Phoronix’s Michael Larabel notes that Huawei has not yet formally announced a new Kunpeng SoC (with or without HBM), and references to it are sparse. Kernel patches, however, have previously indicated work on integrating HBM into the platform.

The latest patches specifically address power control for HBM devices on the Kunpeng SoC, introducing the ability to power on or off HBM caches depending on workload requirements.

The patch series includes detailed descriptions of this functionality. Huawei explains that HBM offers higher bandwidth but consumes more power. The proposed drivers will allow users to manage HBM power consumption, optimizing energy use for workloads that do not require high memory bandwidth.

The patches also introduce a driver for HBM cache, enabling user-space control over this feature. By using HBM as a cache, operating systems can leverage its bandwidth benefits without needing direct awareness of the cache’s presence. When workloads are less demanding, the cache can be powered down to save energy.
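
The report doesn't name the actual user-space interface, so the snippet below is purely hypothetical: it sketches how an operator might toggle the HBM cache's power state if it were exposed through sysfs, with the path and values invented for illustration.

```python
# Hypothetical illustration only: the real sysfs path and values for Huawei's
# HBM cache power control are not given in the patch coverage above.
from pathlib import Path

HBM_CACHE_POWER = Path("/sys/devices/platform/hbm_cache0/power_state")  # invented path

def set_hbm_cache_power(enabled: bool) -> None:
    """Power the HBM cache on for bandwidth-heavy work, or off to save energy."""
    HBM_CACHE_POWER.write_text("1" if enabled else "0")

# Example: power the cache down during a low-bandwidth phase of a workload.
set_hbm_cache_power(False)
```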

While we don't have any concrete details on future Kunpeng SoCs, integrating HBM could potentially allow them to compete more effectively against other ARM-based server processors, as well as Intel’s latest Xeon and AMD EPYC offerings.

You might also like



from TechRadar - All the latest technology news https://ift.tt/fexBqHY

Heat Domes and Surging Grid Demand Threaten US Power Grids with Blackouts

A new report shows a sharp increase in peak electricity demand, leading to blackout concerns in multiple states. Here's how experts say ...