
Thursday, December 19, 2024

Latest Tech News


  • Apple developing "Baltra" server chip for AI, targeting 2026 production
  • Israeli silicon team leading project; Mac chip canceled for focus
  • Broadcom collaboration and TSMC’s N3P tech to enhance development

Apple is reportedly developing its first server chip tailored specifically for artificial intelligence.

A paywalled report by Wayne Ma and Qianer Liu in The Information claims the project, codenamed “Baltra,” aims to address the growing computational demands of AI-driven features and is expected to enter mass production by 2026.

Apple’s silicon design team in Israel, which was responsible for designing the processors that replaced Intel chips in Macs in 2020, is now leading the development of the AI processor, according to sources. To support this effort, Apple has reportedly canceled the development of a high-performance Mac chip made up of four smaller chips stitched together.

Central to Apple’s efforts

The report notes this decision, made over the summer, is intended to free up engineers in Israel to focus on Baltra, signaling Apple’s shift in priorities toward AI hardware.

Apple is working with semiconductor giant Broadcom on this project, using the company’s advanced networking technologies needed for AI processing. While Apple usually designs its chips in-house, Broadcom’s role is expected to focus on networking solutions, marking a new direction in their partnership.

To make the AI chip, The Information says Apple plans to use TSMC’s advanced N3P process, an upgrade from the technology behind its latest processors, like the M4. This move highlights Apple’s focus on enhancing performance and efficiency in its chip designs.

The Baltra chip is expected to drive Apple’s efforts to integrate AI more deeply into its ecosystem. By leveraging Broadcom’s networking expertise and TSMC's advanced manufacturing techniques, Apple appears determined to catch up to rivals in the AI space and establish a stronger presence in the industry.

In November 2024, we reported that Apple approached its long-time manufacturing partner Foxconn to build AI servers in Taiwan. These servers, using Apple’s M-series chips, are intended to support Apple Intelligence features in iPhones, iPads, and MacBooks.

from TechRadar - All the latest technology news https://ift.tt/7KHpRVC

Wednesday, December 18, 2024

Latest Tech News


  • Huawei may be adding HBM support to Kunpeng SoC
  • Clues hint at a replacement for the Kunpeng 920, launched in 2019
  • New SoC with HBM may target HPC, server market rivals

Huawei engineers have reportedly released new Linux patches to enable driver support for High Bandwidth Memory (HBM) management on the company’s ARM-based Kunpeng high-performance SoC.

The Kunpeng 920, which debuted in January 2019 as the company’s first server CPU, is a 7nm processor featuring up to 64 cores based on the Armv8.2 architecture. It supports eight DDR4 memory channels and has a thermal design power (TDP) of up to 180W. While these specifications were competitive when first introduced, things have moved on significantly since.

Introducing a new Kunpeng SoC with integrated HBM would align with industry trends as companies seek to boost memory bandwidth and performance in response to increasingly demanding workloads. It could also signal Huawei’s efforts to maintain competitiveness in the HPC and server markets dominated by Intel Xeon and AMD EPYC.

No official announcement... yet

Phoronix’s Michael Larabel notes that Huawei has not yet formally announced a new Kunpeng SoC (with or without HBM), and references to it are sparse. Kernel patches, however, have previously indicated work on integrating HBM into the platform.

The latest patches specifically address power control for HBM devices on the Kunpeng SoC, introducing the ability to power on or off HBM caches depending on workload requirements.

The patch series includes detailed descriptions of this functionality. Huawei explains that HBM offers higher bandwidth but consumes more power. The proposed drivers will allow users to manage HBM power consumption, optimizing energy use for workloads that do not require high memory bandwidth.

The patches also introduce a driver for HBM cache, enabling user-space control over this feature. By using HBM as a cache, operating systems can leverage its bandwidth benefits without needing direct awareness of the cache’s presence. When workloads are less demanding, the cache can be powered down to save energy.
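
The workload-dependent control these patches expose can be pictured with a toy policy in Python. Everything below (the function name, the thresholds, the bandwidth figures) is hypothetical and purely illustrative: the real drivers provide the on/off mechanism for the HBM cache and leave the policy to user space.

```python
def hbm_power_state(avg_bandwidth_gbps, ddr_capacity_gbps=150.0, hysteresis=0.8):
    """Toy policy: keep the HBM cache powered only when the workload's
    bandwidth demand approaches what plain DDR can supply on its own.

    All numbers are illustrative, not Huawei's; the kernel patches merely
    expose this kind of on/off decision to user space.
    """
    if avg_bandwidth_gbps >= ddr_capacity_gbps * hysteresis:
        return "on"   # bandwidth-bound: HBM's extra bandwidth justifies its power draw
    return "off"      # light load: power the cache down to save energy

print(hbm_power_state(40.0))   # light workload -> "off"
print(hbm_power_state(140.0))  # near the DDR ceiling -> "on"
```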

While we don't have any concrete details on future Kunpeng SoCs, integrating HBM could allow them to compete more effectively against other ARM-based server processors, as well as Intel's latest Xeon and AMD EPYC offerings.

from TechRadar - All the latest technology news https://ift.tt/fexBqHY

Tuesday, December 17, 2024

Latest Tech News


  • Slim-Llama reduces power needs using binary/ternary quantization
  • Achieves 4.59x efficiency boost, consuming 4.69–82.07mW at scale
  • Supports 3B-parameter models with 489ms latency

Traditional large language models (LLMs) often suffer from excessive power demands due to frequent external memory access. However, researchers at the Korea Advanced Institute of Science and Technology (KAIST) have now developed Slim-Llama, an ASIC designed to address this issue through clever quantization and data management.

Slim-Llama employs binary/ternary quantization which reduces the precision of model weights to just 1 or 2 bits, significantly lowering the computational and memory requirements.

To further improve efficiency, it integrates a Sparsity-aware Look-up Table, improving sparse data handling and reducing unnecessary computations. The design also incorporates an output reuse scheme and index vector reordering, minimizing redundant operations and improving data flow efficiency.
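
To illustrate the core idea, here is a generic threshold-based ternary quantizer in Python. This is a textbook sketch of the technique, not KAIST's actual scheme; the threshold value and scaling approach are assumptions for illustration.

```python
def ternary_quantize(weights, threshold=0.05):
    """Map each float weight to -1, 0, or +1, storable in 2 bits.

    A single shared scale factor preserves overall magnitude, and
    near-zero weights become exact zeros, which sparsity-aware
    hardware can skip instead of multiplying.
    """
    scale = sum(abs(w) for w in weights) / len(weights)
    quantized = [1 if w > threshold else -1 if w < -threshold else 0
                 for w in weights]
    return quantized, scale

q, s = ternary_quantize([0.4, -0.3, 0.01, 0.2, -0.02])
print(q)  # [1, -1, 0, 1, 0]: five 32-bit floats now fit in 10 bits
```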

Reduced dependency on external memory

According to the team, the technology demonstrates a 4.59x improvement in benchmark energy efficiency compared to previous state-of-the-art solutions.

Slim-Llama achieves system power consumption as low as 4.69mW at 25MHz and scales to 82.07mW at 200MHz, maintaining impressive energy efficiency even at higher frequencies. It is capable of delivering peak performance of up to 4.92 TOPS at 1.31 TOPS/W, further showcasing its efficiency.

The chip features a total die area of 20.25mm², utilizing Samsung’s 28nm CMOS technology. With 500KB of on-chip SRAM, Slim-Llama reduces dependency on external memory, significantly cutting energy costs associated with data movement. The system supports external bandwidth of 1.6GB/s at 200MHz, promising smooth data handling.

Slim-Llama supports models like Llama 1bit and Llama 1.5bit, with up to 3 billion parameters, and KAIST says it delivers benchmark performance that meets the demands of modern AI applications. With a latency of 489ms for the Llama 1bit model, Slim-Llama demonstrates both efficiency and performance, making it the first ASIC to run billion-parameter models with such low power consumption.

Although it's early days, this breakthrough in energy-efficient computing could potentially pave the way for more sustainable and accessible AI hardware solutions, catering to the growing demand for efficient LLM deployment. The KAIST team is set to reveal more about Slim-Llama at the 2025 IEEE International Solid-State Circuits Conference in San Francisco on Wednesday, February 19.

from TechRadar - All the latest technology news https://ift.tt/2UdjqZW

Monday, December 16, 2024

Latest Tech News


  • Polysoft offers SSD upgrades for Mac Studio at significantly lower prices
  • StudioDrive features overvoltage protection and durable components
  • Offered in 2TB, 4TB, and 8TB capacities, shipping next year

Apple introduced the Mac Studio in 2022 with the M1 chip, followed by the M2 model in 2023, and although these compact powerhouses have been lauded for their performance, buyers have rightly expressed concerns about the limited base SSD configurations and the absence of post-purchase upgrade options.

External USB-C or Thunderbolt SSDs are a common workaround for users seeking additional storage, but they don't match the speed and convenience of internal storage solutions.

Stepping in to address this gap, French company Polysoft has created the first publicly available SSD upgrade solution for Apple Silicon devices. Offered at a fraction of Apple’s prices, these SSD modules are the result of an extensive reverse-engineering process.

Better than Apple

Unlike SSDs used in PCs, Apple’s storage modules are challenging to replicate due to their integration with the M1 and M2 chips, where the storage controller resides.

Polysoft’s efforts included detailed disassembly, component analysis, and redesign, culminating in the StudioDrive SSD which is set to launch next year following a successful Kickstarter campaign.

Polysoft claims its SSDs not only replicate Apple’s modules but also improve on them.

A key difference is the inclusion of "RIROP" (Rossmann Is Right Overvoltage Protection), a safeguard inspired by Louis Rossmann’s work on hardware reliability. This feature reportedly protects against voltage surges, reducing the risk of catastrophic data loss due to hardware failure.

The StudioDrive product line supports both M1 and M2 Mac Studio models. It includes blank boards for enthusiasts and pre-configured options in 2TB, 4TB, and 8TB capacities. Polysoft says that the modules use high-quality Kioxia and Hynix TLC NAND, offering performance and durability comparable to Apple's original storage solutions. The drives are backed by a five-year warranty and an endurance rating of up to 14,000 TBW.
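
Those endurance figures are easy to put in perspective. Assuming the 14,000 TBW rating applies to the 8TB model (the report doesn't say which capacity it covers), the drive would tolerate roughly one full write per day for the whole five-year warranty:

```python
tbw = 14_000          # rated endurance, terabytes written
warranty_years = 5
capacity_tb = 8       # assumption: the rating is for the 8TB model

tb_per_day = tbw / (warranty_years * 365)
drive_writes_per_day = tb_per_day / capacity_tb
print(f"{tb_per_day:.2f} TB/day, {drive_writes_per_day:.2f} drive writes/day")
# about 7.67 TB/day, just under 1 full drive write per day
```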

Pricing starts at €399 ($419) for 2TB, €799 ($839) for 4TB, and €1,099 ($1,155) for 8TB. While these upgrades will no doubt be viewed as an affordable, and welcome solution by many Mac Studio owners, users should be aware that installing third-party storage will void Apple’s warranty.

from TechRadar - All the latest technology news https://ift.tt/aKyTHjP

Sunday, December 15, 2024

Latest Tech News


  • Minisforum's customizable MS-A1 has AM5 socket for Ryzen CPUs
  • Compact design with up to 16TB storage, includes OCuLink port
  • Wi-Fi 6E, USB4, and advanced cooling for high performance

The Minisforum MS-A1 is the latest addition to the company's line of powerful mini PCs, and is the spiritual successor to the MS-01 model.

Unlike its predecessor, the MS-A1 introduces the option of swapping CPUs, using an AM5 socket to take various AMD Ryzen processors, including AMD's 7000 series, the 8000 series "Phoenix" parts (8700G/8600G), and potentially the AMD 9000 series following a BIOS update. For integrated graphics, it supports up to the AMD 8700G APU.

The Minisforum MS-A1 is available as a barebone system (without a CPU or OS) starting at $259 or as a pre-configured model. At the moment, there’s an offer to save $20, bringing the barebone price down to $239. You can add the Minisforum Deg1 OCuLink graphics docking station when purchasing the workstation for an additional $99, which allows the system to drive up to four 8K screens simultaneously.

Staying cool

The mini PC supports up to 16TB of storage via four SSDs using PCIe 4.0 M.2 slots. There are five USB Type-A ports, a USB4 port capable of 40Gbps, the OCuLink interface, and dual Ethernet RJ45 ports supporting up to 2.5Gbps each.

For display outputs, the device includes HDMI 2.1 and DisplayPort 2.0 connections, with the USB4 interface also supporting screen output. Without an eGPU, it can still drive three 8K displays. For wireless connectivity, the Minisforum MS-A1 offers Wi-Fi 6E and Bluetooth 5.2.

The mini PC's housing is compact and constructed from a mix of metal and plastic. The Cold Wave cooling system, featuring dual fans and quad heat pipes, prevents overheating even when under load.

With customizable CPU options and affordable eGPU support, the Minisforum MS-A1 offers a flexible mini PC solution that is ideal for users seeking a compact yet powerful workstation for content creation, multitasking, gaming, or general productivity.

from TechRadar - All the latest technology news https://ift.tt/iH7WC2M

Saturday, December 14, 2024

Latest Tech News

The GPD Pocket 4 is an 8.8-inch laptop weighing just 770g that is designed to combine portability with powerful performance.

GPD likens its aesthetic to that of an Apple MacBook, highlighting its sleek, lightweight build, which is small enough to carry like a mobile phone.

The Pocket 4 is powered by an AMD Ryzen AI 9 HX 370 processor with Radeon 890M/880M graphics (there's also the option of a Ryzen 7 8840U CPU with integrated Radeon graphics). It features a high-resolution 2.5K LTPS display with a 144Hz refresh rate and 10-point touch functionality. Its proprietary T-shaped hinge allows the screen to rotate up to 180 degrees, enabling it to be used as a tablet.

Choose your own ports

The Pocket 4 comes with up to 64GB of high-speed LPDDR5x memory and up to 2TB of PCIe Gen4 NVMe SSD storage. It sports a full-function USB-C port, USB4, HDMI 2.1, and an RJ45 network port. Wireless connectivity comes in the form of Wi-Fi 6E and Bluetooth 5.3. The device includes a 5MP front-facing camera, a QWERTY backlit keyboard, and a 45Wh battery supporting 100W PD fast charging.

Pricing for the GPD Pocket 4 starts at $829 for the model with the 8840U CPU, 16GB of RAM, and 1TB of storage.

The top-tier configuration with the HX 370 CPU, 64GB of RAM, and 2TB of storage is priced at $1,335. The Pocket 4 also supports a range of additional modules, allowing you to customize it to your needs. An RS232 port is available for $14, a single-port KVM for $48, and a 4G LTE expansion module for $110. There's also a microSD card reader with UHS-I support.

Earlier in 2024, GPD introduced the Duo, a $2,000 laptop featuring the world’s fastest mobile CPU, an OCuLink connector, and dual 13.3-inch OLED displays that are able to mirror, extend, or function independently.

That product marked a departure from GPD's usual lineup of compact gaming laptops and handheld consoles, but the company is returning to its roots with its latest creation.

The Pocket 4 is currently crowdfunding on Indiegogo, and while it offers an impressive array of features and modular options, potential backers, as always, should be aware of the risks associated with crowdfunding. Delays, changes to specifications, or project cancellations are possible, although GPD does have a proven track record of delivering backed products.

from TechRadar - All the latest technology news https://ift.tt/mXnclwz

Friday, December 13, 2024

Latest Tech News


  • Qualcomm talks 6G innovations beyond speed, integrating AI and IoT
  • 6G promises enhanced coverage and efficiency
  • AI-native design will optimize networks and enable new use cases

The transition from 5G to 6G is set to redefine the wireless landscape, offering advancements that go far beyond speed and connectivity.

Qualcomm, a key player in wireless innovation, is building on its 5G legacy to explore the possibilities of 6G, which is expected to integrate artificial intelligence, advanced IoT applications, and seamless connectivity between terrestrial and non-terrestrial networks.

Targeted for deployment in the 2030s, 6G promises to unlock new opportunities across industries and address the growing demands of an increasingly connected world.

In an exclusive interview with TechRadar Pro, John Smee, Global Head of Wireless Research at Qualcomm, discussed the future of 6G, outlining how the company is looking to build upon the advancements of 5G and 5G Advanced.

He also highlighted Qualcomm's role in contributing to the research and development of the technology, explaining that 6G will not only enhance key performance indicators like coverage, capacity, and efficiency but also enable transformative use cases such as digital twins and edge computing.

What are the key technological advancements in 5G that are paving the way for 6G development?

There are quite a few key advancements in 5G and 5G Advanced that are paving the way for 6G. Here are just a few examples:

  • Air interface foundation: we believe 6G will build on the OFDM foundation, with a focus on improving coverage, spectral efficiency, and capacity in both legacy FDD and TDD bands as well as new spectrum.
  • MIMO/duplex evolution: 6G Giga-MIMO will enable new upper midband spectrum (6-15 GHz) delivering additional wide-area capacity and reusing the existing 3.5 GHz macro cell sites and backhaul. Evolution to full duplex can deliver better coverage and flexibility to meet growing data demand.
  • Wireless AI: 5G Advanced kickstarted the era of AI in wireless, improving network/device performance and efficiency. AI will be an integral part of the 6G system design, with AI-native protocols across multiple layers of the stack.
  • Wireless sensing: the 5G-Advanced study of integrated sensing and communication (ISAC) can complement positioning to make the wireless network more efficient and open new business opportunities for the ecosystem.
  • Integrated TN/NTN: 5G introduced 5G non-terrestrial networking (NTN) by enabling satellites to deliver global coverage leveraging the cellular standard and modem implementations. 6G is expected to build on this foundation to support a seamless interworking of terrestrial networks and NTN.

How do you see the transition from 5G to 6G impacting businesses, and are there specific industries that will benefit the most?

The transition from 5G to 6G is expected to significantly enhance wireless connectivity, improving fundamental KPIs for coverage, capacity, and performance while enabling new services like AI, sensing, and digital twins. 6G will be designed to meet the increasing data transfer needs of connected AI-powered devices. Targeting 2030 deployment, 6G can efficiently enable intelligent computing everywhere, creating new opportunities for value creation at the edge. Industries such as healthcare, manufacturing, transportation, and education will continue their transformations to leverage connected AI and the enhanced capabilities of 6G.

Can you explain the role of AI, and specifically Generative AI, in enhancing 5G networks and its potential impact on 6G?

AI is poised to significantly enhance 5G and 6G system performance, operational efficiency, and user experiences, as well as unlock new use cases at scale. For instance, by leveraging AI for network optimization, predictive analytics, and automated configuration, these networks can achieve greater efficiency, reliability, and security. Generative AI can simulate various network scenarios and create synthetic data to train machine learning models, ensuring robust network performance even in complex environments. These technologies enable advanced applications like real-time edge computing, personalized services, and seamless integration with a wide range of devices. Generative AI will also often be implemented on the device and as applications expand this will increase the 5G and 6G communications data demand on uplink and downlink.

AI native is intended to make the system perform better by either replacing functional blocks with AI implementations, or allowing AI to better manage the protocol, network node, device, etc. so that it can adapt more flexibly to support a larger variety of enterprise and consumer experiences. The AI native paradigm can give more implementation flexibility and bring more innovation and differentiation to the devices and networks.

AI native can be in at least the two following forms:

  • Replacing existing functionality with AI – e.g., there are a number of use cases in 3GPP (beam management, CSI feedback, positioning, mobility) that are being studied to see if there is a better solution with AI. One aspect of AI native is to include more of these features across layers, protocols and network/device for improved performance with AI. Especially relevant is work in 3GPP and ORAN to improve network automation with AI and the associated use cases. Cross node AI is also a potential example of this where the function is replaced by AI at the network and device.
  • Enable AI as a part of the protocol behavior – to change the actual protocols to be defined to be AI friendly so that the protocol can adapt to the combination of radio, device and application state to determine how best to serve the traffic. This changes how the function operates to incorporate AI.

What are the expected benefits of 6G over 5G in terms of speed, latency, and connectivity?

6G will not just be designed to achieve higher speed and lower latency, but it will also focus on bringing significant efficiency enhancements to capacity, coverage, energy consumption, and deployment cost. Additionally, 6G will focus on enabling faster deployment of new services and growing the surface area of operator opportunities. The focus will be driven by use cases to create new value for the broader wireless ecosystem and society.

How will 6G technology influence the development of IoT and generative AI technologies?

6G will bring an integrated design for eMBB and IoT with shared objectives of enhanced connectivity, extended coverage, added functionalities such as positioning and sensing that allow the devices to interact more effectively with their environment, and add more use cases of IoT. Ambient IoT, which will operate without batteries using energy harvesting techniques, will help proliferate low cost IoT sensors and further integrate the physical and digital worlds. Networks and devices will support real-time AI processing and decision-making at the edge, creating value for IoT applications independent of centralized cloud systems.

How is Qualcomm contributing to the research and development of 6G technology?

Qualcomm has a storied heritage in wireless technology, including groundbreaking innovations in 5G technologies. We are building on a strong foundation to advance connectivity across all technologies including 5G Advanced, 6G, Wi-Fi, Bluetooth and more. We are leading the ecosystem in technology research and development, working closely with industry technology leaders such as mobile operators, OEMs, and academia to bring future innovations to life.

How do you envision the future of mobile communication evolving with the advent of 6G?

The future of mobile communication with the advent of 6G is envisioned as a continuum that builds upon the advancements of 5G, focusing on integrating AI into networks and devices. 6G aims to enhance the efficiency and economics of existing and new use cases in the 2030s, such as multi-device plans, fixed wireless services, AR glasses, self-driving cars and elderly-care service robots. The evolution will also involve integrated sensing and communication, enabling new solutions like digital twins and RF sensing. Additionally, 6G will leverage existing infrastructure to provide cost-effective upgrades in existing spectrum on uplink performance and edge data processing, as well as add significant capacity in new spectrum.

from TechRadar - All the latest technology news https://ift.tt/o2f87C0

Thursday, December 12, 2024

Latest Tech News


  • Valve's refurbished Steam Deck OLED is now more affordable than a new LCD model
  • It's now 20% cheaper than brand new OLED models
  • Stock is likely low in the US, already out of stock in the UK

Valve's Steam Deck OLED models have been competing with the likes of the Asus ROG Ally and the Lenovo Legion Go - and now, the competition grows with refurbished OLED models much cheaper than new options, and almost as cheap as the one remaining LCD model.

According to Tom's Hardware, refurbished Steam Deck OLED units are now 20% cheaper than the new 512GB and 1TB models, with the former priced at $439 and the latter at $519. This is via Valve's certified refurbished program, with devices that are fully tested and fully functional, along with the same one-year warranty you would get with a new model (longer in some regions).
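
Assuming the standard US launch prices for new OLED units, $549 for 512GB and $649 for 1TB (figures not given in the report), the refurbished prices do work out to almost exactly 20% off:

```python
msrp = {"512GB": 549, "1TB": 649}    # assumed new OLED prices (US)
refurb = {"512GB": 439, "1TB": 519}  # certified refurbished prices

for model in msrp:
    discount = 1 - refurb[model] / msrp[model]
    print(f"{model}: {discount:.1%} off")  # both come out at ~20%
```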

While the Steam Deck OLED isn't nearly as powerful as the ROG Ally or Lenovo Legion Go (both use the Z1 Extreme APU), it stands as a competent device across multiple games - it has a 90Hz display with added HDR support and up to 1,000 nits of peak brightness, compared to the LCD model's 600 nits.

While these refurbished options are available in the UK and the US, all models (including the LCD) are currently out of stock in the UK. If you're in the US, the only refurbished models available are the OLED 512GB and 1TB options - based on how quickly stock sold out in the UK, you might want to act before it's too late.

A woman playing Hollow Knight on a Steam Deck

(Image credit: Valve)

What does this mean for competition with other handheld gaming PCs?

As I've previously mentioned, the Steam Deck LCD and OLED don't really come close to providing the same level of performance as other more recent handheld gaming PCs - both the Asus ROG Ally and Lenovo Legion Go outperform the device at a higher 1080p resolution, and this is even more the case with the Ally X.

If you aren't too bothered about attaining high performance on a handheld, and you're more concerned about display quality, then the Steam Deck OLED or the Lenovo Legion Go are the two devices to consider. However, with this price drop of the refurbished OLED models, I would more than likely opt for Valve's popular gaming system over Lenovo's expensive Legion Go at MSRP.

There's no word on whether a successor to the Steam Deck is happening, but if it ever does arrive, I hope the processor used will be able to match, or at least come close to, what Asus, Lenovo, and MSI offer.

from TechRadar - All the latest technology news https://ift.tt/L0fxen8

Latest Tech News


  • Blueshift’s BlueFive RISC-V processor addresses Memory and Energy Walls
  • BlueFive claims faster calculations, lower energy use via data optimization
  • Validated design integrates memory controller, CPU for better efficiency

Blueshift Memory has introduced a new RISC-V processor reference design intended to tackle twin computing challenges: the Memory Wall, caused by slower memory access compared to processors, and the Energy Wall, driven by the rising energy costs of data movement.

The UK-based firm claims its BlueFive processor can deliver 5 to 50 times faster calculation speeds, depending on the application and programming language, while reducing energy consumption by 50% to 65% through minimized data movement.

Blueshift’s processor is built on an open-source RISC-V core from the OpenHW Group and integrates its proprietary Yonder smart cache and BlueBlaze intelligent memory controller. The company says this combination eliminates memory-to-CPU latency, accelerates calculations, and reduces energy usage.

Validated design

“The hardware was initially created under our successful Innovation UK Smart grant project, and it has since been refined as a reference design for a standalone processor,” said Peter Marosan, founder and CTO of Blueshift Memory. “We are creating the software environment for this CPU with TensorFlow, Redis and C/C++ libraries, which will also make it accessible for Python.”

Blueshift says that the design has been validated in FPGA using the STREAM benchmark and tested with real-world applications, including computer vision AI and the Redis in-memory database.
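
For context, STREAM measures sustained memory bandwidth with four simple vector kernels; the "triad" kernel usually provides the headline figure. A minimal Python rendition of the triad and its bandwidth arithmetic (the real benchmark is written in C/Fortran and uses far larger arrays):

```python
import time

def stream_triad(n=1_000_000, scalar=3.0):
    """STREAM triad: a[i] = b[i] + scalar * c[i].

    Bandwidth counts three arrays of 8-byte doubles moved per pass.
    Pure Python is far slower than the C original, but the kernel
    and the bandwidth formula are the same.
    """
    b = [1.0] * n
    c = [2.0] * n
    start = time.perf_counter()
    a = [bi + scalar * ci for bi, ci in zip(b, c)]
    elapsed = time.perf_counter() - start
    gb_per_s = 3 * 8 * n / elapsed / 1e9
    return a, gb_per_s

a, bw = stream_triad(100_000)
print(f"~{bw:.2f} GB/s")  # each element: 1.0 + 3.0 * 2.0 == 7.0
```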

The company says its non-Von Neumann architecture performs best when integrated into both memory and CPU.

“Our design is already validated on hardware, unlike other CPU solutions that aim to accelerate calculation, or offer only simulated results. It specifically addresses the Memory Wall - the fundamental problem that memory technology has fallen behind processor advances, and is holding back progress,” said Helen Duncan, CEO of Blueshift.

“We are already working with a commercial partner who will be a channel for our RISC-V solution. We are additionally making this reference design available for other customers to use, to create their own high-efficiency CPU designs.”

“We are collaborating with a manufacturer in SE Asia as well, to create a Blueshift Memory-enabled high bandwidth memory chip, and we will make a further announcement about this very soon,” Marosan added.

from TechRadar - All the latest technology news https://ift.tt/WnRtwOQ

Wednesday, December 11, 2024

Latest Tech News


  • Asustor Flashstor Gen 2 supports up to 12 PCIe 4.0 NVMe SSDs
  • Powered by AMD Ryzen CPU, expandable DDR5 memory to 64GB
  • Dual 10GbE ports, USB4, ideal for demanding storage tasks

Asustor, a subsidiary of Asus, has launched its second generation Flashstor NAS series, offering high-performance, SSD-focused storage.

The line is made up of the Flashstor 6 Gen 2 (AS6806X) and Flashstor 12 Pro Gen 2 (FS6812X), which support up to six and twelve M.2 NVMe SSDs, respectively, with compatibility for PCIe 4.0 x4 to deliver ultra-fast data transfer speeds.

Both models are powered by an AMD Quad-Core 6nm Ryzen Embedded V3C14 processor, an upgrade from the previous generation’s Intel Celeron N5105 CPUs (check out our review of the Flashstor 12 Pro FS6712X from 2023 here).

Not cheap

The Flashstor 12 Pro Gen 2 comes with 16GB of DDR5-4800 ECC memory, expandable up to 64GB, while the Flashstor 6 Gen 2 includes 8GB of memory, also expandable. The devices are well-suited for resource-intensive tasks such as 4K video editing and content creation.

The Flashstor 12 Pro Gen 2 offers dual 10-Gigabit Ethernet ports, whereas the Flashstor 6 Gen 2 is equipped with a single 10-Gigabit Ethernet port. Both models support SMB Multichannel, allowing for faster-than-standard 10GbE data transfers. The devices feature two USB 4.0 (Type-C) ports and three USB 3.2 Gen 2 (Type-A) ports, providing high-speed external connections and compatibility with Thunderbolt 3/4 devices.

Equipped with advanced cooling systems, both models use silent fans to maintain optimal performance under heavy workloads while keeping noise levels low.

The NAS supports a range of applications, including VPN servers, media servers, mail servers, and cloud backups. It also accommodates up to 4,096 users across 512 groups, so it’s a good choice for teams needing simultaneous file access.

The Flashstor Gen 2 series features a compact design reminiscent of a PlayStation 4, but it doesn't share its pricing with the beloved console. On Amazon, the Flashstor 6 Gen 2 is listed at $999, while the 12-bay Flashstor 12 Pro Gen 2 is priced at $1,399.

Getting the most from the 12-bay model and outfitting it with a dozen 8TB SSDs will ramp up the cost significantly, potentially exceeding $8,000. This high-end setup is clearly aimed at professionals and enthusiasts who require cutting-edge storage capabilities and are willing to pay for it.

from TechRadar - All the latest technology news https://ift.tt/CDmfbTX

Tuesday, December 10, 2024

Latest Tech News


  • 10th year of the awards sees a record 9,000+ entries
  • 45 shortlisted finalists on display in Gallery@Oxo in London, 11-15 December
  • Italian photographer Milko Marchetti scoops top award

The Nikon Comedy Wildlife Awards 2024 has unveiled this year's winners, selected from over 9,000 images of animals captured in a variety of entertaining situations and expressions – the most in the contest's 10-year history. Italian photographer Milko Marchetti scooped the overall award with his perfectly timed image of a red squirrel seemingly stuck in a tree (see below).

Milko wins a safari trip to the Maasai Mara game reserve in Kenya, while Nikon's Young photographer category winner, Kingston Tam, walks away with a Nikon Z8 mirrorless camera and 24-120mm zoom lens for their closeup image of a frog.

There were 45 shortlisted finalists in all, and TechRadar got to see the images at the awards evening in London. The exhibition runs from 11-15 December in the Gallery@Oxo in London, and entry is free.

If you're not in or visiting the UK, all of 2024's finalists can be seen on the Nikon Comedy Wildlife Awards website, and you'll find the category award winners below – try not to smile!

Overall winner – Milko Marchetti

Comedy Wildlife Awards 2024 category winner

Stuck squirrel, Milko Marchetti. (Image credit: © Milko Marchetti)

Milko's photograph of a red squirrel was taken in 2022 in the Podere Pantaleone park in Bagnacavallo, Ravenna, Italy. Milko uses a hide during the months that the park is closed to the public – his access is granted in exchange for his photos. Sightings of red squirrels are generally rare in Italy, but in the park they are more confident around people.

Milko says, “Nature photography has been my passion, ever since I was a boy, and I’ve always put all my free time and energy into it. I think that nature offers so much beauty and variety, and with a camera, the photographer has this ability, this superpower to freeze a moment and make it last forever in the form of a photograph. The emotion I experience at the moment when I click the camera button is pure adrenaline, and my hope is always to be able to convey at least one natural emotion through my photography. It seems it really worked this time!”

Category winners

Nikon young photographer category winner – Kingston Tam

Closeup of a frog (Cyclorana novaehollandiae)

Awkward smiley frog, Kingston Tam (Image credit: © Kingston Tam)

People's choice category winner – Tapani Linnanmäki

White-tailed eagle ruffling its feathers

Shake ruffle rattle and roll, Tapani Linnanmäki (Image credit: © Tapani Linnanmaki)

Insect category winner – Jose Miguel Gallego Molina

Mantis mediterranea (Iris oratoria) on the ground, front legs in the air

Mantis Flamenca, Jose Miguel Gallego Molina (Image credit: © Jose Miguel Gallego Molina)

Reptile category winner – Eberhard Ehmke

Frog floating on water with its head in a bubble

Frog in a balloon, Eberhard Ehmke (Image credit: © Eberhard Ehmke)

Bird category winner – Damyan Petkov

Whiskered Tern bird hits rock head on when trying to land

Whiskered Tern crash on landing, Damyan Petkov (Image credit: © Damyan Petkov)

Fish & other aquatic animal category winner – Przemyslaw Jakubczyk

Comedy Wildlife Awards 2024 category winner

Unexpected role swap, Przemyslaw Jakubczyk (Image credit: © Przemyslaw Jakubczyk)

Nikon junior photographer category winner – Sarthak Ranganadhan

Comedy Wildlife Awards 2024 category winner

Smooching owlets, Sarthak Ranganadhan (Image credit: © Sarthak Ranganadhan)

You might also like



from TechRadar - All the latest technology news https://ift.tt/jM1r2P4

Latest Tech News


  • Blumind debuts ultra-efficient analog AI chip, achieving 10 nJ/inference
  • Targeting wearables, healthcare, automotive, and always-on AI
  • Scaling for larger models, aiming for 1000 TOPS/W performance

Blumind, an analog AI chip startup, has showcased a chip designed for low-power applications achieving an impressive 10 nJ per inference, setting the stage for the company’s ambition to scale analog computing to new heights.
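To put 10 nJ per inference in perspective, here is a back-of-the-envelope sketch; the coin-cell capacity (CR2032, ~225 mAh at 3 V) is an assumption, and system-level overheads like leakage are ignored:

```python
# Back-of-the-envelope sketch: what 10 nJ per inference buys you.
# The coin-cell energy figure is an assumption (CR2032, ~225 mAh at 3 V)
# and ignores leakage and the rest of the system's power draw.
ENERGY_PER_INFERENCE_J = 10e-9      # 10 nJ, per Blumind's figure
CELL_ENERGY_J = 0.225 * 3600 * 3.0  # ~2430 J in one CR2032

inferences = CELL_ENERGY_J / ENERGY_PER_INFERENCE_J
print(f"{inferences:.1e} inferences per cell")  # on the order of 10^11
```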

The company showed off test silicon for its ultra-efficient keyword-spotting chip at Electronica 2024, where co-founder Niraj Mathur told EE Times, “What’s been particularly gratifying is that over the last year, there’s been more pull than us pushing."

"People have been coming to us specifically asking for analog AI solutions because they believe something new needs to happen.”

1000 TOPS/W is within reach

Blumind has already seen interest from wearable, automotive, and healthcare sectors. One of the examples the company gave was for a tire pressure monitoring system (TPMS) capable of analyzing road conditions.

The customer needed this to offer “extreme power efficiency because it’s sitting in the tire, it’s got to last the lifetime of the tire, you don’t want to open up the tire to change the battery,” Mathur explained. Another potential use involved detecting heart signals through a pacemaker sensor powered by energy harvested from muscle movement, requiring only a few hundred nanowatts of power.

The startup’s first product, an analog keyword spotting chip, is set for volume production in 2025. It will be available as both a standalone chip and a chiplet that integrates into microcontroller unit packages. “Chiplets are the other avenue of integration for our customers,” Mathur said in his interview with EE Times. This approach allows Blumind’s technology to complement fully programmable MCUs, focusing on always-on AI tasks.

Looking ahead, Blumind aims to scale its analog architecture for applications requiring much larger models, such as vision CNNs and eventually gigabit-sized small language models (SLMs). Mathur said the company’s goal of achieving 1000 TOPS/W is within reach, emphasizing the potential of analog-first, multi-die solutions.

Despite his company’s ambitious roadmap, Mathur stressed the importance of a pragmatic approach. “No-one has really brought analog compute to high volume production and delivered on its promise. We want to be the first to do that, but we want to walk before we try and run,” he said.

You might also like



from TechRadar - All the latest technology news https://ift.tt/negX8Kk

Monday, December 9, 2024

Latest Tech News


  • Biomemory's DNA-based solutions address data storage issues
  • DNA storage is compact, durable, environmentally friendly
  • $18M funding supports product development and industry partnerships

Biomemory, a French startup established in 2021, has long been working to develop DNA-based data storage technology.

It was the first company to make a DNA storage device available to the general public, marking an early step in commercializing this technology. Biomemory's approach involves encoding digital data within synthesized DNA strands by translating the DNA bases - A, C, G, and T - into binary code. Data can then be retrieved by sequencing the DNA and converting it back into binary.
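The base-to-binary translation described above can be sketched in a few lines. The specific A/C/G/T bit assignment here is an illustrative assumption – production DNA codecs layer on error correction and sequence constraints:

```python
# Minimal sketch of a 2-bits-per-base mapping between binary data and DNA.
# The A/C/G/T-to-bit assignment is an illustrative assumption; real DNA
# storage codecs use error-correcting, constraint-aware encodings.
BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}
BITS_TO_BASE = {v: k for k, v in BASE_TO_BITS.items()}

def encode(data: bytes) -> str:
    """Translate bytes into a DNA strand (4 bases per byte)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """'Sequence' the strand back into bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hi")
print(strand)                  # CAGACGGC
print(decode(strand))          # b'Hi'
```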

DNA storage is viewed as a potential solution to the growing global demand for storage, driven by increasing data generation. It is estimated that by 2025, humanity will produce 175 zettabytes of data, a figure that challenges the capacity and sustainability of existing storage methods. DNA’s compact and durable nature offers an alternative that could reduce spatial and environmental footprints while providing long-term stability.

Funding secured

A number of startups have entered the DNA storage space in recent years, including Catalog, Ansa Biotechnologies, and Iridia in the United States, as well as Helixworks, DNA Script, and BioSistemika in Europe. Biomemory is focusing on creating end-to-end solutions for data centers, using bio-sourced DNA fragments that are designed to last for thousands of years without requiring energy for maintenance.

To further its efforts, Biomemory recently secured $18 million in Series A funding.

“This investment marks a pivotal moment for Biomemory and the future of data storage,” said Erfane Arwani, CEO and Co-founder of the startup. “With our DNA storage technology, we’re not just addressing today’s data challenges - we’re building solutions that will sustain the ecosystem for the next century and beyond. By sharing this value with our partners and collaborators, we aim to collectively advance the sector and foster a thriving data storage ecosystem.”

Biomemory intends to use the funds to develop its first-generation data storage appliance, optimize biotech processes, and quicken commercialization. Additional goals include forming partnerships with industry players and cloud providers and recruiting experts in molecular biology and engineering.

The technology offers the potential to store all of humanity’s data in a single data center rack, and Biomemory plans to scale its molecular storage solutions to exabyte capacity by 2030, listing sustainability and durability as its key priorities.

You might also like



from TechRadar - All the latest technology news https://ift.tt/hTaFKbu

Sunday, December 8, 2024

Latest Tech News


  • Valve could be making a streaming box
  • Hints found in the Steam Deck code
  • No indications of a launch date yet

It might be five years since the Nvidia Shield last had a refresh – see our Nvidia Shield (2019) review for details – but the device remains one of the best streaming boxes in the business. We're now hearing it may get a new competitor, courtesy of Valve.

As per a Reddit thread analyzing changes to the Steam Deck code (via XDA Developers), it looks as though the software used on the portable console could soon be adapted to run on a streaming box connected to a television.

You'd then have a lightweight, versatile device that could both play games and stream video and audio to the big screen – much like the Nvidia Shield does. This is mostly speculation at this point, but we could definitely see it happening.

There are references in the code to an AMD 8540U processor, though this may only be referring to a prototype device, so the configuration could change. That would certainly offer more power than the current Steam Deck specs.

HDMI and Android

Steam Deck OLED in limited edition white color

The Valve Steam Deck (Image credit: Valve)

According to the tipster who spotted the code change, extra support for HDMI control is being added – and the changes match some of the code seen on ChromeOS devices, suggesting support for both Android and web apps.

Apart from that, there are no real details about what could be coming. We don't know anything in terms of dimensions or pricing, and there's no indication here about how long it's going to be before the product is announced (if it ever is).

Go all the way back to our Nvidia Shield (2015) review, and you'll see that it's always been an impressively versatile device. It's earned itself a relatively small but loyal group of users, though we haven't seen any signs that we'll ever get a new model.

What we did get a couple of months ago was the first software update for the Nvidia Shield in a year – though it was intended to squash some outstanding bugs on the streaming box, rather than add any new features.

You might also like



from TechRadar - All the latest technology news https://ift.tt/kRVvElg

Saturday, December 7, 2024

Latest Tech News


  • Nvidia’s Project Denver began as x86 but transitioned to Arm
  • Insider reports legal constraints drove Nvidia's pivot
  • The Arm-based Project Denver CPU debuted in 2011

During a technical session at the recent SC24 event, Dave Ditzel, founder of Esperanto Technologies, offered some fascinating insights into Nvidia’s early server processor efforts.

According to HPCwire, Ditzel, who was previously CEO of Transmeta, revealed that Nvidia’s first server CPU, Project Denver, initially started as an x86 CPU but transitioned to Arm due to legal constraints.

Ditzel says Nvidia’s shift to Arm was influenced by its licensing of Transmeta’s Tokamak technology, which could translate x86 code into a RISC instruction set.

Failed attempt to acquire Arm

As he explained, “Nvidia brought out a product called Denver. It was actually that same design. It originally started as an x86 [CPU], but through certain legal issues, had to turn itself into an Arm CPU.”

This decision, he said, laid the foundation for Nvidia’s alignment with Arm architecture. Tokamak, developed by Transmeta, was intended to be its third-generation x86 chip following the Crusoe and Efficeon processors. However, the project was never officially launched and was instead licensed to companies like Intel and Nvidia.

Intel, despite acquiring the design, did not announce a product based on it either. “You can guess as to all the reasons why or buy me a beer sometime,” Ditzel said.

Nvidia officially introduced Project Denver as an Arm-based CPU in 2011, later integrating it into its Tegra lineup. HPCwire reports that while there was initial enthusiasm around Arm servers, adoption was limited by challenges in the software ecosystem. Nvidia has since developed its Grace CPU and abandoned its attempt to acquire Arm after regulatory opposition.

Ditzel founded chip design firm Esperanto about seven years ago. Because of his previous bad experiences with x86 licensing, he opted for RISC-V, which was cheap and came with no legal concerns to get bogged down by.

“At least we have a playground where we can test some new things out, and some lawyer is not going to be ringing your bell,” Ditzel said.

More from TechRadar Pro



from TechRadar - All the latest technology news https://ift.tt/4lvjYra

Friday, December 6, 2024

Latest Tech News

Update: On December 6, 2024, the European Pirate Party reported that the European Council Committee stopped the proposal (yet again) as more governments joined the list of countries against it. We made some edits to the page to reflect this.

The EU proposal to scan all your private communications to halt the spread of child sexual abuse material (CSAM) is back on regulators' agenda – but keeps being rejected.

What critics have dubbed "Chat Control" has seen many twists and turns since the European Commission presented the first version of the draft bill in May 2022. The latest development came in October 2024, when a last-minute decision by the Netherlands to abstain from the vote prompted the Hungarian Council Presidency to remove the matter from the planned discussion.

Now, about two months later, the controversial proposal has returned to the topics the EU Council discussed on December 4, 2024. On Friday, December 6, however, the European Council Committee stopped the proposal (yet again) as more governments joined the list of countries against it.

What is the EU CSAM scan bill?

As mentioned, lawmakers have implemented some changes to the EU CSAM bill amid growing criticism from the privacy, tech, and political benches.

Initially, the plan was to require messaging services and email providers to scan all your messages for illegal material – even end-to-end encrypted ones, such as WhatsApp or Signal chats, where encryption exists precisely to keep communications private between sender and receiver.

Lawmakers suggested employing what's known as client-side scanning, a technique that experts, including some of the best VPN providers and messaging apps, have long warned against as it cannot be executed without breaking encryption protection. Even the UK halted this requirement under its Online Safety Act until "it's technically feasible to do so."

Fast-forward to June 2024: the second version of the EU proposal aimed to target shared photos, videos, and URLs rather than text and audio messages, subject to users' permission. There's a caveat, though – you must consent to the shared material being scanned before it is encrypted if you want to keep using the functionality.

This wording made privacy experts furious, with Meredith Whittaker, President of the Signal Foundation, labeling this so-called 'upload moderation' as a "rhetorical game" instead.

In September, another version was leaked by Politico. Communications providers would be free to decide whether or not to use artificial intelligence to flag images and text chats as suspicious. These companies, however, would be required by law to scan all user chats and report when they found illegal content.

According to the European Pirate Party's data before the December 6 meeting, a large majority of countries had already expressed support for the new proposal. Even nations like France, which was previously among the opposed governments, had joined the in-favor list.

At that time, only a few EU members remained either undecided (Italy, Portugal, and Finland) or against (Austria, Belgium, Czech Republic, Estonia, Germany, Luxemburg, Netherlands, Poland, and Slovenia).

A day before the voting (December 5), Patrick Breyer from the European Pirate Party reported that "unconfirmed rumors" claimed an unnamed critical government could join the countries backing the proposal on Friday.

This, however, didn't happen, as Finland voted against the proposal during the December 6 meeting, despite both Italy and Portugal reportedly moving from undecided to "in favor."



from TechRadar - All the latest technology news https://ift.tt/THEUyrX

Thursday, December 5, 2024

Latest Tech News

As 2025 approaches and 'New Year, new you' pressures build for those of us not keen on jogging or hitting the gym, you should know that there’s an awesome alternative to traditional fitness with Quell, one of my favorite workout apps of 2024. And today’s big announcement is that the gamified routines it offers are finally coming to Meta Quest headsets in January 2025.

For those of you who missed Quell at launch, it was originally a PC fitness platform. You’d use the controllers to punch your way through its fantasy adventure game Shardfall while wearing resistance bands that make it tougher than typical shadow boxing. The full-body HIIT sessions felt intense, yet Shardfall did an excellent job of keeping me motivated by contextualizing my actions – I wasn’t just doing another rep for the sake of it, I was throwing another punch because I needed to defeat the monster in my way.

VR players can soon try this experience using a Meta Quest headset – sans the resistance bands unless they choose to buy some. That’s because Shardfall is coming to Quest as Shardfall: FitQuest VR for $19.99 / £14.99 at Meta.com, and I’m pretty darn excited about it.

Shardfall will be at home in VR

A person working out in their living room using the Quell system, they're punching a virtual enemy

Quell is great, but it'll be right at home in VR (Image credit: Quell)

The Quell hardware system costs $199 / £189, while the software subscription that unlocks Shardfall (and future game releases) costs just $4.99 / £4.99 a month or $39.99 / £39.99 a year. That feels like a fair price for the kit compared to other fitness gear or a gym membership, but it’s still an added expense that people might not be willing to pay for a system that can only be used with Quell games.

As a $19.99 / £14.99 add-on to your existing VR headset, Shardfall in VR is a much easier sell, and I strongly recommend everyone give it a try at launch. The combat exercises are an engaging boxing sim that does well to incorporate a full-body routine with ducks and jumps to dodge certain attacks and hazards, and it also includes jogging sections between fights that get your heart rate up.

It'll be a shame to lose the resistance bands, but again that could aid Shardfall’s accessibility as its exercises will be a little less challenging – though I expect the full-body routines will still give you a decent workout without them.

When the VR version launches I’ll certainly be there to give it a go, and if you’re looking for a more interesting way to kickstart your 2025 fitness schedule I recommend you join me.

You might also like



from TechRadar - All the latest technology news https://ift.tt/tL4Bz5c

Latest Tech News


  • Azure Integrated HSM boosts security with cryptographic key protection
  • Reduces latency and scales better than network-attached HSMs
  • Keys stay isolated, ensuring tamper-resistant, in-use protection

Microsoft has introduced a new hardware security module designed to boost cloud security by enabling cryptographic key protection directly within server environments. 

Azure Integrated HSM addresses latency and scalability challenges often associated with traditional network-attached HSMs while adhering to FIPS 140-3 Level 3 security requirements.

The new hardware module provides locally attached cryptographic services for encryption, decryption, signing, and verification. Keys remain isolated from software, including guest and host systems, ensuring strong physical and logical tamper protection. Unlike traditional HSMs, which introduce network latency or require key release to local environments, Azure Integrated HSM securely retains keys within the module for continuous in-use protection.
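As a conceptual illustration of the sign/verify services such a module exposes – not Azure's actual API – here is a software-only sketch using a stdlib HMAC, with the key held in a plain variable purely for demonstration (in a real HSM the key never leaves the hardware):

```python
# Software-only sketch of HSM-style sign/verify services, using a stdlib
# HMAC. In a real HSM the key material stays inside the tamper-resistant
# module and callers only ever see signatures; here it is a plain variable
# purely for illustration.
import hashlib
import hmac

key = b"secret-key-material"   # in an HSM this would never leave the module
message = b"audit-log-entry"

signature = hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, signature: bytes) -> bool:
    """Recompute the MAC and compare in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

print(verify(key, message, signature))   # True
print(verify(key, b"tampered", signature))  # False
```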

Coming to all new Microsoft data center servers

"As part of our systems approach in optimizing every layer in our infrastructure, security is a key priority, and we are designing our infrastructure hardware with multiple layers of defense with dedicated innovations to ensure robust protection for Microsoft and for our customers," noted Mark Russinovich, Microsoft’s CTO for Azure.

The module is designed to integrate seamlessly with both confidential and general-purpose virtual machines and containers, providing dedicated, secure partitions for each workload. These partitions are hardware-isolated, allowing workloads to access keys only through controlled oracle functions. This design boosts security and reduces latency with node-integrated connections and cryptographic hardware accelerators.

Azure Integrated HSM will be installed in all new servers across Microsoft data centers starting next year, bolstering protection across Azure’s hardware fleet. This deployment is part of the Secure Future Initiative, which also includes the Adams Bridge quantum-resilient accelerator and the Caliptra 2.0 silicon root of trust.

“By integrating advanced hardware security features such as the silicon root of trust and secure control modules, we are providing the foundation for the trust and security that Azure delivers to our customers,” Russinovich said. “We are committed to continuously enhancing our cloud hardware security capabilities to meet the evolving needs of our customers.”

You might also like



from TechRadar - All the latest technology news https://ift.tt/SXnFyKT

Wednesday, December 4, 2024

Latest Tech News


  • SemiQon announces first CMOS transistor for cryogenic conditions
  • Engineered for extreme cold: operates efficiently at 1 Kelvin
  • Transistor reduces heat dissipation 1,000x, consumes 0.1% usual power

Heat is widely recognized as the enemy of sensitive electronic components, but ultra-low temperatures can also pose serious performance challenges.

Now, SemiQon, a Finland-based company focused on quantum computing hardware, has announced the development of what it describes as the first CMOS transistor fully optimized for cryogenic conditions.

The transistor is engineered to function effectively at temperatures as low as 1 Kelvin (-272.15°C or -457.87°F), just 1 degree above absolute zero, where most quantum computers operate. According to the company, this innovation addresses key challenges in scaling quantum computers while also being compatible with existing CMOS manufacturing processes, requiring no new infrastructure.

Space-borne applications

SemiQon says its transistor reduces heat dissipation by 1,000 times compared to conventional room-temperature transistors and consumes only 0.1% of the power. This allows control and readout electronics to be located inside a cryostat with the processors, eliminating heat dissipation problems that could disrupt the system. SemiQon believes this solution simplifies the growing complexity of managing quantum processors as their scale increases.

“It was clear to us and others in the scientific community, that a transistor which can operate efficiently at ultra-low temperatures would offer substantial value to users in the advanced computing sector and wherever these devices are required to function in cryogenic conditions,” said Himadri Majumdar, CEO and Co-Founder of SemiQon.

“Our company is just 2 years old, and already we’ve delivered something which the world has never seen before. Our cryo-CMOS transistor will provide considerable advantages to users both in terms of CapEx and OpEx, as well as by enhancing the functionality of their hardware. This could potentially accelerate the development of quantum technologies, or even enable a new era of cryogenic electronics.”

The transistor’s potential extends beyond quantum computing to high-performance computing and space-borne applications. SemiQon also highlights its impact on energy efficiency, noting that cooling costs for data centers are projected to grow significantly in the coming years.

SemiQon says it expects to deliver its first cryo-optimized CMOS transistors to customers in 2025. A short technical paper on the new transistor can be found on the arXiv pre-print server.

More from TechRadar Pro



from TechRadar - All the latest technology news https://ift.tt/CDGNqyK

Tuesday, December 3, 2024

Latest Tech News


  • Geyser Data's Tape-as-a-Service reaches general availability
  • Built with Spectra Logic, integrates seamlessly with AWS S3
  • No egress fees, secure air-gapped tapes, energy-efficient solution

Back in May 2024, cloud archive provider Geyser Data and data storage and management firm Spectra Logic introduced a new Tape-as-a-Service (TaaS) cloud offering combining the durability and cost-efficiency of traditional tape storage with the flexibility of cloud services.

This TaaS solution addresses the growing demand for secure, cost-effective data storage, supporting large volumes while minimizing environmental impact.

Geyser Data claims up to 97% lower CO2 emissions, 87% less power usage, and 85% less e-waste compared to other cloud services. It offers enhanced security with dedicated tapes and full control over encryption keys, along with the freedom of immediate access.

Subscription service

Following a well-received beta phase, the TaaS offering is now generally available.

Nelson Nahum, CEO of Geyser Data, outlined some of the service's key benefits, saying, "New workloads like AI require cold data to be warmer. One of our customers' biggest challenges today is the unpredictable and skyrocketing costs tied to data retrieval and egress fees in other cloud environments. Our service provides a simple and transparent pricing model that eliminates these burdens while giving businesses the storage capacity they need without investing in new hardware."

The 'enterprise-class' tape archiving solution operates on a subscription basis. It integrates with S3 APIs, allowing businesses to manage and store large volumes of data without the variable costs typical of traditional cloud providers, the need for specialized expertise, or reliance on complex on-premises infrastructure.

“By integrating Spectra Logic's Tape Archive Platform-as-a-Service (TAPAS) with Geyser Data's robust cloud software management platform, we've developed a solution that drives significant cost savings while also addressing critical power consumption challenges faced by data-intensive technologies such as AI and machine learning," noted Mitch Seigle, Chief Marketing Officer of Spectra Logic.

"As organizations grapple with the demands of rapidly expanding data volumes, tape storage provides unmatched security, longevity, sustainability, and operational efficiency - making it a pivotal element in modern data infrastructure strategies.”

Geyser Data offers a straightforward pricing model at $28 per tape per month, with each tape capable of storing up to 18TB of uncompressed data. This equates to an effective cost of $1.56 per terabyte. There are no restrictions on the amount of data customers can archive or back up, as the Spectra Cube library is designed to scale to meet demand.
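A quick check of that effective rate:

```python
# Verifying the effective per-terabyte figure quoted above.
PRICE_PER_TAPE_MONTH = 28.0  # USD, per Geyser Data's pricing
TAPE_CAPACITY_TB = 18        # uncompressed capacity per tape

per_tb = PRICE_PER_TAPE_MONTH / TAPE_CAPACITY_TB
print(f"${per_tb:.2f}/TB/month")  # $1.56/TB/month, matching the article
```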

You might also like



from TechRadar - All the latest technology news https://ift.tt/Gnu7d4F
