Saturday, December 28, 2024

Latest Tech News


  • AWS is Netflix's only cloud computing platform
  • But AWS is also part of Amazon, which owns Amazon Prime Video, a huge rival to Netflix
  • Netflix engineers have been struggling to keep track of how many resources they use on AWS

Netflix, the world’s most popular streaming platform, may dominate home entertainment, but it’s struggling to manage one of its biggest operational challenges: cloud computing costs.

Despite its tech-forward image, Netflix has admitted it doesn’t fully know how much it spends on the cloud, an oversight made even more surprising given that its cloud provider, AWS, is part of Amazon - owner of Prime Video, one of Netflix’s largest competitors.

Relying on AWS for compute, storage, and networking, Netflix’s cloud infrastructure supports its global streaming service. Engineering teams use self-service tools to create and deploy applications, generating vast amounts of data. However, the complexity of this ecosystem makes it difficult for Netflix to understand exactly how resources are used and how costs accumulate.

Keeping its content flowing

The Platform Data Science Engineering (DSE) team at Netflix has taken on the task of untangling this problem. The team’s mission is to help the company’s engineers understand resource usage, efficiency, and associated costs.

Yet, as Netflix acknowledged in a recent blog post, its cloud cost management is still a work in progress.

To address the challenges it finds itself facing, Netflix has developed two tools: Foundational Platform Data (FPD) and Cloud Efficiency Analytics (CEA). FPD provides a centralized data layer with a standardized model, aggregating data from applications like Apache Spark. CEA builds on this by applying business logic to generate cost and ownership attribution, providing insights into efficiency and usage patterns.
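
Netflix has not published the internals of FPD or CEA, but the pattern it describes (normalize usage data, attach ownership metadata, then apply unit costs) is common enough to sketch. The short Python example below is purely illustrative; the table fields, rates, and team names are hypothetical and are not taken from Netflix's actual data model.

    # Illustrative sketch of usage-to-cost attribution; all names and rates
    # are hypothetical, not Netflix's actual FPD/CEA data model.
    from collections import defaultdict

    # What an FPD-like layer might expose: normalized usage records
    usage = [
        {"app": "recsys-batch", "resource": "cpu_core_hours", "amount": 1200.0},
        {"app": "recsys-batch", "resource": "gb_hours",       "amount": 4800.0},
        {"app": "playback-api", "resource": "cpu_core_hours", "amount": 300.0},
    ]

    # What a CEA-like layer adds: ownership and pricing business logic
    owners = {"recsys-batch": "personalization", "playback-api": "streaming-platform"}
    unit_cost = {"cpu_core_hours": 0.04, "gb_hours": 0.005}  # illustrative $ rates

    cost_by_team = defaultdict(float)
    for record in usage:
        team = owners.get(record["app"], "unattributed")
        cost_by_team[team] += record["amount"] * unit_cost[record["resource"]]

    for team, cost in sorted(cost_by_team.items()):
        print(f"{team}: ${cost:,.2f}")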

The hurdles are significant. Netflix’s sprawling infrastructure includes services with multiple owners, varying cost heuristics, and multi-tenant platforms that complicate tracking.

Data delays and platform-specific customizations add a further layer of complexity. Regular audits and data transformations are necessary to maintain accuracy, but the company admits it has yet to achieve full visibility into its cloud spending.

Looking ahead, Netflix says it plans to expand its tools and incorporate predictive analytics and machine learning to optimize usage and detect cost anomalies.

While the company works to refine its approach, its situation highlights a striking irony: the world’s most popular streaming platform relies on its rival’s technology to deliver its own service, yet it is still figuring out the true cost of keeping its content flowing.

from Latest from TechRadar US in News,opinion https://ift.tt/HxfgiUT

Friday, December 27, 2024

Best Hotel Mattresses in 2024

Did you know you can experience the luxury feeling of a hotel mattress at home? These are the best hotel mattresses to buy, tested by our experts.

from CNET https://ift.tt/tJ4ZnWy

Latest Tech News


  • Trillium has hit general availability just months after preview release
  • Powerful AI chip offers more than four times the training performance
  • Google uses it to train Gemini 2.0, the company's advanced AI model

Google has been developing Tensor Processing Units (TPUs), its custom AI accelerators, for over a decade. A few months after making its sixth-generation TPU, Trillium, available in preview, the company has announced that the chip has reached general availability and is now available for rent.

Trillium doubles both the HBM capacity and the Interchip Interconnect bandwidth of its predecessor, and was used to train Gemini 2.0, the tech giant’s flagship AI model.

Google reports it offers up to a 2.5x improvement in training performance per dollar compared to prior TPU generations, making it an appealing option for enterprises seeking efficient AI infrastructure.

Google Cloud’s AI Hypercomputer

Trillium delivers a range of other improvements over its predecessor, including more than four times the training performance. Energy efficiency has been increased by 67%, while peak compute performance per chip has risen by a factor of 4.7.

Trillium naturally improves inference performance as well. Google’s tests indicate over three times higher throughput for image generation models such as Stable Diffusion XL and nearly twice the throughput for large language models compared to earlier TPU generations.

The chip is also optimized for embedding-intensive models, with its third-generation SparseCore providing better performance for dynamic and data-dependent operations.

Trillium TPU also forms the foundation of Google Cloud’s AI Hypercomputer. This system features over 100,000 Trillium chips connected via a Jupiter network fabric delivering 13 Petabits/sec of bandwidth. It integrates optimized hardware, open software, and popular machine learning frameworks, including JAX, PyTorch, and TensorFlow.
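
Because the Hypercomputer exposes Trillium through standard frameworks, the same high-level code runs on whichever accelerator the runtime reports. The snippet below is a minimal JAX sketch (our illustration, not Google's code); on a Cloud TPU VM, jax.devices() would list TPU cores rather than CPU devices.

    # Minimal JAX sketch: the same jitted function runs on CPU, GPU, or TPU.
    # On a Cloud TPU VM, jax.devices() would report TPU cores.
    import jax

    print("Available devices:", jax.devices())

    @jax.jit  # XLA-compiles the function for whichever backend is present
    def layer(x, w):
        return jax.nn.relu(x @ w)

    key = jax.random.PRNGKey(0)
    x = jax.random.normal(key, (8, 512))
    w = jax.random.normal(key, (512, 256))
    print("Output shape:", layer(x, w).shape)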

With Trillium now generally available, Google Cloud customers have the opportunity to access the same hardware used to train Gemini 2.0, making high-performance AI infrastructure more accessible for a wide range of applications.

from Latest from TechRadar US in News,opinion https://ift.tt/6KJwYDz

Thursday, December 26, 2024

Latest Tech News


  • Project Infinity and Mobile Security Rewards Program bolster Samsung's security strategy
  • Red, Blue, and Purple teams safeguard Galaxy devices from cyber threats
  • CTI task force scours the Dark Web to prevent device breaches

Samsung has always prioritized security for its Galaxy smartphones, and with the launch of the Galaxy S24 series, it promised an unprecedented seven years of mobile security updates.

Behind this extended protection lies a secretive and highly specialized security initiative known as Project Infinity - but Samsung has now lifted the veil and provided some details about the project.

Project Infinity comprises multiple task forces which ensure that the billions of Galaxy smartphone users worldwide are protected from the ever-growing threat of cybercrime.

The invisible guardians of Galaxy devices

At the core of Project Infinity are three distinct teams, Red, Blue, and Purple, alongside a Cyber Threat Intelligence (CTI) task force. These groups operate globally in countries such as Vietnam, Poland, and Brazil, working in the shadows to prevent and mitigate cyberattacks.

Each team has a specific role, from proactive threat detection to creating and deploying defensive measures. Their work is largely invisible to the public, only surfacing when you receive a security patch on your device.

The CTI task force specializes in identifying potential cyber threats, ensuring that hackers can’t exploit vulnerabilities in Galaxy devices. The team scours the Deep Web and Dark Web, looking for signs of illicit activity, from malware to stolen data.

By analyzing system behaviors, such as unusual data requests or suspicious network traffic, the team can identify and neutralize threats, while collaborating with other departments to roll out security updates.

“Occasionally, we engage in security research by simulating real-world transactions,” noted Justin Choi, Vice President and Head of the Security Team, Mobile eXperience Business at Samsung Electronics.

“We closely monitor forums and marketplaces for mentions of zero-day or N-day exploits targeting Galaxy devices, as well as any leaked intelligence that could potentially serve as an entry point for system infiltration.”

Samsung’s security operation is modeled on military-style tactics, with the Red and Blue teams simulating attacks and defenses, respectively.

Through techniques like "fuzzing," which involves throwing random data at software, they can find hidden vulnerabilities that might otherwise go unnoticed. Meanwhile, the Blue team works tirelessly to develop and implement patches that protect against these vulnerabilities.
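
In its simplest form, fuzzing looks like the toy Python loop below. The parse_packet function is a hypothetical stand-in for whatever component is under test; real fuzzers, Samsung's included, are far more sophisticated, adding coverage guidance, structured mutation, and sanitizers.

    # Toy random fuzzer: throw random bytes at a target and record crashes.
    # parse_packet is a hypothetical, deliberately buggy stand-in for the
    # code under test.
    import os
    import random

    def parse_packet(data: bytes) -> int:
        length = data[3]              # assumes a 4-byte header: IndexError on short input
        return sum(data[4:4 + length])

    crashes = []
    for _ in range(10_000):
        blob = os.urandom(random.randint(0, 8))
        try:
            parse_packet(blob)
        except Exception as exc:      # in real fuzzing, a crash or sanitizer report
            crashes.append((blob, repr(exc)))

    print(f"{len(crashes)} crashing inputs found, e.g. {crashes[:2]}")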

The Purple team combines the expertise of both Red and Blue teams, focusing on critical areas of Galaxy’s security infrastructure. They also work with external security researchers to ensure no potential weak spot goes unnoticed.

from Latest from TechRadar US in News,opinion https://ift.tt/DweYpmX

Latest Tech News


  • HBM4 chips poised to power Tesla's advanced AI ambitions
  • Dojo supercomputer to integrate Tesla’s high-performance HBM4 chips
  • Samsung and SK Hynix compete for Tesla's AI memory chip orders

As the high-bandwidth memory (HBM) market continues to grow, projected to reach $33 billion by 2027, the competition between Samsung and SK Hynix intensifies.

Tesla is fanning the flames, as it has reportedly reached out to both Samsung and SK Hynix, two of South Korea's largest memory chipmakers, seeking samples of their next-generation HBM4 chips.

Now, a report from the Korean Economic Daily claims Tesla plans to evaluate these samples for potential integration into its custom-built Dojo supercomputer, a critical system designed to power the company’s AI ambitions, including its self-driving vehicle technology.

Tesla’s ambitious AI and HBM4 plans

The Dojo supercomputer, driven by Tesla’s proprietary D1 AI chip, helps train the neural networks required for its Full Self-Driving (FSD) feature. This latest request suggests that Tesla is gearing up to replace older HBM2e chips with the more advanced HBM4, which offers significant improvements in speed, power efficiency, and overall performance. The company is also expected to incorporate HBM4 chips into its AI data centers and future self-driving cars.

Samsung and SK Hynix, long-time rivals in the memory chip market, are both preparing prototypes of HBM4 chips for Tesla. These companies are also aggressively developing customized HBM4 solutions for major U.S. tech companies like Microsoft, Meta, and Google.

According to industry sources, SK Hynix remains the current leader in the high-bandwidth memory (HBM) market, supplying HBM3e chips to NVIDIA and holding a significant market share. However, Samsung is quickly closing the gap, forming partnerships with companies like Taiwan Semiconductor Manufacturing Company (TSMC) to produce key components for its HBM4 chips.

SK Hynix seems to have made progress with its HBM4 chip. The company claims that its solution delivers 1.4 times the bandwidth of HBM3e while consuming 30% less power. With a bandwidth expected to exceed 1.65 terabytes per second (TB/s) and reduced power consumption, the HBM4 chips offer the performance and efficiency needed to train massive AI models using Tesla’s Dojo supercomputer.
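
Those figures hang together arithmetically. Assuming an HBM3e stack with a 1,024-bit interface running at roughly 9.2 Gb/s per pin, about 1.18 TB/s (our assumption, not a number from the report), the quoted 1.4x uplift lands almost exactly on 1.65 TB/s:

    # Back-of-the-envelope check of the quoted HBM4 bandwidth.
    # Assumption (ours): HBM3e stack = 1024-bit interface at ~9.2 Gb/s per pin.
    hbm3e_pin_gbps = 9.2
    bus_width_bits = 1024
    hbm3e_tbps = hbm3e_pin_gbps * bus_width_bits / 8 / 1000   # total Gb/s -> TB/s
    hbm4_tbps = 1.4 * hbm3e_tbps                              # claimed 1.4x uplift
    print(f"HBM3e ~{hbm3e_tbps:.2f} TB/s -> HBM4 ~{hbm4_tbps:.2f} TB/s")
    # ~1.18 TB/s -> ~1.65 TB/s, in line with the figure quoted above.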

The new HBM4 chips are also expected to feature a logic die at the base of the chip stack, which functions as the control unit for memory dies. This logic die design allows for faster data processing and better energy efficiency, making HBM4 an ideal fit for Tesla’s AI-driven applications.

Both companies are expected to accelerate their HBM4 development timelines, with SK Hynix aiming to deliver the chips to customers in late 2025. Samsung, on the other hand, is pushing its production plans with its advanced 4-nanometer (nm) foundry process, which could help it secure a competitive edge in the global HBM market.

Via TrendForce

from Latest from TechRadar US in News,opinion https://ift.tt/sSFDUyt

Latest Tech News


  • Broadcom is rumored to have an ongoing partnership with Apple to help it build its own AI chip
  • TikTok parent company ByteDance and OpenAI are also reportedly in the picture
  • The move comes as hyperscalers look to reduce their dependency on AI chips from Nvidia

Nvidia has ridden the generative AI boom to record-breaking revenues and profits over the past two years, and while it remains well ahead of its competitors, the company is facing growing pressure - not only from rival AMD but also from hyperscalers which have traditionally relied on Nvidia GPUs but are now looking to reduce their dependence on its hardware.

As The Next Platform notes, “Nvidia’s biggest problem is that its biggest customers have massive enough IT expenditures that they can afford to compete with Nvidia and AMD and design their own XPUs for serial and parallel computing. And when they do so, it is chip design and manufacturing houses Broadcom and Marvell, who have vast expertise running chippery through the foundries of Taiwan Semiconductor Manufacturing Co, who will be benefiting.”

In its most recent earnings conference call, Hock Tan, President and CEO of Broadcom, told investors, “Specific hyperscalers have begun their respective journeys to develop their own custom AI accelerators or XPUs, as well as network these XPUs with open and scalable Ethernet connectivity. As you know, we currently have three hyper-scale customers who have developed their own multi-generational AI XPU roadmap to be deployed at varying rates over the next three years. In 2027, we believe each of them plans to deploy one million XPU clusters across a single fabric.”

Gaining its fair share

Without naming specific companies, Tan added, “To compound this, we have been selected by two additional hyperscalers and are in advanced development for their own next-generation AI XPUs.”

It is widely believed that Broadcom is working with Google and Meta, and as we previously reported, with ByteDance and OpenAI on custom AI chips.

Apple is also thought to be developing its first artificial intelligence server chip, codenamed “Baltra,” with Broadcom providing the advanced networking technologies essential for AI processing.

During the Q&A portion of the earnings call, when Tan was asked about market share, he responded, “All we are going to do is gain our fair share. We're just very well positioned today, having the best technology, very relevant in this space. We have, by far, one of the best combination technologies out there to do XPUs and to connect those XPUs. The silicon technology that enables it, we have it here in Broadcom by the boatloads, which is why we are very well positioned with these three customers of ours.”

from Latest from TechRadar US in News,opinion https://ift.tt/8N9PaFG

Wednesday, December 25, 2024

Best Outdoor Smart Plugs for 2024

Being outside doesn’t mean you can’t stay connected, thanks to our list of the best outdoor smart plugs you can buy in 2024.

from CNET https://ift.tt/LuNyUYz

Latest Tech News


  • Castrol planning fluid-as-a-service model launch to eliminate waste and increase sustainability
  • Immersion cooling has emerged as an essential component in the race to reach AGI
  • Castrol wants to play a key role in immersion cooling as integrated smart city data centers become mainstream

Founded in 1899, CC Wakefield & Co. Limited initially focused on producing lubricants for trains and heavy machinery. Over time, the company expanded its expertise to develop specialized lubricants for automobiles and airplane engines, incorporating castor oil - a plant-based oil derived from castor beans - to ensure performance under extreme temperature conditions. The product was called Castrol, and the company was later renamed after its famous creation.

125 years later, Castrol remains at the forefront of innovation, applying its extensive expertise in fluid engineering to address modern challenges.

One of its key focus areas is the development of advanced dielectric fluids for immersion cooling systems. This approach sees entire servers submerged in non-conductive fluids that absorb and transfer heat away from the components, eliminating the need for traditional fans.

Advanced thermal management

The Castrol ON Liquid Cooling Centre of Excellence in Pangbourne, UK, serves as a state-of-the-art research and development hub for liquid cooling technologies.

The facility develops customized solutions and rigorously tests fluid dynamics, material compatibility, and server performance, to address the challenges of traditional cooling methods.

In a recent visit, StorageReview had the opportunity to see Castrol’s cutting-edge immersion tanks from providers like GRC and Submer and was impressed by the adaptability and efficiency of the solutions.

Writer Jordan Ranous noted, “In one of the test cells, we observed GRC’s tank, which had a striking green glow due to the specific fluid Castrol was using. The servers submerged in this tank were undergoing compatibility and performance testing. Castrol ensures that every component, from CPUs to cables, can operate effectively in immersion cooling environments without degradation.”

Castrol’s ON range of single-phase dielectric fluids, including DC15 and DC20, aims to deliver advanced thermal management, durability, and safety while maintaining efficient performance at operating temperatures between 40°C and 50°C, with some systems capable of handling up to 70°C.

Chris Lockett, VP of Electrification and Castrol Product Innovation at BP, Castrol’s parent company, told StorageReview, “At the moment, about 40% of power consumption in data centers goes toward cooling. Immersion cooling can drop that figure to less than 5%, significantly lowering power and water usage.”
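
Taking those percentages at face value, and assuming the non-cooling load stays constant (our simplification, not Castrol's), the shift from 40% to 5% cooling overhead works out to roughly a one-third cut in total facility power:

    # Rough implication of the quoted 40% -> 5% cooling figures, assuming the
    # IT/other load stays constant (our assumption, not from the article).
    old_total = 100.0                      # arbitrary baseline units
    non_cooling = old_total * (1 - 0.40)   # 60 units of IT and other load
    new_total = non_cooling / (1 - 0.05)   # cooling is 5% of the new total
    saving = 1 - new_total / old_total
    print(f"New total: {new_total:.1f} units ({saving:.0%} lower facility power)")
    # ~63.2 units, i.e. roughly a 37% drop in overall power draw.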

Data centers account for an estimated 2–3% of global power consumption, with current liquid cooling efforts primarily focused on direct-to-chip solutions. Immersion cooling has the potential to establish a new standard for thermal management and Castrol wants to lead this transformation, positioning itself as “a one-stop partner for the liquid cooling solutions of today and tomorrow.”

from Latest from TechRadar US in News,opinion https://ift.tt/RNmptK2

Tuesday, December 24, 2024

Best Internet Providers in North Dakota

The Peace Garden State boasts fast local and national internet providers. Here are our top picks for North Dakota.

from CNET https://ift.tt/xsilODL

Latest Tech News


  • $250 GPU card is competitive with both the GeForce RTX 4060 and the Radeon RX 7600 on numerous benchmarks
  • However, both are set to be replaced by new models launching at CES 2025
  • Driver updates from Intel should push the B580’s performance even further

Over two years after its first discrete GPU release, Intel has launched the Arc B580 “Battlemage,” marking its second generation of dedicated graphics cards.

The B580, which will mostly be sold through add-in-board (AIB) partners like Maxsun, Sparkle, and ASRock, features Intel’s updated Xe2 architecture.

It offers efficiency improvements and second-generation Ray Tracing Units (RTUs) alongside enhanced XMX engines, Intel’s counterpart to Nvidia’s Tensor cores.

Unfortunate timing

Puget Systems recently put the $250 GPU card through its paces and found it competes effectively with Nvidia’s GeForce RTX 4060 and AMD’s Radeon RX 7600 across a range of benchmarks. With 12GB of VRAM, the B580 certainly stands out in the budget category, surpassing the RTX 4060’s 8GB at a lower price point.

This additional memory gives it an edge in workflows demanding higher VRAM capacity, such as GPU effects in Premiere Pro and Unreal Engine, but performance in creative applications delivered mixed, and surprising, results.

In graphics-heavy tasks like GPU effects for DaVinci Resolve, Adobe After Effects, and Unreal Engine, the B580 impressed, often matching or exceeding more expensive GPUs. Puget Systems noted the B580 matched the RTX 4060 across resolutions in Unreal Engine while benefiting from its superior VRAM capacity.

Unfortunately, inconsistencies in media acceleration held it back in other areas. In Premiere Pro, for example, Intel’s hardware acceleration for HEVC codecs lagged behind expectations, with Puget Systems observing slower results compared to software-based processing. These issues appear to be driver-related, something Intel is likely to address in upcoming updates.

Shortly after it launched in 2022, Puget Systems tested the Arc A750 (8GB and 16GB models) and came away disappointed. The B580 shows clear improvements over its predecessor, and Intel’s continued driver development will no doubt extend the performance of the B580 even further. Intel's release timing is unfortunate, however.

While the B580 is a strong contender in the entry-level segment right now, Nvidia and AMD are expected to reveal replacements for the GeForce RTX 4060 and the Radeon RX 7600 at CES 2025, and those new models are likely to significantly diminish the appeal, and competitiveness, of Intel's new GPU.

from Latest from TechRadar US in News,opinion https://ift.tt/6WrITmd

Monday, December 23, 2024

A Week Left to Spend Your 2024 FSA Money: How It Works and What You Can Buy

If you don't use your Flexible Spending Account funds, you could lose them at the end of the year.

from CNET https://ift.tt/U6F10uA

Latest Tech News


  • HBM is fundamental to the AI revolution as it allows ultra-fast data transfer close to the GPU
  • Scaling HBM performance is difficult if it sticks to JEDEC protocols
  • Marvell and others want to develop a custom HBM architecture to accelerate its development

Marvell Technology has unveiled a custom HBM compute architecture designed to increase the efficiency and performance of XPUs, a key component in the rapidly evolving cloud infrastructure landscape.

The new architecture, developed in collaboration with memory giants Micron, Samsung, and SK Hynix, aims to address limitations in traditional memory integration by offering tailored solutions for next-generation data center needs.

The architecture focuses on improving how XPUs - used in advanced AI and cloud computing systems - handle memory. By optimizing the interfaces between AI compute silicon dies and High Bandwidth Memory stacks, Marvell claims the technology reduces power consumption by up to 70% compared to standard HBM implementations.

Moving away from JEDEC

Additionally, its redesign reportedly decreases silicon real estate requirements by as much as 25%, allowing cloud operators to expand compute capacity or include more memory. This could potentially allow XPUs to support up to 33% more HBM stacks, massively boosting memory density.
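
The 25% and 33% figures are consistent with simple area math: if stack count scales with the interface area freed up (our simplification, not a method Marvell has described), shrinking that area by a quarter leaves room for about a third more stacks:

    # Sanity check: a 25% cut in interface silicon area is consistent with
    # "up to 33% more HBM stacks" if stack count scales with the freed area
    # (our simplification, not Marvell's stated methodology).
    area_saving = 0.25
    extra_stacks = 1 / (1 - area_saving) - 1
    print(f"~{extra_stacks:.0%} more stacks in the same footprint")  # ~33%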

“The leading cloud data center operators have scaled with custom infrastructure. Enhancing XPUs by tailoring HBM for specific performance, power, and total cost of ownership is the latest step in a new paradigm in the way AI accelerators are designed and delivered,” Will Chu, Senior Vice President and General Manager of the Custom, Compute and Storage Group at Marvell said.

“We’re very grateful to work with leading memory designers to accelerate this revolution and help cloud data center operators continue to scale their XPUs and infrastructure for the AI era.”

HBM plays a central role in XPUs, which use advanced packaging technology to integrate memory and processing power. Traditional architectures, however, limit scalability and energy efficiency.

Marvell’s new approach modifies the HBM stack itself and its integration, aiming to deliver better performance for less power and lower costs - key considerations for hyperscalers who are continually seeking to manage rising energy demands in data centers.

ServeTheHome’s Patrick Kennedy, who reported the news live from Marvell Analyst Day 2024, noted that the cHBM (custom HBM) is not a JEDEC solution and so will not be standard, off-the-shelf HBM.

“Moving memory away from JEDEC standards and into customization for hyperscalers is a monumental move in the industry,” he writes. “This shows Marvell has some big hyperscale XPU wins since this type of customization in the memory space does not happen for small orders.”

The collaboration with leading memory makers reflects a broader trend in the industry toward highly customized hardware.

“Increased memory capacity and bandwidth will help cloud operators efficiently scale their infrastructure for the AI era,” said Raj Narasimhan, senior vice president and general manager of Micron’s Compute and Networking Business Unit.

“Strategic collaborations focused on power efficiency, such as the one we have with Marvell, will build on Micron’s industry-leading HBM power specs, and provide hyperscalers with a robust platform to deliver the capabilities and optimal performance required to scale AI.”

from Latest from TechRadar US in News,opinion https://ift.tt/9UONTXo

Sunday, December 22, 2024

Best Internet Providers in Las Vegas, Nevada

Las Vegas has a decent variety of options for good internet. This list will help you make your choice based on speed, value and availability.

from CNET https://ift.tt/DVQl7xY

Latest Tech News


  • First look at Dell Pro Max 18 Plus emerges in new images
  • Pictures show a completely redesigned mobile workstation laptop
  • Pro Max could either replace popular Precision range or be a whole new range, offering up to 256GB RAM and up to 16TB SSD

Leaked details suggest Dell is developing a new addition to its workstation offerings, designed to deliver high-performance capabilities for professional workloads.

Available in two sizes, the Dell Pro Max 18 Plus is expected to debut officially at CES 2025 and could either replace the popular Precision range or form an entirely new lineup.

The device allegedly features an 18-inch display, while the Pro Max 16 Plus provides a smaller 16-inch alternative with similar specifications. According to information shared by Song1118 on Weibo, which includes Dell marketing slides, the laptops will be powered by Intel’s upcoming Core Ultra 200HX “Arrow Lake-HX” CPUs. For graphics, the series will reportedly feature Nvidia’s Ada-based RTX 5000-class workstation GPUs, though the exact model isn’t named in the leaked documents.

Triple-fan cooling system

The Pro Max series is set to offer up to 200 watts for the CPU/GPU combination in the 18-inch version and 170 watts in the 16-inch model. VideoCardz notes that while we have already seen much higher targets in ultra-high-end gaming machines, “this would be the first laptop confirmed to offer 200W for a next-gen Intel/Nvidia combo.”

The laptops will reportedly support up to 256GB of CAMM2 memory. The 18-inch model can accommodate up to 16TB of storage via four M.2 2280 SSD slots, while the 16-inch version supports 12TB with three slots. The heat generated by these high-power components will be managed by an “industry first” triple-fan cooling system.

Additional features look to include a magnesium alloy body to reduce weight, an 8MP camera, and a tandem OLED display option. Connectivity options include Thunderbolt 5 (80/120Gbps), WiFi 7, Bluetooth 5.4, and optional 5G WWAN. The two laptops also feature a quick-access bottom cover for easy serviceability and repairability of key components like batteries, memory, and storage.

The Dell Pro Max 16/18 Plus laptops are expected to be officially unveiled along with pricing at CES on January 7, 2025, with a mid-2025 release window.

from Latest from TechRadar US in News,opinion https://ift.tt/IsbKH3r

Saturday, December 21, 2024

Latest Tech News

  • Shuttle XH610G2 offers compact design supporting Intel Core processors up to 24 cores
  • Exclusive heat pipe technology ensures reliable operation in demanding environments
  • Flexible storage options include M.2 slots and SATA interfaces

Shuttle has released its latest mini PC, aimed at meeting the diverse demands of modern commercial tasks.

With a small 5-liter chassis and a compact design measuring just 250mm x 200mm x 95mm, the Shuttle XH610G2 employs the Intel H610 chipset, making it compatible with a broad spectrum of Intel Core processors, from the latest 14th Gen models back to the 12th Gen series.

The company says the device is designed to handle applications that require significant computational power like image recognition, 3D video creation, and AI data processing.

Shuttle XH610G2

The Shuttle XH610G2 comes with exclusive heat pipe cooling technology that allows the workstation to operate reliably even in demanding environments; it can withstand temperatures from 0 to 50 degrees Celsius, making it suitable for continuous operation in various commercial settings.

The Shuttle XH610G2 can accommodate Intel Core models with up to 24 cores and a peak clock speed of 5.8GHz. This processing power allows the workstation to handle intensive tasks while staying within a 65W thermal design power (TDP) limit. The graphics are enhanced by the integrated Intel UHD graphics with Xe architecture, offering capabilities to manage demanding visual applications, from high-quality media playback to 4K triple-display setups. The inclusion of dual HDMI 2.0b ports and a DisplayPort output facilitates independent 4K display support.

The XH610G2 offers extensive customization and scalability with support for dual PCIe slots, one x16 and one x1, allowing users to install discrete graphics cards or other high-performance components like video capture cards.

For memory, the XH610G2 supports up to 64GB of DDR5-5600 SO-DIMM memory split across two slots, making it ideal for resource-intensive applications and giving the system the power needed to handle complex computational tasks efficiently. Running at a low 1.1V, this memory configuration also minimizes energy consumption, which can be a significant advantage in environments conscious of power usage.

In terms of storage, this device features a SATA 6.0Gb/s interface for a 2.5-inch SSD or HDD, along with two M.2 slots for NVMe and SATA storage options. Users are recommended to choose a SATA SSD over a traditional HDD to ensure faster performance.

The I/O options on the XH610G2 further enhance its flexibility, with four USB 3.2 Gen 1 ports, two Ethernet ports, one supporting 1GbE and another 2.5GbE, and an optional RS232 COM port offering enhanced compatibility for specialized peripheral connections, which can be particularly useful in industrial or legacy environments.

Furthermore, the compact chassis includes M.2 expansion slots for both WLAN and LTE adapters, providing options for wireless connectivity that can be critical in setups where wired connections are not feasible.

from Latest from TechRadar US in News,opinion https://ift.tt/YuyqwPS

Heat Domes and Surging Grid Demand Threaten US Power Grids with Blackouts

A new report shows a sharp increase in peak electricity demand, leading to blackout concerns in multiple states. Here's how experts say ...