Friday, December 20, 2024

Latest Tech News


  • OpenAI announced upcoming o3 and o3-mini AI models.
  • The new models are enhanced "reasoning" AI models that build on the o1 and o1-mini models released this year.
  • Both models handily outperform existing AI models and will roll out in the next few months.

The final day of the 12 Days of OpenAI brought back OpenAI CEO Sam Altman to show off a brand new set of AI models coming in the new year. The o3 and o3-mini models are enhanced versions of the relatively new o1 and o1-mini models. They're designed to think before they speak, reasoning out their answers. The mini version is smaller and aimed more at carrying out a limited set of specific tasks, but with the same approach.

OpenAI is calling it a big step toward artificial general intelligence (AGI), which is a pretty bold claim for what is, in some ways, a mild improvement to an already powerful model. You might have noticed there's a number missing between the current o1 and the upcoming o3 model. According to Altman, that's because OpenAI wants to avoid any confusion with British telecom company O2.

So, what makes o3 special? Unlike regular AI models that spit out answers quickly, o3 takes a beat to reason things out. This “private chain of thought” lets the model fact-check itself before responding, which helps it avoid some of the classic AI pitfalls, like confidently spewing out wrong answers. This extra thinking time can make o3 slower, even if only a little bit, but the payoff is better accuracy, especially in areas like math, science, and coding.

One great aspect of the new models is that you can adjust that extra thinking time manually. If you’re in a hurry, you can set it to “low compute” for quick responses. But if you want top-notch reasoning, crank it up to “high compute” and give it a little more time to mull things over. In tests, o3 has easily outstripped its predecessor.

This is not quite AGI; o3 can't take over for humans in every way. It also does not reach OpenAI's definition of AGI, which describes models that outperform humans at most economically valuable work. Still, should OpenAI reach that goal, things get interesting for its partnership with Microsoft, since that would end OpenAI's obligation to give Microsoft exclusive access to its most advanced AI models.

New year, new models

Right now, o3 and its mini counterpart aren’t available to everyone. OpenAI is giving safety researchers a sneak peek via Copilot Labs, and the rest of us can expect the o3-mini model to drop in late January, with the full o3 following soon after. It’s a careful, measured rollout, which makes sense given the kind of power and complexity we’re talking about here.

Still, o3 gives us a glimpse of where things are headed: AI that doesn’t just generate content but actually thinks through problems. Whether it gets us to AGI or not, it’s clear that smarter, reasoning-driven AI is the next frontier. For now, we’ll just have to wait and see if o3 lives up to the hype or if this last gift from OpenAI is just a disguised lump of coal.

from Latest from TechRadar US in News,opinion https://ift.tt/AtjHbLM

Latest Tech News


  • Asus' latest monitor releases come with a kit to mount a mini PC at the back
  • There's also a groove to place your smartphone, plus an integrated USB hub
  • Sadly, it is not a 4K display, merely a Full HD+ one

Mini PCs are becoming increasingly powerful while offering a compact design and a wealth of ports, making them a versatile solution for users who need a capable setup but don’t necessarily have the workspace to dedicate to a traditional desktop PC.

Recognizing this trend, Asus has introduced two 24-inch monitors, the BE248CFN and BE248QF, which are designed to accommodate these miniature marvels. Each monitor includes a mounting kit to securely attach a mini PC at the back of the stand, positioned closer to the base for easier access.

The two monitors offer other practical features, including a groove at the base that you can use to stash a smartphone. There’s also an integrated USB hub for users managing multiple devices.

Not 4K, sadly

Asus BE248CFN screen with mini PC (Image credit: Asus)

Both models offer ergonomic adjustments to suit various viewing preferences. The stands tilt from -5 to 35 degrees, swivel 180 degrees left and right, pivot 90 degrees in either direction, and provide 130mm of height adjustment. The IPS panels deliver wide 178-degree viewing angles and 16.7 million colors, with a 5ms response time, 350cd/m² brightness, and a contrast ratio of 3,000:1.

Rather disappointingly, the display resolution of the two screens is Full HD+ (1,920 x 1,200) rather than 4K or higher, which may limit their appeal to those requiring greater detail or sharper visuals, such as content creators, or those who like to have a lot of windows open on screen at the same time.

Connectivity varies slightly between the two models. The BE248CFN includes HDMI 1.4, DisplayPort 1.4, USB Type-C with a 96W power delivery function, a four-port USB 3.2 Gen 1 hub, and Gigabit Ethernet. The BE248QF adds a mini D-Sub 15-pin connector, catering to users with legacy hardware.

Both monitors incorporate 2W stereo speakers and Asus Eye Care technologies, such as Flicker-Free and Low Blue Light, which should make them comfortable to use during extended work sessions.

There’s no word on pricing or global availability as yet, but they should be on sale soon, starting in Japan, before hopefully heading to other countries.

from Latest from TechRadar US in News,opinion https://ift.tt/AIZ1pNj

Thursday, December 19, 2024

I Set Up My Own ADT Home Security System. Here's How It Works

Commentary: I didn't need a technician to come to my home to set up ADT's smart security system. Here's what it includes and how I did my own DIY installation.

from CNET https://ift.tt/VB5lSek

Latest Tech News


  • Apple developing "Baltra" server chip for AI, targeting 2026 production
  • Israeli silicon team leading project; Mac chip canceled for focus
  • Broadcom collaboration and TSMC’s N3P tech to enhance development

Apple is reportedly developing its first server chip tailored specifically for artificial intelligence.

A paywalled report by Wayne Ma and Qianer Liu in The Information claims the project, codenamed “Baltra,” aims to address the growing computational demands of AI-driven features and is expected to enter mass production by 2026.

Apple’s silicon design team in Israel, which was responsible for designing the processors that replaced Intel chips in Macs in 2020, is now leading the development of the AI processor, according to sources. To support this effort, Apple has reportedly canceled the development of a high-performance Mac chip made up of four smaller chips stitched together.

Central to Apple’s efforts

The report notes this decision, made over the summer, is intended to free up engineers in Israel to focus on Baltra, signaling Apple’s shift in priorities toward AI hardware.

Apple is working with semiconductor giant Broadcom on this project, using the company’s advanced networking technologies needed for AI processing. While Apple usually designs its chips in-house, Broadcom’s role is expected to focus on networking solutions, marking a new direction in their partnership.

To make the AI chip, The Information says Apple plans to use TSMC’s advanced N3P process, an upgrade from the technology behind its latest processors, like the M4. This move highlights Apple’s focus on enhancing performance and efficiency in its chip designs.

The Baltra chip is expected to drive Apple’s efforts to integrate AI more deeply into its ecosystem. By leveraging Broadcom’s networking expertise and TSMC's advanced manufacturing techniques, Apple appears determined to catch up to rivals in the AI space and establish a stronger presence in the industry.

In November 2024, we reported that Apple approached its long-time manufacturing partner Foxconn to build AI servers in Taiwan. These servers, using Apple’s M-series chips, are intended to support Apple Intelligence features in iPhones, iPads, and MacBooks.

from TechRadar - All the latest technology news https://ift.tt/7KHpRVC

Wednesday, December 18, 2024

Sony’s WF-1000XM5 Wireless Earbuds Make a Great Gift at This Record-Low Price

The Sony WF-1000XM5 wireless earbuds offer superb sound quality and you can now snag them at Amazon for $198, their lowest price ever.

from CNET https://ift.tt/CFWAean

Latest Tech News


  • Huawei may be adding HBM support to Kunpeng SoC
  • Clues hint at a replacement for the Kunpeng 920, launched in 2019
  • New SoC with HBM may target HPC, server market rivals

Huawei engineers have reportedly released new Linux patches to enable driver support for High Bandwidth Memory (HBM) management on the company’s ARM-based Kunpeng high-performance SoC.

The Kunpeng 920, which debuted in January 2019 as the company’s first server CPU, is a 7nm processor featuring up to 64 cores based on the Armv8.2 architecture. It supports eight DDR4 memory channels and has a thermal design power (TDP) of up to 180W. While these specifications were competitive when first introduced, things have moved on significantly since.

Introducing a new Kunpeng SoC with integrated HBM would align with industry trends as companies seek to boost memory bandwidth and performance in response to increasingly demanding workloads. It could also signal Huawei’s efforts to maintain competitiveness in the HPC and server markets dominated by Intel Xeon and AMD EPYC.

No official announcement... yet

Phoronix’s Michael Larabel notes that Huawei has not yet formally announced a new Kunpeng SoC (with or without HBM), and references to it are sparse. Kernel patches, however, have previously indicated work on integrating HBM into the platform.

The latest patches specifically address power control for HBM devices on the Kunpeng SoC, introducing the ability to power on or off HBM caches depending on workload requirements.

The patch series includes detailed descriptions of this functionality. Huawei explains that HBM offers higher bandwidth but consumes more power. The proposed drivers will allow users to manage HBM power consumption, optimizing energy use for workloads that do not require high memory bandwidth.

The patches also introduce a driver for HBM cache, enabling user-space control over this feature. By using HBM as a cache, operating systems can leverage its bandwidth benefits without needing direct awareness of the cache’s presence. When workloads are less demanding, the cache can be powered down to save energy.
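
To give a sense of how a feature like this is usually driven from user space, here is a small, purely hypothetical Python sketch that toggles an HBM cache power state through a sysfs attribute. The path, file name, and values are invented for illustration; the real interface is whatever Huawei's driver actually exposes.

    from pathlib import Path

    # Hypothetical sysfs attribute: the actual path and file name are defined by
    # the driver in Huawei's patch series and will almost certainly differ.
    HBM_CACHE_POWER = Path("/sys/devices/platform/hbm_cache0/power_state")

    def set_hbm_cache_power(enabled: bool) -> None:
        # Writing "1" powers the HBM cache on, "0" powers it off (illustrative only).
        HBM_CACHE_POWER.write_text("1" if enabled else "0")

    # Power the cache down for a job that doesn't need high memory bandwidth,
    # then re-enable it afterwards.
    set_hbm_cache_power(False)
    # ... run the memory-light workload ...
    set_hbm_cache_power(True)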

While we don't have any concrete details on future Kunpeng SoCs, integrating HBM could potentially allow them to compete more effectively against other ARM-based server processors, as well as Intel’s latest Xeon and AMD EPYC offerings.

from TechRadar - All the latest technology news https://ift.tt/fexBqHY

Tuesday, December 17, 2024

Latest Tech News


  • Slim-Llama reduces power needs using binary/ternary quantization
  • Achieves 4.59x efficiency boost, consuming 4.69–82.07mW at scale
  • Supports 3B-parameter models with 489ms latency, enabling efficiency

Traditional large language models (LLMs) often suffer from excessive power demands due to frequent external memory access. Researchers at the Korea Advanced Institute of Science and Technology (KAIST) have now developed Slim-Llama, an ASIC designed to address this issue through clever quantization and data management.

Slim-Llama employs binary/ternary quantization, which reduces the precision of model weights to just 1 or 2 bits, significantly lowering the computational and memory requirements.

To further improve efficiency, it integrates a Sparsity-aware Look-up Table, improving sparse data handling and reducing unnecessary computations. The design also incorporates an output reuse scheme and index vector reordering, minimizing redundant operations and improving data flow efficiency.
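
For readers unfamiliar with the technique, the short Python sketch below shows what ternary weight quantization looks like in general terms. It is purely illustrative and says nothing about Slim-Llama's actual hardware implementation; the thresholding scheme and the threshold_ratio value are assumptions chosen for the example.

    import numpy as np

    def ternary_quantize(weights, threshold_ratio=0.7):
        # Zero out small weights and keep only the sign of the rest ({-1, 0, +1}).
        delta = threshold_ratio * np.mean(np.abs(weights))
        ternary = np.zeros_like(weights, dtype=np.int8)
        ternary[weights > delta] = 1
        ternary[weights < -delta] = -1
        # One shared scale lets scale * ternary approximate the original weights.
        nonzero = ternary != 0
        scale = float(np.mean(np.abs(weights[nonzero]))) if nonzero.any() else 0.0
        return ternary, scale

    # Quantize a small random weight matrix and report how sparse it became;
    # that sparsity is what a sparsity-aware look-up table can exploit.
    w = np.random.randn(4, 8).astype(np.float32)
    q, s = ternary_quantize(w)
    print(q, s, 1.0 - np.count_nonzero(q) / q.size)

Storing each weight in one or two bits with a single shared scale is what slashes both memory traffic and multiply cost, which is the effect the KAIST design exploits in silicon.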

Reduced dependency on external memory

According to the team, the technology demonstrates a 4.59x improvement in benchmark energy efficiency compared to previous state-of-the-art solutions.

Slim-Llama achieves system power consumption as low as 4.69mW at 25MHz and scales to 82.07mW at 200MHz, maintaining impressive energy efficiency even at higher frequencies. It is capable of delivering peak performance of up to 4.92 TOPS at 1.31 TOPS/W, further showcasing its efficiency.

The chip features a total die area of 20.25mm², utilizing Samsung’s 28nm CMOS technology. With 500KB of on-chip SRAM, Slim-Llama reduces dependency on external memory, significantly cutting energy costs associated with data movement. The system supports external bandwidth of 1.6GB/s at 200MHz, promising smooth data handling.

Slim-Llama supports models like Llama 1bit and Llama 1.5bit, with up to 3 billion parameters, and KAIST says it delivers benchmark performance that meets the demands of modern AI applications. With a latency of 489ms for the Llama 1bit model, Slim-Llama demonstrates both efficiency and performance, making it the first ASIC to run billion-parameter models with such low power consumption.

Although it's early days, this breakthrough in energy-efficient computing could potentially pave the way for more sustainable and accessible AI hardware solutions, catering to the growing demand for efficient LLM deployment. The KAIST team is set to reveal more about Slim-Llama at the 2025 IEEE International Solid-State Circuits Conference in San Francisco on Wednesday, February 19.

from TechRadar - All the latest technology news https://ift.tt/2UdjqZW

Monday, December 16, 2024

Meta's Ray-Bans Can Now Do Real-Time Live AI And Translation

Continuous AI assistance through always-on cameras, plus translation, are coming to Meta's ever-evolving AI-equipped glasses.

from CNET https://ift.tt/LX9z1sp

Latest Tech News


  • Polysoft offers SSD upgrades for Mac Studio at significantly lower prices
  • StudioDrive features overvoltage protection and durable components
  • Offered in 2TB, 4TB, and 8TB capacities, shipping next year

Apple introduced the Mac Studio in 2022 with M1 Max and M1 Ultra chips, followed by an M2-based model in 2023, and although these compact powerhouses have been lauded for their performance, buyers have rightly expressed concerns about the limited base SSD configurations and the absence of post-purchase upgrade options.

External USB-C or Thunderbolt SSDs are a common workaround for users seeking additional storage, but they don't match the speed and convenience of internal storage solutions.

Stepping in to address this gap, French company Polysoft has created the first publicly available SSD upgrade solution for Apple Silicon devices. Offered at a fraction of Apple’s prices, these SSD modules are the result of an extensive reverse-engineering process.

Better than Apple

Unlike SSDs used in PCs, Apple’s storage modules are challenging to replicate due to their integration with the M1 and M2 chips, where the storage controller resides.

Polysoft’s efforts included detailed disassembly, component analysis, and redesign, culminating in the StudioDrive SSD which is set to launch next year following a successful Kickstarter campaign.

Polysoft claims its SSDs not only replicate Apple’s modules but also improve on them.

A key difference is the inclusion of "RIROP" (Rossmann Is Right Overvoltage Protection), a safeguard inspired by Louis Rossmann’s work on hardware reliability. This feature reportedly protects against voltage surges, reducing the risk of catastrophic data loss due to hardware failure.

The StudioDrive product line supports both M1 and M2 Mac Studio models. It includes blank boards for enthusiasts and pre-configured options in 2TB, 4TB, and 8TB capacities. Polysoft says that the modules use high-quality Kioxia and Hynix TLC NANDs, offering performance and durability comparable to Apple’s original storage solutions. The drives are backed by a five-year warranty and have a lifespan of up to 14,000 TBW.
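
For context, a quick back-of-the-envelope calculation shows what a 14,000 TBW rating means in practice, assuming (our assumption, not Polysoft's figure) that the writes are spread evenly across the five-year warranty period:

    # Average daily writes that would exhaust a 14,000 TBW rating in five years.
    tbw_rating_tb = 14_000        # total terabytes written the drive is rated for
    warranty_days = 5 * 365       # five-year warranty period
    print(round(tbw_rating_tb / warranty_days, 1))  # roughly 7.7 TB of writes per day

In other words, even sustained content-creation workloads are unlikely to get anywhere near the endurance ceiling within the warranty window.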

Pricing starts at €399 ($419) for 2TB, €799 ($839) for 4TB, and €1,099 ($1,155) for 8TB. While these upgrades will no doubt be viewed as an affordable, and welcome solution by many Mac Studio owners, users should be aware that installing third-party storage will void Apple’s warranty.

from TechRadar - All the latest technology news https://ift.tt/aKyTHjP

Sunday, December 15, 2024

Getting New Glasses? This Is How to Pick the Best Glasses for Your Face Shape and Skin Color

Finding the perfect pair of glasses is a little easier when you follow these tips.

from CNET https://ift.tt/9r14IFS

Latest Tech News


  • Minisforum's customizable MS-A1 has AM5 socket for Ryzen CPUs
  • Compact design with up to 16TB storage, includes OCuLink port
  • Wi-Fi 6E, USB4, and advanced cooling for high performance

The Minisforum MS-A1 is the latest addition to the company's line of powerful mini PCs, and is the spiritual successor to the MS-01 model.

Unlike its predecessor, the MS-A1 introduces the option of swapping CPUs, using an AM5 socket that accepts various AMD Ryzen processors, including the 7000 series, the 8000-series "Phoenix" APUs (8700G/8600G), and potentially the 9000 series following a BIOS update. For integrated graphics, it supports up to the AMD Ryzen 7 8700G APU.

The Minisforum MS-A1 is available as a barebone system (without a CPU or OS) starting at $259 or as a pre-configured model. At the moment, there’s an offer to save $20, bringing the barebone price down to $239. You can add the Minisforum Deg1 OCuLink graphics docking station when purchasing the workstation for an additional $99, which allows the system to drive up to four 8K screens simultaneously.

Staying cool

The mini PC supports up to 16TB of storage via four SSDs using PCIe 4.0 M.2 slots. There are five USB Type-A ports, a USB4 port capable of 40Gbps, the OCuLink interface, and dual Ethernet RJ45 ports supporting up to 2.5Gbps each.

For display outputs, the device includes HDMI 2.1 and DisplayPort 2.0 connections, with the USB4 interface also supporting screen output. Without an eGPU, it can still drive three 8K displays. For wireless connectivity, the Minisforum MS-A1 offers WiFi 6E and Bluetooth 5.2.

The mini PC's housing is compact and constructed from a mix of metal and plastic. The Cold Wave cooling system, featuring dual fans and quad heat pipes, prevents overheating even when under load.

With customizable CPU options and affordable eGPU support, the Minisforum MS-A1 offers a flexible mini PC solution that is ideal for users seeking a compact yet powerful workstation for content creation, multitasking, gaming, or general productivity.

from TechRadar - All the latest technology news https://ift.tt/iH7WC2M

Saturday, December 14, 2024

Traeger Is Cooking Up Delicious Savings of Up to $300 in Time for the Holidays

Give the budding chef in your life the gift of a sturdy Traeger grill at substantial savings this holiday season.

from CNET https://ift.tt/Jv4jg2l

Latest Tech News

The GPD Pocket 4 is an 8.8-inch laptop weighing just 770g that is designed to combine portability with powerful performance.

GPD likens its aesthetic to that of an Apple MacBook, highlighting its sleek, lightweight build, which is small enough to carry like a mobile phone.

The Pocket 4 is powered by an AMD Ryzen AI 9 HX 370 processor with Radeon 890M/880M graphics (there’s also the option of a Ryzen 7 8840U CPU with Radeon 780M graphics). It features a high-resolution 2.5K LTPS display with a 144Hz refresh rate and 10-point touch functionality. Its proprietary T-shaped hinge allows the screen to rotate up to 180 degrees, enabling it to be used as a tablet.

Choose your own ports

The Pocket 4 comes with up to 64GB of high-speed LPDDR5x memory and up to 2TB of PCIe Gen4 NVMe SSD storage. It sports a full-function USB-C port, USB4, HDMI 2.1, and an RJ45 network port. Wireless connectivity comes in the form of Wi-Fi 6E and Bluetooth 5.3. The device includes a 5MP front-facing camera, a QWERTY backlit keyboard, and a 45Wh battery supporting 100W PD fast charging.

Pricing for the GPD Pocket 4 starts at $829 for the model with the 8840U CPU, 16GB of RAM, and 1TB of storage.

The top-tier configuration with the HX 370 CPU, 64GB of RAM, and 2TB of storage is priced at $1,335. The Pocket 4 also supports a range of additional modules, allowing you to customize it to your needs. An RS232 port is available for $14, a single-port KVM for $48, and a 4G LTE expansion module for $110. There's also a microSD card reader with UHS-I support.

Earlier in 2024, GPD introduced the Duo, a $2,000 laptop featuring the world’s fastest mobile CPU, an OCuLink connector, and dual 13.3-inch OLED displays that are able to mirror, extend, or function independently.

That product marked a departure from GPD's usual lineup of compact gaming laptops and handheld consoles, but the company is returning to its roots with its latest creation.

The Pocket 4 is currently crowdfunding on Indiegogo, and while it offers an impressive array of features and modular options, potential backers, as always, should be aware of the risks associated with crowdfunding. Delays, changes to specifications, or project cancellations are possible, although GPD does have a proven track record of delivering backed products.

from TechRadar - All the latest technology news https://ift.tt/mXnclwz

Friday, December 13, 2024

Best Deals on Earbuds and Headphones: Jam Out and Save $150 on Apple, Sony, Beats and More

Whether you're holiday shopping for a loved one or for yourself, audio gear is a great choice, especially with these hard-to-beat deals.

from CNET https://ift.tt/gQNU3EA

Latest Tech News


  • Qualcomm talks 6G innovations beyond speed, integrating AI and IoT
  • 6G promises enhanced coverage and efficiency
  • AI-native design will optimize networks and enable new use cases

The transition from 5G to 6G is set to redefine the wireless landscape, offering advancements that go far beyond speed and connectivity.

Qualcomm, a key player in wireless innovation, is building on its 5G legacy to explore the possibilities of 6G, which is expected to integrate artificial intelligence, advanced IoT applications, and seamless connectivity between terrestrial and non-terrestrial networks.

Targeted for deployment in the 2030s, 6G promises to unlock new opportunities across industries and address the growing demands of an increasingly connected world.

In an exclusive interview with TechRadar Pro, John Smee, Global Head of Wireless Research at Qualcomm, discussed the future of 6G, outlining how the company is looking to build upon the advancements of 5G and 5G Advanced.

He also highlighted Qualcomm's role in contributing to the research and development of the technology, explaining that 6G will not only enhance key performance indicators like coverage, capacity, and efficiency but also enable transformative use cases such as digital twins and edge computing.

What are the key technological advancements in 5G that are paving the way for 6G development?

There are quite a few key advancements in 5G and 5G Advanced that are paving the way for 6G. Here are just a few examples:

  • Air interface foundation: we believe 6G will build on the OFDM foundation, with a focus on improving coverage, spectral efficiency, and capacity in both legacy FDD and TDD bands as well as new spectrum.
  • MIMO/duplex evolution: 6G Giga-MIMO will enable new upper midband spectrum (6-15 GHz) delivering additional wide-area capacity and reusing the existing 3.5 GHz macro cell sites and backhaul. Evolution to full duplex can deliver better coverage and flexibility to meet growing data demand.
  • Wireless AI: 5G Advanced kickstarted the era of AI in wireless, improving network/device performance and efficiency. AI will be an integral part of the 6G system design, with AI-native protocols across multiple layers of the stack.
  • Wireless sensing: the 5G-Advanced study of integrated sensing and communication (ISAC) can complement positioning to make the wireless network more efficient and open new business opportunities for the ecosystem.
  • Integrated TN/NTN: 5G introduced non-terrestrial networking (NTN) by enabling satellites to deliver global coverage leveraging the cellular standard and modem implementations. 6G is expected to build on this foundation to support a seamless interworking of terrestrial networks and NTN.

How do you see the transition from 5G to 6G impacting businesses, and are there specific industries that will benefit the most?

The transition from 5G to 6G is expected to significantly enhance wireless connectivity, improving fundamental KPIs for coverage, capacity, and performance while enabling new services like AI, sensing, and digital twins. 6G will be designed to meet the increasing data transfer needs of connected AI-powered devices. Targeting 2030 deployment, 6G can efficiently enable intelligent computing everywhere, creating new opportunities for value creation at the edge. Industries such as healthcare, manufacturing, transportation, and education will continue their transformations to leverage connected AI and the enhanced capabilities of 6G.

Can you explain the role of AI, and specifically Generative AI, in enhancing 5G networks and its potential impact on 6G?

AI is poised to significantly enhance 5G and 6G system performance, operational efficiency, and user experiences, as well as unlock new use cases at scale. For instance, by leveraging AI for network optimization, predictive analytics, and automated configuration, these networks can achieve greater efficiency, reliability, and security. Generative AI can simulate various network scenarios and create synthetic data to train machine learning models, ensuring robust network performance even in complex environments. These technologies enable advanced applications like real-time edge computing, personalized services, and seamless integration with a wide range of devices. Generative AI will also often be implemented on the device, and as applications expand, this will increase the data demand placed on 5G and 6G networks on both the uplink and downlink.

AI native is intended to make the system perform better by either replacing functional blocks with AI implementations, or allowing AI to better manage the protocol, network node, device, etc. so that it can adapt more flexibly to support a larger variety of enterprise and consumer experiences. The AI native paradigm can give more implementation flexibility and bring more innovation and differentiation to the devices and networks.

AI native can take at least the following two forms:

  • Replacing existing functionality with AI – e.g., there are a number of use cases in 3GPP (beam management, CSI feedback, positioning, mobility) that are being studied to see if there is a better solution with AI. One aspect of AI native is to include more of these features across layers, protocols and network/device for improved performance with AI. Especially relevant is work in 3GPP and ORAN to improve network automation with AI and the associated use cases. Cross node AI is also a potential example of this where the function is replaced by AI at the network and device.
  • Enable AI as a part of the protocol behavior – to change the actual protocols to be defined to be AI friendly so that the protocol can adapt to the combination of radio, device and application state to determine how best to serve the traffic. This changes how the function operates to incorporate AI.

What are the expected benefits of 6G over 5G in terms of speed, latency, and connectivity?

6G will not just be designed to achieve higher speed and lower latency, but it will also focus on bringing significant efficiency enhancements to capacity, coverage, energy consumption, and deployment cost. Additionally, 6G will focus on enabling faster deployment of new services and growing the surface area of operator opportunities. The focus will be driven by use cases to create new value for the broader wireless ecosystem and society.

How will 6G technology influence the development of IoT and generative AI technologies?

6G will bring an integrated design for eMBB and IoT with shared objectives of enhanced connectivity, extended coverage, added functionalities such as positioning and sensing that allow the devices to interact more effectively with their environment, and add more use cases of IoT. Ambient IoT, which will operate without batteries using energy harvesting techniques, will help proliferate low cost IoT sensors and further integrate the physical and digital worlds. Networks and devices will support real-time AI processing and decision-making at the edge, creating value for IoT applications independent of centralized cloud systems.

How is Qualcomm contributing to the research and development of 6G technology?

Qualcomm has a storied heritage in wireless technology, including groundbreaking innovations in 5G technologies. We are building on a strong foundation to advance connectivity across all technologies including 5G Advanced, 6G, Wi-Fi, Bluetooth and more. We are leading the ecosystem in technology research and development, working closely with industry technology leaders such as mobile operators, OEMs, and academia to bring future innovations to life.

How do you envision the future of mobile communication evolving with the advent of 6G?

The future of mobile communication with the advent of 6G is envisioned as a continuum that builds upon the advancements of 5G, focusing on integrating AI into networks and devices. 6G aims to enhance the efficiency and economics of existing and new use cases in the 2030s, such as multi-device plans, fixed wireless services, AR glasses, self-driving cars and elderly-care service robots. The evolution will also involve integrated sensing and communication, enabling new solutions like digital twins and RF sensing. Additionally, 6G will leverage existing infrastructure to provide cost-effective upgrades in existing spectrum on uplink performance and edge data processing, as well as add significant capacity in new spectrum.

from TechRadar - All the latest technology news https://ift.tt/o2f87C0

Latest Tech News

A patch was released to fix the bugs, and users were forced to update their PIN codes.

from Latest from TechRadar https://ift.tt/WpfjTPR