Thursday, March 22, 2007

Raytheon Develops World's First Polymorphic Computer

EL SEGUNDO, Calif., March 20, 2007 -- Raytheon Company (NYSE: RTN) has developed the world's first computers whose architecture can adopt different forms depending on the application.

The architecture of the MONARCH processor with key elements identified

Dubbed MONARCH (Morphable Networked Micro-Architecture), the processor was developed to address the large data volumes of sensor systems as well as their signal and data processing throughput requirements. It is the most adaptable processor ever built for the Department of Defense, reducing the number of processor types required: it performs as a single system on a chip, significantly cutting the number of processors a computing system needs, and it can operate in an array of chips for teraflop throughput.

"Typically, a chip is optimally designed either for front-end signal processing or back-end control and data processing," explained Nick Uros, vice president for the Advanced Concepts and Technology group of Raytheon Space and Airborne Systems. "The MONARCH micro-architecture is unique in its ability to reconfigure itself to optimize processing on the fly. MONARCH provides exceptional compute capacity and highly flexible data bandwidth capability with beyond state-of-the-art power efficiency, and it's fully programmable."

In addition to its ability to adapt its architecture to a particular objective, the MONARCH computer is also believed to be the most power-efficient processor available.

"In laboratory testing MONARCH outperformed the Intel quad-core Xeon chip by a factor of 10," said Michael Vahey, the principal investigator for the company's MONARCH technology.

MONARCH's polymorphic capability and super efficiency enable the development of DoD systems that need very small size, low power, and in some cases radiation tolerance for such purposes as global positioning systems, airborne and space radar and video processing systems.

The company has begun tests on prototypes of the polymorphic MONARCH processors to verify they function as designed and to establish their maximum throughput and power efficiency. MONARCH, containing six microprocessors and a highly interconnected reconfigurable computing array, provides 64 gigaflops (billions of floating-point operations per second) with more than 60 gigabytes per second of memory bandwidth and more than 43 gigabytes per second of off-chip data bandwidth.
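To put those figures in perspective, here is a rough back-of-envelope sketch in Python. The 64 gigaflops, 60 GB/s and 43 GB/s numbers come from the announcement; the derived ratios and the assumption of ideal linear scaling across an array of chips are purely illustrative, not Raytheon specifications.

```python
# Back-of-envelope look at the MONARCH figures quoted above.
# The throughput and bandwidth numbers come from the article; the derived
# ratios and the linear-scaling assumption are illustrative only.

PEAK_GFLOPS = 64      # single-chip peak throughput (gigaflops)
MEM_BW_GBPS = 60      # memory bandwidth, GB/s
IO_BW_GBPS = 43       # off-chip data bandwidth, GB/s

# Arithmetic intensity the chip could sustain before bandwidth limits it
# (floating-point operations per byte moved).
flops_per_mem_byte = PEAK_GFLOPS / MEM_BW_GBPS
flops_per_io_byte = PEAK_GFLOPS / IO_BW_GBPS

# Rough chip count to reach one teraflop in an array, assuming ideal
# linear scaling (an assumption, not a Raytheon claim).
chips_for_teraflop = 1000 / PEAK_GFLOPS

print(f"~{flops_per_mem_byte:.2f} FLOPs per byte of memory bandwidth")
print(f"~{flops_per_io_byte:.2f} FLOPs per byte of off-chip bandwidth")
print(f"~{chips_for_teraflop:.0f} chips for 1 TFLOP at ideal scaling")
```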

The MONARCH processor was developed under a Defense Advanced Research Projects Agency (DARPA) polymorphous computing architecture contract from the U.S. Air Force Research Laboratory. Raytheon Space and Airborne Systems led an industry team with the Information Sciences Institute of the University of Southern California to create the integrated large-scale system on a chip, along with a suite of software development tools, for programs of high value to the Department of Defense and commercial applications. Besides USC, major subcontractors included the Georgia Institute of Technology, Mercury Computer Systems and IBM's Global Engineering Solutions division.

Raytheon Space and Airborne Systems is the leading provider of sensor systems giving military forces the most accurate and timely information available for the network-centric battlefield. With 2006 revenues of $4.3 billion and 12,000 employees, SAS is headquartered in El Segundo, Calif. Additional facilities are in Goleta, Calif.; Forest, Miss.; Dallas, McKinney and Plano, Texas; and several international locations.

Raytheon Company, with 2006 sales of $20.3 billion, is an industry leader in defense and government electronics, space, information technology, technical services, and business and special mission aircraft. With headquarters in Waltham, Mass., Raytheon employs 80,000 people worldwide.

(c) www.shoutwire.com

Tuesday, March 20, 2007

DRAM prices continue to plummet

Mark LaPedus

SAN JOSE, Calif. — Prices for DRAMs continue to plummet, as the tags for mainstream devices have fallen by a staggering 44 percent since the beginning of 2007, according to a report from Gartner Inc.

Average DRAM spot prices across all densities were down 6.5 percent for the seven-day period ended March 16, compared to the previous period, according to Gartner. Average spot prices stood at $3.67 on a 512-megabit basis for the period, down 39 percent since the beginning of 2007, according to the firm.

Prices for mainstream 512-Mbit DDR2-based chips are down 44 percent since the beginning of this year. ''Ample supply in the market and little fear of a tightening of supply gave the overall market a negative outlook,'' said Andrew Norwood, an analyst with Gartner.

At the beginning of February, the DRAM market crashed: average selling prices (ASPs) had already fallen by 30 percent since the start of the year, a decline that had originally been projected for all of 2007.
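A quick sketch of the arithmetic behind those figures: the $3.67 spot price and the percentage declines are Gartner's numbers as quoted above, while the back-calculated start-of-year price and the compounding estimate are only illustrative.

```python
import math

# Rough reconstruction of the price path implied by the figures above.
# The $3.67 spot price and percentage declines are Gartner's numbers as
# quoted; the back-calculated start-of-year price and the compounding
# estimate are illustrative only.

spot_512mbit = 3.67    # average spot price, 512-Mbit basis, week ended March 16
decline_spot = 0.39    # spot-price decline since the start of 2007

start_of_year = spot_512mbit / (1 - decline_spot)
print(f"Implied start-of-2007 spot price: ${start_of_year:.2f} per 512-Mbit chip")

# A 6.5 percent weekly decline compounds quickly: at that pace prices
# would halve in roughly ten weeks.
weeks_to_halve = math.log(0.5) / math.log(1 - 0.065)
print(f"Weeks to halve at -6.5% per week: {weeks_to_halve:.1f}")
```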

At the time, vendors insisted that the DRAM free-fall was temporary, claiming that a rebound was due in the second half of 2007, thanks in part to Microsoft's Vista operating system software.

In general, the memory market is lousy. The NAND flash market is also "brutal," according to Intel Corp. Some believe the ASPs on NAND chips will decline 65 percent this year.

(c) www.eetimes.com


Thursday, March 15, 2007

NVIDIA GeForce 8600-Series Details Unveiled

by Anh Huynh

NVIDIA prepares its next-generation mid-range and mainstream DirectX 10 GPUs

Earlier today DailyTech received its briefing on NVIDIA's upcoming GeForce 8600GTS, 8600GT and 8500GT graphics processors. NVIDIA's GeForce 8600GTS and 8600GT are G84-based GPUs that target the mid-range market. The lower-positioned, G86-based GeForce 8500GT serves as the flagship low- to mid-range graphics card.
The budget-priced trio features full support for DirectX 10, including pixel and vertex shader model 4.0, though NVIDIA has yet to reveal the number of shaders or the shader clocks. Nevertheless, all three support NVIDIA SLI and PureVideo technologies.


NVIDIA GeForce 8600GTS

NVIDIA GeForce 8600GT

NVIDIA touts three dedicated video engines on the G84- and G86-based graphics cards for PureVideo processing. The video engines provide MPEG-2 high-definition and WMV HD video playback at resolutions up to 1080p. G84 and G86 support hardware-accelerated decoding of H.264 video as well; however, NVIDIA makes no mention of VC-1 decoding. G84 and G86 also feature advanced post-processing video algorithms. Supported algorithms include spatial-temporal de-interlacing, inverse 2:2 and 3:2 pull-down, and 4-tap horizontal, 5-tap vertical video scaling.
At the top of the mid-range lineup is the GeForce 8600GTS. The G84-based graphics core clocks in at 675 MHz. NVIDIA pairs the GeForce 8600GTS with 256MB of GDDR3 memory clocked at 1000 MHz. The memory interfaces with the GPU via a 128-bit bus. The GeForce 8600GTS does not integrate HDCP keys on the GPU. Add-in board partners will have to purchase separate EEPROMs with HDCP keys; however, all GeForce 8600GTS-based graphics cards feature support for HDCP.
GeForce 8600GTS-based graphics cards require an eight-layer PCB. Physically, the cards measure 7.2 x 4.376 inches and are available in full-height form only. NVIDIA GeForce 8600GTS graphics cards feature a PCIe x16 interface, unlike ATI's upcoming RV630. GeForce 8600GTS-based cards still require external PCIe power. NVIDIA estimates total board power consumption at around 71 watts.
Supported video output connectors include dual dual-link DVI, VGA, SDTV and HDTV outputs, and analog video inputs. G84-based GPUs do not support a native HDMI output. Manufacturers can adapt one of the DVI-outputs for HDMI.
NVIDIA's GeForce 8600GT is not as performance-oriented as the 8600GTS. The GeForce 8600GT GPU clocks in at a more conservative 540 MHz. The memory configuration is more flexible, letting manufacturers choose between 256MB and 128MB of GDDR3 memory. NVIDIA specifies the memory clock at 700 MHz. The GeForce 8600GT shares the same 128-bit memory interface as the 8600GTS. HDCP support on the GeForce 8600GT is optional: the GPU and reference board design support the required EEPROM for HDCP keys, but the implementation is up to NVIDIA's add-in board partners.
GeForce 8600GT-based graphics cards require only a six-layer PCB instead of the eight-layer PCB of the 8600GTS. The board is also smaller, measuring 6.9 x 4.376 inches. GeForce 8600GT-based cards do not require external PCIe power. NVIDIA rates the maximum board power consumption at 43 watts, 28 watts less than the 8600GTS.
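For reference, the quoted memory clocks and the 128-bit bus imply the following theoretical peak memory bandwidth, assuming the figures given here are GDDR3 base clocks and that data transfers on both clock edges (standard double-data-rate behavior). This is a sketch derived from the article's numbers, not an NVIDIA specification.

```python
# Theoretical peak memory bandwidth implied by the clocks and 128-bit bus
# quoted above, assuming the quoted GDDR3 clocks are base clocks and data
# transfers on both clock edges (standard double data rate). A sketch
# derived from the article's numbers, not an NVIDIA specification.

def gddr3_bandwidth_gbps(base_clock_mhz: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s for a double-data-rate memory interface."""
    transfers_per_sec = base_clock_mhz * 1e6 * 2   # two transfers per clock
    bytes_per_transfer = bus_width_bits / 8
    return transfers_per_sec * bytes_per_transfer / 1e9

print(f"GeForce 8600GTS: {gddr3_bandwidth_gbps(1000, 128):.1f} GB/s")  # ~32.0
print(f"GeForce 8600GT:  {gddr3_bandwidth_gbps(700, 128):.1f} GB/s")   # ~22.4
```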
The GeForce 8600GT supports similar video outputs to the 8600GTS; however, it does not support video input features.
NVIDIA has revealed very little information on the GeForce 8500GT besides support for GDDR3 and DDR2 memory. It supports dual dual-link DVI, VGA and TV outputs as well.
Expect NVIDIA to pull the wraps off its GeForce 8600GTS, 8600GT and 8500GT next quarter in time to take on AMD’s upcoming RV630 and RV610.

(c)  www.dailytech.com
