Visibility Enhancements – Novas

Introduction

I have written several articles about verification. This is clearly a very broad topic. An important niche within verification is debug. The leader in this niche is Novas. On March 6th Novas Software, Inc. announced its new family of Siloti Visibility Enhancement (VE) products to address the growing, costly problem of decreasing visibility into the functional operation of complex ICs during late-stage verification and system validation. The Siloti products break through the barriers of limited signal observability in near- and post-silicon applications with new patent-pending visibility enhancement technology that improves simulation, emulation, first-silicon prototype and silicon debug methodologies. I had an opportunity to discuss this with Scott Sandler, President and CEO of Novas, before the press announcement.

Would you give me a brief biography?
I went to the University of Massachusetts in Amherst, where I grew up. In '83 I made the big leap to the West Coast, working at Intel in Oregon as a verification engineer. I only did that for 2½ years because I got recruited by this cool little company called Gateway Design Automation. I was the first application engineer for the Verilog language and simulator, which was probably the best thing I could have done in my career. A terrific opportunity and really a fun time! And of course we changed the world. After Cadence acquired Gateway I stayed at Cadence for 2 years and then tried, as many of us do, leaving the industry for over a year. I went into consulting for a while. One of my clients was Chrysalis Symbolic Design. I ended up working in formal verification at Chrysalis for 5 years. One of those years I spent in Japan. I left shortly after the Avant! merger to come to Novas. I've been here ever since. That was in the fourth quarter of '99.

In the interest of disclosure, I went to graduate school at UMass Amherst more than a few years before you did. I will spare the readers from our walk down memory lane.

Tell me a little bit about Novas.
Novas is focused on the engineer's capability to comprehend complex designs. We think of ourselves as accelerating engineers, really focused on the human part. You know, it is funny: in the design process, especially in EDA, there is a lot of focus on tools that run real fast. They are kind of always in the background, but what are the engineers really doing? We're focused on the part where the engineers are actually sitting in front of the tube trying to figure things out. That makes us a little different. Of course, our primary product offering for engineers has been debug. That's a pretty broad term, it turns out. It is often associated with waveforms, but we really busted that open and caused it to be considered much more broadly, such that it is all the different things you do to try to figure out how a design works and why it doesn't. Debug is often associated with the "why it doesn't work" part rather than the "how it does work" part. As we've seen more IP being incorporated into these new SoCs, there's a lot of figuring out how these things work. How am I supposed to hook it up? What do they do here? That's a big part of our work as well. Now we are broadening that out with a whole new range of comprehension products, which we call visibility enhancement (VE), that sits side by side with debug to fix a major problem in verification methodology, one that slows everybody down and has everybody running around like chickens with their heads cut off whenever a bug is detected by the testbench. There is simply not enough visibility to figure out what the design is doing. Is it doing the right thing or not?

Your website talks about DFD or “Design for Debug”. There is a lot of talk about designing for this or that. I call it “design for the 'ilities' like manufacturability, testability, ….”
It's kind of a buzz phrase, isn't it? I apologize for tagging onto that. It seems to get people's attention. That's what we do in marketing.

How does DFD differentiate from verification or testing?
It's interesting; the phrase "design for debug" actually refers to something quite specific that is in fact related to our new product offering. Over the years we have thought about whether there is a design for debug in the general sense. Our conclusion was no. You pretty much design for everything else, and debug picks up the pieces so that you can comprehend what you have done. We didn't actually coin the term design for debug. It's a phrase that exists in the realm of silicon debug, another new buzz phrase that's related to our visibility enhancement line. It turns out to be just a small part of what we are doing in visibility enhancement, because VE works across the whole flow. It's funny, because we actually started thinking about visibility enhancement with respect to silicon debug and thus design for debug.

Let me take this hierarchically. Silicon debug is that part of the process where you are trying to figure out whether your actual silicon is working, or why it is not. You plug it into your prototype system board. You boot up some software and run some specific diagnostics. If it doesn't do what you expect, then you have to debug the silicon. You are thinking about this first silicon off the line as a simulation of itself, in a way. You need to figure out why it is doing what it is doing. There are other elements to that. For example, you may be thinking about how to improve yield by looking at the failures in your chips even though most of them are working. That's also silicon debug. There's another term, silicon diagnosis. The big issue with all of this is lack of visibility, because it is very difficult to get data off the chip. One of the things people do is design for debug, which means actually doing something in the logic to make it easier to get signal values out of the chip during that special phase, or even in the field. If you have field failures, you want to understand what has gone wrong and whether the problem is in the chip; silicon debug helps you with that too. You might think of it as design-for-debug logic: what do you have to put on the chip to make it possible to debug when you actually have silicon in front of you? There are a couple of companies specifically focused on this who are our partners. One of them is DAFCA; another is Epic II, recently acquired by MIPS. There are design for debug initiatives and technology inside many major semiconductor vendors. It is an emerging area.

Would you give me an overview of the two existing product lines: Verdi and Debussy?
Debussy - you may have noticed the theme of composers - is the moniker for our debug system. The name happens to sound like "debug system." We picked that motif and extended it with Verdi and now Siloti. Debussy is really an industry standard. If you talk with anybody doing design and verification, they know Debussy well, have probably used Debussy and probably loved it. Simply the best debug tool ever created. It involves waveform viewing; source code viewing and tracing; schematic viewing, which means drawing a picture of what's represented in the HDL; and also state machine viewing, drawing bubble diagrams based upon what the user has expressed in the HDL. It works at RTL and gate level and hooks up with all the simulators and all the verification tools in the industry. We have something like 40 partners. We are integrated with all the major EDA vendors' tools. Customers insist on these integrations from us and from Cadence, Mentor and Synopsys. It is used by thousands and thousands of engineers. Debussy is owned by virtually every semiconductor company in the world. I can't name one firm that doesn't have any. It's been around since 1997, believe it or not. It's almost 10 years. I won't say old, because it has been constantly refreshed to work with the latest languages and the latest simulators. We've reengineered the databases to deal with today's design sizes. It is a constant software engineering process to keep a product like this up to date. People who ask around will find that it is up to date and in fact not long in the tooth.

However, there is a bunch of advanced technology that we have developed in addition to what is in Debussy. We have put that into our current flagship debug product, which we call Verdi. Verdi is a superset of Debussy. It incorporates all of Debussy's capabilities plus a bunch of new stuff, namely debug automation, which means using formal analysis techniques to automatically trace from some behavior, some effect that you observe, back to its cause. The idea of debugging, trying to figure out design behavior, is really about starting from an effect and tracing back to its causes. We've automated a big part of that using formal analysis techniques. In addition we have incorporated into Verdi testbench debug, which means that the Vera and e languages are incorporated, along with the ability to trace across the boundary between HDL and HVL, which I believe is unique to Verdi. Verdi also incorporates assertion-level debug, supporting PSL, SVA and even OVA, and integrates the concept of viewing the results of assertions in the simulation. Say you have assertions and they fire; you can see that in the waveform. You click on that, you get the source code, and you go back to the design source. It's naturally integrated. Verdi even has the ability to do post-simulation assertion checking based solely on the signal dump, without having to rerun the simulator, which means you can add new assertions and check them by leveraging that valuable resource, your signal value dump. Verdi is a superset. It is our flagship. It's what we currently sell the most of. There are lots of Debussy licenses out there. Many people are upgrading to Verdi as the problems get more complex.
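As a concrete, if toy, illustration of post-simulation assertion checking, the sketch below evaluates a simple "request is acknowledged within N cycles" property against nothing but dumped signal values. The dump format and the property are assumptions made for illustration; this is not Verdi's engine or the FSDB API.

```python
# Minimal sketch of post-simulation assertion checking over a signal dump.
# The dump format is hypothetical: each signal maps to a list of
# per-cycle values taken from an earlier simulation run.

dump = {
    "req": [0, 1, 0, 0, 1, 0, 0, 0],
    "ack": [0, 0, 1, 0, 0, 0, 0, 1],
}

def check_req_ack(dump, max_latency=2):
    """Check 'every req is followed by ack within max_latency cycles'
    against dumped values only -- no simulator rerun required."""
    failures = []
    req, ack = dump["req"], dump["ack"]
    for t, r in enumerate(req):
        if r == 1:
            window = ack[t + 1 : t + 1 + max_latency]
            if 1 not in window:
                failures.append(t)
    return failures

print(check_req_ack(dump))  # [4]: the request at cycle 4 is never acknowledged in time
```

Because the check runs over the dump alone, a new assertion can be written and evaluated long after the simulation finished, which is the point being made above.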

Associated with Verdi are also two what I call helper modules. One is called nESL. Again I apologize for latching onto an industry buzz phrase. ESL means a lot of different things to a lot of different people. I think that there are three common elements in ESL: one is SystemC, one is transactions, and the other is hardware/software. The nESL module lets Verdi work at a higher level of abstraction by adding those three capabilities: basically treating SystemC as an HDL, transactional analysis, meaning being able to view and analyze the transaction flow, and being able to work together with software debuggers. At the other end there is a module that works at the netlist level when you are doing static analysis: timing, power and clock analysis. That is called nAnalyzer. It helps Verdi work at that level of abstraction for those specific netlist closure tasks.
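As a rough illustration of the transaction-analysis idea, the sketch below collapses cycle-level handshake activity into transaction records that can be viewed at a higher level of abstraction. The valid/ready protocol and the record format are invented for illustration; they are not nESL's actual mechanism.

```python
# Toy transaction extraction: turn per-cycle handshake signals into
# (start_cycle, payload) records, the kind of higher-level view a
# transaction analyzer presents instead of raw waveforms.

def extract_transactions(valid, ready, data):
    """Return a record for each cycle where the handshake completes
    (valid and ready both high)."""
    return [(t, data[t]) for t in range(len(valid)) if valid[t] and ready[t]]

valid = [0, 1, 1, 0, 1]
ready = [1, 1, 0, 1, 1]
data  = [0x00, 0xA5, 0x5A, 0x00, 0x3C]

print(extract_transactions(valid, ready, data))  # [(1, 165), (4, 60)]
```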

What is the price range for Verdi and Debussy?
There's a whole matrix of annual and perpetual prices but in general Verdi has a list price of around $10K a year.

What can you tell me about Siloti, the newly announced product line?
The news is that we have announced a whole new product line, really a new space, that we are calling visibility enhancement. The key point is that you get full visibility with a partial signal dump. I will get into why that is important right away. It helps you debug the results from lower-level representations, such as netlists in FPGAs and emulators, at the familiar RTL level. There are new technologies, of course, to analyze the design and figure out which signals you need, to expand the data into full visibility, and to correlate between where you execute and where you want to visualize. We've packaged that into two application-specific products called SimVE and SilVE. They are tightly integrated with Verdi.

I want to say something a bit audacious. The verification methodology is fundamentally broken. Not so much that it is too slow or that you can't express things completely. You have great languages, great simulators and all kinds of choices with respect to simulators and testbenches. All this stuff is really quite refined and mature. The problem comes when one of these testbenches detects that there is a mismatch. At that point the flow just breaks, because there is no data. In order to run at full speed you basically run without any dumping. That's true of simulation. It's true of emulation. It's true of prototypes. You just want to get to the point of detection as quickly as possible, but then of course you have to debug. The debugger is just sitting there waiting. It's waiting to get fed like a little bird in a nest. You have to feed it signal values. You ask, "How am I going to get the signal values out?" and "How do I know which ones?" People go through this manual process. Again, we are talking about accelerating engineers. What we picture is engineers sitting around asking themselves, "Which block should we dump?" Because if you turn on full dumping, dumping every signal on every value change, you will have multi-gigabyte files. You may be able to hold onto those, but it gets old fast. Not only that, the whole process just slows down because it is dominated by the extraction of the values and writing them to disk.

There has been work done on compression and on trying to make this dumping more efficient. We have been at the forefront of that with our FSDB format, working with all the simulation vendors. But it has become untenable to solve this by software engineering alone. We had to figure out a way to change the game. That's why we focused on visibility enhancement. The verification part runs fast. The debug part is in good shape. It's tying them together where we see a huge hole. It's real expensive to get that data out. That causes many, many iterations. You say, let's dump that block; then you find out that there is not enough data. Then you go back to get the rest of the data, and so on.

There's a bunch of benefits from these products. We address the problems: you get better comprehension, verification cost savings, and optimization of expensive resources such as emulators.

This fits right in, side by side with Verdi, because it is all in the name of comprehension. It is all based upon our open system platform with its design database and its signal database. They work together. You can feed Verdi directly from the simulators, or you can feed it through Siloti in order to improve your verification methodology. The two products are application specific because there are some subtle differences between what you need in simulation and what you need in the silicon arena.

Inadequate visibility hampers verification. There is little impact to dumping everything when you are just simulating a module, but it gets worse and worse as you go through the flow. When you are doing full-chip regression you are generally running without dumping, and if an error is flagged and you have to go back, you have access to everything, but boy is it expensive to dump everything. So you generally try to be selective. This leads to a big thought process and a lot of iterations. In emulation, people are trying to run as many jobs through as they can. They have these multi-user boxes. If you start instrumenting a lot of stuff, the image grows. Now it won't fit in the box, or only one design fits where five used to fit. It's that kind of stuff. When you are in the final stages of silicon validation, we talked about design for debug. That means having to think about where to put extra logic on the chip and how to route it to the other side. That's quite an involved process. How can we help with all that? Visibility enhancement optimizes the verification and validation process by reducing the impact of observing the things you need to figure out how the design works or why it doesn't. You have to make this tradeoff: am I going to dump everything and slow way down, or am I going to dump nothing and run real fast? How can I split the difference?

If we look at how it works: utilizing RTL or gate-level source files, it analyzes the design to figure out which signals are essential. What is essential is determined by what the expansion engine needs to give you full visibility. These things are tied tightly together. Again, it is a little different for simulation and for silicon. Once you have analyzed the design, you get what you need for wherever you are in that validation flow. You either get a dump list for simulation, help figuring out what to instrument in your FPGA prototype or your DSP, or a probe list to use with your emulator. Basically, that's going to produce a partial dump file, a subset of the signals, but because you have Siloti you can still get full visibility: when you want to look at a value at a particular time in Verdi, the expansion engine is invoked on the fly. You get on-demand expansion. There is never a batch process to try to get back all the signals; you just expand the things you are actually going to look at. That's the other thing about these huge dump files: say you have dumped everything. You would actually look at only a tiny fraction of those signal values. We figured out a way to do it smart, to use an analytical approach to dumping, so you can get back full visibility. That's going to transform the verification process, because you shouldn't have to think about what to dump or whether to dump. You should be able, most of the time, just to dump that essential file and be ready to debug immediately.
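A minimal sketch of that flow, under toy assumptions: treat primary inputs and register outputs as the essential set, dump only those, and recompute any combinational signal on demand from its fan-in. The tiny netlist and all names below are invented for illustration; Siloti's actual analysis and expansion engines are of course far more sophisticated.

```python
# Hedged sketch of visibility enhancement: dump only "essential" signals
# (here, primary inputs and register outputs) and re-derive combinational
# signals on demand.

# Combinational signals expressed as functions of other signals.
COMB = {
    "sum":   lambda v: v["a"] ^ v["b"],
    "carry": lambda v: v["a"] & v["b"],
    "out":   lambda v: v["sum"] | v["q"],
}

def _inputs_of(signal):
    # For the sketch we list fan-in explicitly; a real tool derives it
    # from the RTL or gate-level netlist.
    return {"sum": ["a", "b"], "carry": ["a", "b"], "out": ["sum", "q"]}[signal]

# Partial dump: per-cycle values for essential signals only.
ESSENTIAL_DUMP = {
    "a": [0, 1, 1, 0],
    "b": [1, 1, 0, 0],
    "q": [0, 0, 1, 1],  # a register output, dumped directly
}

def expand(signal, cycle, dump=ESSENTIAL_DUMP):
    """On-demand expansion: look the value up if it was dumped,
    otherwise recompute it from its fan-in, recursively."""
    if signal in dump:
        return dump[signal][cycle]
    fanin = {name: expand(name, cycle, dump) for name in _inputs_of(signal)}
    return COMB[signal](fanin)

# Full visibility from a partial dump: 'out' was never dumped.
print([expand("out", t) for t in range(4)])  # [1, 0, 1, 1]
```

Note that nothing is expanded until it is actually asked for, which is the "you just expand the things you are actually going to look at" point above.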

There are two new products, SimVE and SilVE. SilVE includes a very rich visibility analysis capability to help you figure out what to instrument, and a data expansion capability that works with that analysis. Those are joined together. Moreover, data expansion in the silicon world works with what we call extraction correlation. The expansion is actually done in the context of the RTL; when you run your emulator, of course, it is a gate-level netlist.

SimVE is basically a subset because it includes simplified data expansion and visibility analysis. There is no need for correlation here, so the expansion is simpler and the analysis is simpler, because you are debugging at the same level that you are verifying.
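A toy illustration of why the silicon side needs that correlation step and the simulation side does not: values are captured on gate-level nets as renamed by synthesis, but must be presented on the RTL names the engineer knows. The naming scheme and mapping below are hypothetical.

```python
# Correlation map from RTL names to the gate-level nets that carry them,
# as an (invented) synthesis flow might rename and split them.
RTL_TO_GATE = {
    "ctrl.state[0]": "ctrl_reg_state_0_/Q",
    "ctrl.state[1]": "ctrl_reg_state_1_/Q",
}

# Values captured from the emulator on gate-level nets, per cycle.
GATE_DUMP = {
    "ctrl_reg_state_0_/Q": [0, 1, 0],
    "ctrl_reg_state_1_/Q": [0, 0, 1],
}

def rtl_value(rtl_name, cycle):
    """Present a gate-level capture at the RTL level of abstraction."""
    return GATE_DUMP[RTL_TO_GATE[rtl_name]][cycle]

print(rtl_value("ctrl.state[1]", 2))  # 1: debugged at RTL, executed at gate level
```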

To summarize, we see design comprehension as a major bottleneck because it involves the engineers' personal time. You are paying somebody to try to figure things out. It's a hidden problem, because engineering management is going to say, "That's what engineers do!" We accelerate that work. You can use engineers better. They can spend more time on higher value-adding propositions like adding features to the chip or writing testbenches. If you talk to engineers personally, they say, "We love your tool because we can go home and spend time with our family instead of having to sit in front of the tube all day Saturday." We have augmented our comprehension solution offerings by adding these visibility enhancement products side by side with our debug and debug automation tools. The VE products are much higher value. They change the methodology. So they list for $50K a year each.

When do you plan to announce the products?
March 6th. We have been working with customers for over a year. These things are in production. We have actually sold them.

Are any partners making announcements at the same time?
Yes. EVE, DAFCA, ProDesign, and First Silicon Solutions (FS2) will announce integration with Siloti SilVE at that time.

How many firms were involved with the prerelease?
Tens of engagements! I would like to avoid a specific number, but it was more than 30.

Is there a sweet spot for your product?
I would say anytime it is big and complex. That sort of implies things like processors, graphics, network-on-a-chip, large SoCs. It is real interesting, because some designs aren't as big but change values a lot, so the dump files get very, very large. There is also a design methodology question. How familiar is the design? Are you using a lot of unfamiliar IP? There our debug tools certainly have a lot more value, and we think Siloti will bring a lot more value there too, because there is a greater need for data. If you have the same people who have done the same design before, and now they have integrated it with some other IP they are also familiar with, the comprehension problem isn't as big. It has to do with complexity and familiarity. But I think size is the main indicator. And of course process node drives size. The usual suspects.

Using third-party IP cuts down development time, but lack of understanding of the functionality can be a problem. How does your product address that?
By making it easier to figure it out. This is another one of the hidden problems in the industry. IP speeds up certain parts of the process, namely design. You don't have to do the design from scratch. But you still have to try to understand how that IP works. It turns out that when you get IP, it doesn't always do exactly what you thought. It is subject to spec interpretation differences. There may be some bugs. The guys who bought the IP thought it had this feature and it worked just like this, but it really didn't. There is a big process of understanding IP in order to bolt it together with your own stuff.

If there is a bug, you now have this SoC that is comprised of your own stuff plus a bunch of diverse IP, and it does not do something. You've got all these pieces hanging together, maybe there's an internal bus. Alright, where does the problem originate? You have this complex flow now. This part does one thing and the other part does another. There is something from the outside and then it responds. Where did it go wrong? It could be thousands of cycles back, and all kinds of interdependencies could have been involved.

In software debugging you can dynamically change a value and proceed to see the impact. Is there any similar capability?
There are ways to do that in the simulator, for example, forcing values. It's not quite as flexible as in the software world, because everything is hardwired. It depends upon the design whether changing a value is going to allow you to figure out what the right answer should have been. On the other hand - and this is not part of our announcement - the Siloti technology goes beyond the announcement. There are more application-specific products coming down the pike. Those will go deeper into silicon and even into the test area, where you are on the tester and correlating the test results back to the results you got in simulation. There we have a technology for doing exactly what you said: what-if analysis, which we use to try to confirm that a cause you have observed is actually causing the mismatch.

The answer is yes. But it is fundamentally different from the software world.
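Here is a hedged sketch of that what-if idea, reusing the expansion notion from earlier: force a suspected signal to a corrected value and re-derive its downstream cone from the dump to see whether the observed mismatch disappears. The tiny netlist and all names are invented for illustration.

```python
# What-if analysis over a dump: override a suspect and recompute its cone.

COMB = {
    "parity": lambda v: v["d0"] ^ v["d1"],
    "error":  lambda v: v["parity"] ^ v["parity_expected"],
}
FANIN = {"parity": ["d0", "d1"], "error": ["parity", "parity_expected"]}

DUMP = {"d0": 1, "d1": 1, "parity_expected": 1}  # a single cycle, for brevity

def evaluate(signal, forces=None):
    """Evaluate a signal from dumped values, honoring what-if forces."""
    forces = forces or {}
    if signal in forces:
        return forces[signal]
    if signal in DUMP:
        return DUMP[signal]
    vals = {n: evaluate(n, forces) for n in FANIN[signal]}
    return COMB[signal](vals)

print(evaluate("error"))                    # 1: a mismatch is observed
print(evaluate("error", forces={"d1": 0}))  # 0: forcing d1 removes the mismatch,
                                            # supporting d1 as the suspected cause
```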

Who do you see as possible competition for this new product line?
Nothing overlaps with Siloti at this point, nothing obvious to me. The traditional competition for Verdi is not so much standalone debuggers; we are the only one generating significant revenue from that niche. The competition is big. It's what's bundled with the simulators. Cadence, Synopsys and Mentor all have captive debuggers. There are two fundamental differences. One is that they only work with one simulator. They do not do anything to help you unify the design comprehension process from systems to silicon, which is our mission. They also are not able to focus technology and product development resources on this area, because they have no one to pay for it. I think that value comes from focus, from really being dedicated to this part of the flow. So if you compare technical apples to apples, you will see that there is really a significant productivity difference with Verdi. That's why we have a business. It is as simple as that. There is a place for simple debug tools at far less cost, and some people use those tools just brilliantly. That's why we are not bigger. Every market has its right size. So our job is to build our product as good as we can, so that it satisfies as many needs as possible. Of course there is always the return-on-investment issue. We could add a feature, but would we get paid for it? That's just product marketing. Verdi is clearly differentiated by its capabilities from the tools it competes with.

How much revenue does Novas have?
We are privately held. We do not announce specific revenue numbers. In the worldwide debug market we are in the second tier. We are smaller than Magma but in the same league as Synplicity.

Editor: Synplicity had $62 million in sales revenue during 2005

Who owns Novas? Founders, venture capital, EDA or semiconductor firms, ..?
The investment is primarily strategic as opposed to traditional venture capital. We are in close partnership with a Taiwanese EDA company. There is investment from there but we basically bootstrapped the company. The amount of investment is much lower than other companies our size.

How do you sell, direct sales, distributors, VARs?
In partnership with our Asian partner we sell directly in most of the world. There are two sides. There's an Asian side and a European side. In Asia they run the channel and we run the channel in the US and Europe. We work together.

Would you expand on this relationship?
Novas Software is unique in that we have two headquarters, one in San Jose, CA and another in Taiwan. Our Asia headquarters is responsible for the sales and support functions throughout Asia, and our US headquarters is responsible for sales and support in the Americas and Europe. While we employ a direct sales and support channel in North America and Europe, in Asia we have adopted a combination of direct channel and distributor methods. Some of our Asia distributorships are joint ventures between our Asia headquarters and regional professionals who have an investment in the operation. On the product development side, both our offices share in the effort. Each office works independently as well as collaboratively. It is this synergy between the two offices that has led to our strong success in Asian markets compared to a typical EDA company.

How is the revenue and customer base spread geographically?
We are stronger in Asia than the typical EDA company because we have a headquarters there. There is a traditional ratio and we do better than that because we are stronger in Asia. We are more balanced than other EDA companies.

What are those with access to the pre-released version saying about Siloti?
They are reporting substantial reductions in verification time. The debug time savings really come from Verdi's capabilities. If you consider debug time to be everything that happens after you detect a bug, which includes rerunning the simulation, then absolutely yes.

If a designer is a current user of Verdi how long would it take for that designer to become proficient with Siloti?
It's interesting. From a use-model standpoint things don't change much. We've worked with a bunch of customers and shown them that this is an effective tool. Whether I can quantify that in any meaningful way is probably questionable. It's straightforward. It's a matter of pushing the right buttons. After setting it up, it just works, and basically you use Verdi as you always did. Your verification process is more efficient because you are dumping less.

In every EDA tool there's a use side and a setup side. The setup side is often done by the CAD engineers, the support group, in partnership with our application consultants, who help people get things set up and of course help if the customer has usage problems or questions. In general the setup side is more intense but of shorter duration. You use the tool all the time, but you set it up only once per project.

Before becoming CEO of Novas you were in charge of marketing. You are now launching a new product line. What do you anticipate will be the most effective marketing tool to make this a success?
PR! Really! The articles we contribute, the stories that are written about it, word of mouth… Word of mouth has been very important to us. Our customers like our products. They have ranked us number 1 in customer satisfaction four years in a row in the annual EE Times survey. I think customer experiences and what customers say, as well as the stories carried by journalists such as yourself, are the most effective tools for sure.

And of course, working with our customers. We have a very large installed base. We have direct relationships with all the semiconductor companies worldwide. That direct channel is extremely important. It's customer relations, not just public relations. PR starts with the press release, but when we talk about public relations, I would say the customer relations side is more important. This means that our people are out there daily with the decision makers and the users of these kinds of tools.



The top articles over the last two weeks, as determined by the number of readers, were:

Sociology of Design & EDA -- DATE 2006 Keynote Presentation By Walden C. Rhines, Chairman & CEO, Mentor Graphics

Analog and Mixed-Signal Chip Design Gets More Productive with Tanner EDA's New S-Edit Schematic Capture Tool. S-Edit is anticipated to reduce front-end design time, which typically is about 60 percent of the total design process. S-Edit provides schematic capture, netlist input and output with automatic conversion of Cadence and ViewDraw EDIF schematics, and integrated analog simulation. Users can run simulations and cross-probe from within S-Edit, making design more efficient and real-time.

EMA Expands Into IC Market as a Cadence Channel Partner EMA Design Automation, announced that its Cadence product sales portfolio has expanded to include products from the Virtuoso custom design platform, Incisive functional verification platform, and DFM technologies. A Cadence distributor since 1998, EMA's role has been expanded to allow EMA to offer pre-selected Cadence customers a broader portfolio of Cadence products fully supported by Cadence's customer support group.

CoWare Expands Senior Management Team With Addition of Two New Executives: Tim Smith and Mike Faust Tim Smith has joined the company as vice president of worldwide sales, reporting to CoWare president and CEO, Alan Naumann. And, Mike Faust has joined the company as vice president of North America and Europe sales, reporting to Tim Smith. Smith was most recently vice president of worldwide sales at Sonics, a start-up IP provider. Mike Faust was most recently vice president of worldwide sales at electronic design automation start-up, Reshape.

Agilent Technologies Introduces Next-Generation Parametric Test Software; New Version Delivers Seamless Laboratory and Development Environment for Parametric Test on Parameter Analyzers, Desktop PCs EasyEXPERT 2.0 provides an intuitive, task-oriented approach to semiconductor device characterization. It is used to test devices in non-production applications, such as process development, modeling, reliability and failure analysis. Agilent is also introducing Desktop EasyEXPERT 2.0, which allows users to quickly and easily develop application tests and perform data analysis on MS Windows-based PCs. EasyEXPERT 2.0 features several new capabilities, including a new quick test mode, an improved switching matrix control function for the Agilent B2200A and B2201A, and a new automatic data-export option.



Other EDA News

Flomerics announces FLO/PCB for Allegro, Offering Bi-directional Interface to Cadence Allegro PCB Editor

Rick Tumlinson Leads Keynote Series for 22nd Annual User2User Conference; Registration is Now Open

Simucad Releases LDMOS and HV MOS Compact SPICE Models

BOXX Launches GoBOXX 1400 Mobile Workstation With AMD's Athlon 64 X2 Dual-Core Processors

Sociology of Design & EDA -- DATE 2006 Keynote Presentation By Walden C. Rhines, Chairman & CEO, Mentor Graphics

CoWare Expands Senior Management Team With Addition of Two New Executives: Tim Smith and Mike Faust

Arteris Announces STMicroelectronics Use of NoC for Next Generation Wireless Infrastructure Platform; Pioneering Network on Chip Technology Delivers Higher SoC Performance While Reducing Design Cycle

MatrixOne to Host Seminars on Streamlining the Product Development and Regulatory Compliance Processes for Medical Device Manufacturers

ATI Deploys Synopsys' Star-RCXT for Silicon-Accurate Parasitic Extraction

LogicVision Announces Development of Standard Design and Test Environment With STARC

Tensilica Offers Free Diamond Core Software Development and Modeling Tools

EDA Tech Forum Announces 2006 Worldwide Event Series

MatrixOne deployed at the heart of Faurecia's Core System; MatrixOne's PLM platform enables the automotive supplier to unite its international teams on a single data management system

Celoxica Debuts ESL Starter Kit for Xilinx Customers; Development Board and ESL Tool Combination Accelerates Adoption of C-based Synthesis for Xilinx FPGA Designers

Summit Design Delivers Vista-PE(TM) to Provide Advanced SystemC Debug and Analysis in an Affordable Individual User Package; Vista-PE Provides Powerful Debugging for Advanced Users While Simplifying SystemC Learning Curve for Novices

Summit Design Upgrades Its Membership in the Open SystemC Initiative to the Corporate Level; Assumes Driving Role in OSCI With Increased Investment and Commitment to SystemC

Athena Design Accelerates Path to Design Closure for Complex ICs With Next-Generation Optimization System

Other IP & SoC News

Micronas Introduces truD(R)HD for HDTV and Eliminates Motion Blur for Flat Panel TVs With 120 Images Per Second

Tatung Launches New Digital STB2000 Set-Top Box Series Based on Sigma Designs' SMP8634 Media Processor

IBM and Rambus Sign Technology License Agreement for Cell Broadband Engine(TM)-Based Processors and Companion Chips

Versatile Li-Ion Battery Charger Chip from STMicroelectronics Saves Space for Compact Applications

North American Semiconductor Equipment Industry Posts February 2006 Book-to-Bill Ratio of 1.01

Avago Technologies Introduces Miniature 802.11b/g Power Amplifier With Industry's Lowest Current Consumption

Lattice Semiconductor Announces Agreement to Settle Class Action Litigation

Atmel Introduces the World's Fastest Monolithic 12-bit Analog-to-Digital Converter

Mindspeed(R) First to Support Emerging Ethernet-over-DS3/E3 Protocol with Line Card-on-a-Chip Family

u-Nav Announces RendevU(TM), a True Monolithic GPS Single-Die Receiver

Ramtron to Showcase New Family of High Performance Microcontrollers at Embedded Systems Conference

Jazz Semiconductor Announces Volume Production for Power CMOS Processes; Growing Power Management Market Forces Designers to Find Foundry Processes Tailored for System Power Management and Power Control

ZSP Expands VoIP Solutions With Z.Voice SoC Sub-System and VoWiFi With AEC

SyChip Introduces VoIP Processing Engine for Mobile Terminals; Next Generation Module Incorporates VoIP Engine to Reduce Power, Footprint and Cost

Intel Boosts Energy-Efficient Performance With First Dual-Core Low-Voltage Intel(R) Xeon(R) Processor

Microchip Technology Introduces High-Efficiency, Low-Power and Low-Noise Charge Pump DC/DC Converters; Positively Regulated Devices are Among the Most Highly Efficient Charge Pumps in the Industry

PulseWave RF Completes Successful Demonstration of Industry's First Digital Power Amplifier Module for Wireless Base Stations

Fujitsu Introduces New 8-Bit Microcontrollers for Digital Audio-Visual Systems, Household Appliances

National Semiconductor's New Low-Noise CMOS Operational Amplifier Operates up to 24V

National Semiconductor's LVDS Buffer Features Industry's Best Jitter Performance and ESD Protection

Wide Input Voltage Range Enables Flexibility in Supertex's New Current Sensor

AMI Semiconductor Releases Miniaturized BelaSigna(R) 200 Audio Processing System; New Chipscale Device Optimized for Use in Advanced Small Form Factor Audio Products

Oxford Semiconductor Announces Industry's First Single-Chip Combination USB and Memory Host Controller; New Chip Improves Performance and Utilization; Supports USB and CE-ATA Ideal for Portable Applications

TI Unveils Cost-Optimized TPS40K(TM) DC/DC Controllers

Rim Semiconductor Closes $6 Million Financing; Company Also Reduces Current Liabilities by $845,000

Winbond Introduces the Industry's Highest Performance Serial Flash Memories

Freescale Announces Ultra-Low-End Core for New Entrants to 8-bit; RS08 Core Defines New Starting Point for Freescale Controller Continuum

Anchor Bay Technologies Sees HD Video Processing Success with AMI Semiconductor's FPGA-to-ASIC Conversion Technology; Speeds Time-to-Market and Reduces Cost

Xilinx Launches ESL Initiative to Accelerate Adoption of System Level Design for FPGAs

Denali Chosen by IBM to Deploy New Development Toolkits for Power Architecture(TM)

Mitrionics' FPGA Supercomputing Platform Now Compatible With Xilinx Virtex-4 Platform