Visual Architect and Development System for Architectural Exploration and Performance Analysis - CoFluent Design

[Figure: CoFluent’s Y Modeling Flow]

I believe I saw somewhere that the product does not use TLM but another type of modeling.
Actually, our library is built on top of TLM. We have another layer above TLM, but we are using the TLM concepts. We provide a transactional modeling and simulation capability, but at a different level than the one you find in other tools. Other tools are either cycle accurate or bus accurate; we are one abstraction level above. What we are doing is a type of message-passing TLM, the equivalent of TL/3. Bus accurate is TL/2 and cycle accurate is TL/1. We are a TL/3 type of tool. What I call message passing means that the functions in your model simply exchange messages with each other through a standard send/receive type of protocol. It provides very fast simulation and a lot of flexibility in architectural exploration.
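
To make the idea concrete, here is a minimal plain-C++ sketch of message-passing modeling in the TL/3 spirit: two model functions exchange messages over an abstract channel through simple send/receive calls, with no bus protocol or cycle accuracy involved. The channel and function names are hypothetical illustrations, not CoFluent's API.

```cpp
// Minimal message-passing (TL/3-style) sketch: two model functions
// exchange untimed messages over an abstract channel. Illustrative
// plain C++, not CoFluent's API.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Abstract channel: no bus protocol, no cycles, just send/receive.
template <typename T>
class Channel {
public:
    void send(T msg) {
        std::lock_guard<std::mutex> lock(m_);
        q_.push(std::move(msg));
        cv_.notify_one();
    }
    T receive() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T msg = std::move(q_.front());
        q_.pop();
        return msg;
    }
private:
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
};

int main() {
    Channel<std::string> ch;
    // Producer function: models an application block emitting frames.
    std::thread producer([&] {
        for (int i = 0; i < 3; ++i)
            ch.send("frame " + std::to_string(i));
        ch.send("done");
    });
    // Consumer function: models the block that processes the frames.
    std::thread consumer([&] {
        for (std::string msg = ch.receive(); msg != "done"; msg = ch.receive())
            std::cout << "processing " << msg << "\n";
    });
    producer.join();
    consumer.join();
}
```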

How would you compare the CoFluent approach to the approach others have taken in the ESL space?
The others in the ESL space are co-simulating software with a model of the platform created by assembling various IPs, RTL or SystemC. For those tools you need IP for all the different parts. If you do not have those IPs, you have to write them, which takes time. If you do not want to write them, then you have to buy them from the vendors, which takes money. You also need an ISS model to execute the software code that you cross-compile for the target processor. You need a lot of detail and a lot of effort to get to your first simulation, and at that point you do not have much flexibility left in terms of architectural exploration. You may be able to fine-tune your platform here and there, but the major decisions have already been taken. With CoFluent you do not need a single line of final software code. You do not have to bring in any RTL IP. We are not based upon ISS simulation. The system is very flexible: you can change pretty much any parameter in your system architecture very easily. The platform modeling is done in minutes or hours. The architectural exploration through the mapping tool is also done very easily, through drag-and-drop operations, in minutes. If, for example, you decide that instead of running a function in a hardware IP or on a hardware accelerator you want to move it to your CPU or a DSP, this can be done in five minutes. You do not have to reprogram anything or create a new model of any sort; it is just handled by the tool. The tool gives you the capability to explore the wide space of architectural possibilities.
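
The remapping idea can be sketched in a few lines of ordinary C++: the functional model stays untouched, and "moving" a function amounts to changing one entry in a function-to-resource mapping table whose per-resource timing parameters feed the performance estimate. All names and numbers below are hypothetical; in CoFluent this is done graphically, not in code.

```cpp
// Sketch of the mapping idea: a function is "moved" by editing one table
// entry, after which per-resource timing parameters drive a new estimate.
// Function names, resource names and costs are hypothetical.
#include <iostream>
#include <map>
#include <string>

int main() {
    // Cost (microseconds per invocation) of each function on each
    // candidate processing element: invented characterization data.
    std::map<std::string, std::map<std::string, double>> cost_us = {
        {"fft",    {{"cpu", 120.0}, {"dsp", 40.0}, {"hw_ip", 8.0}}},
        {"filter", {{"cpu",  60.0}, {"dsp", 25.0}, {"hw_ip", 5.0}}},
    };
    // The mapping under exploration: change one entry to move a function.
    std::map<std::string, std::string> mapping = {
        {"fft", "hw_ip"}, {"filter", "cpu"}};

    auto report = [&] {
        for (const auto& [fn, pe] : mapping)
            std::cout << fn << " on " << pe << ": "
                      << cost_us[fn][pe] << " us\n";
    };
    report();

    mapping["fft"] = "dsp";  // the "drag and drop": remap fft to the DSP
    std::cout << "-- after remapping --\n";
    report();
}
```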

After the user has finalized the decisions on what is to run on hardware and what is to run in software, what is the next step in the design process? How does CoFluent fit into the design flow?
What we have seen, and what is our vision of system design, is that it is not a sequential flow in the sense that you start with system-level design and then go to hardware and software implementation. That is not how it works. In our opinion you have two parallel flows. You have the system-level design flow, which goes from the beginning to the end of the project. And then you have the software and hardware implementation flow, which is done with the usual EDA tools and software development tools. We basically do not touch that flow with CoFluent. I would include in that flow any type of low-level co-simulation tool that takes hardware RTL-level platforms and co-simulates them with an ISS and software code. That, in my opinion, belongs to the hardware implementation flow.

With CoFluent the idea is to start very early, with the first system architecting activity right after the specification, to obtain the first executable specification and also the first test scenario, the first testbench, for the particular system architecture that you define. You then use this executable specification to drive the hardware and software implementation. Out of this you have a blueprint to help you do the hardware/software partitioning, the block decomposition and the software task decomposition, to analyze the interactions between the different blocks, and to guide how the hardware and software components are written. You have the first specification of timing requirements, as well as memory consumption and power consumption, and the first views of CPU load, bus load and the cost of everything in the system. This becomes your reference model.

But this reference model is not static; you do not just forget about it until you reach the end of the project. You continue to monitor, update and refine it along with the progress made at the implementation level. In the end you get a direct correspondence between the final hardware and software system and the high-level model of the system that you created, a high-level description including verified requirements for timing and performance. Out of this you are now capable of extracting reusable components, system-level models, into a library that you can use in the next project. When you start your next project, it will be easier and much faster to just take the components out of the library, assemble them and try new functionality. This is how we see the entire flow.
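
As a rough illustration of the kind of figures such a reference model carries, the following sketch computes a CPU load as the sum of execution time over activation period for the functions mapped to that CPU, and a bus load from their message traffic. The numbers are invented for the example and do not come from the tool.

```cpp
// Back-of-the-envelope sketch of reference-model metrics: CPU load as the
// sum of (execution time / period) over the functions mapped to one CPU,
// and bus load from their message traffic. All numbers are hypothetical.
#include <iostream>
#include <vector>

struct MappedFunction {
    double exec_ms;    // execution time per activation on this CPU
    double period_ms;  // activation period
    double bytes;      // bytes sent on the bus per activation
};

int main() {
    std::vector<MappedFunction> on_cpu = {
        {2.0, 10.0, 4096}, {1.5, 20.0, 1024}, {0.5, 5.0, 512}};
    const double bus_bandwidth = 50e6;  // bytes/s the bus can carry

    double cpu_load = 0.0, bus_bytes_per_s = 0.0;
    for (const auto& f : on_cpu) {
        cpu_load += f.exec_ms / f.period_ms;
        bus_bytes_per_s += f.bytes / (f.period_ms / 1000.0);
    }
    std::cout << "CPU load: " << cpu_load * 100.0 << " %\n";
    std::cout << "Bus load: " << bus_bytes_per_s / bus_bandwidth * 100.0
              << " %\n";
}
```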

How did you verify that your modeling is correct?
We do have some cases where customers took existing designs, created models and made measurements. They got numbers within 5% of reality, in terms of performance prediction as well as any type of timing requirement. That is very close. It is definitely accurate enough to let you make most of the decisions very early, when you need to: should this be in hardware? Should it be in software? What types of load do I have and expect? What type of memory requirements?

When did the product first become available? What is the current version?
We came out with Version 1.0 sometime in 2004. Today we are announcing V2.1; there will be a preview at DAC. It is quite a mature technology, given that we started from a third-generation technology out of the university. Version 2.1 represents a fourth-generation codebase.

How big a company is CoFluent?
We are about 10 people.

How many customers do you have?
I would have to count them. I would say between 10 and 20.

Is there a particular end application for which this product is a particularly good fit?
We are working with semiconductor providers, terminal manufacturers and equipment manufacturers doing digital multimedia in the wireless and telecommunications industries. That is where we have most of our customers today.

What is the pricing for the products?
There is quite a range of prices, depending on node-locked versus floating licenses and geographical options. It ranges from $12K to $150K.

So for $12K you have one module and for $150K all the modules?
$12K would be for a simple node-locked license, and the other price would be for all the modules with a worldwide floating license.

Is your sales model to sell direct?
Yes. We also have representatives in Israel and Germany.

How about Asia Pacific?
Not yet. It is coming; it is next on our agenda. Basically, Hagay was our first step in addressing the American market. Now we are thinking of the Japanese market, something we can take care of in the coming year.

You have already explained the difference between CoFluent and other ESL offerings. Is there any other form of competition that you have encountered?
I would say that the competition is Excel spreadsheets. Until now, when system architects had to make the types of decisions we have been talking about, very early in the design flow, they used guesstimates and Excel spreadsheet calculations. That is not sufficient today. They need a dynamic view of the entire system, including its application and platform. Complexity is driving the adoption of this type of tool. With the complexity of today's designs, a project can no longer afford to make its decisions on the basis of Excel spreadsheets. That too often leads to wrong decisions and wrong directions, which are very costly because, as you know, many problems arising in designs are due to architectural and functional flaws: wrong decisions taken early in the project, even before the hardware and software design has started. We are trying to address this particular problem area.

