Last week at Siemens PLM Connection 2017, I was introduced to several new products and technologies, and reintroduced to a product I had prior experience with but needed a refresher on as to where it stands today — Solid Edge ST10.
The latest release brings just about every aspect of product development forward with new design technology, enhanced fluid flow and heat transfer analysis, and cloud-based collaboration tools. Solid Edge ST10 makes it easier to optimize parts for additive manufacturing (AM) and obtain quotes, material selection and delivery schedules from AM service providers. Newly integrated topology optimization technology, combined with Siemens’ Convergent Modeling technology, improves product design efficiency and the ability to work with imported geometry.
Solid Edge was originally developed and released by Intergraph in 1996 on the ACIS geometric modeling kernel; it was later moved to the Parasolid kernel. In 1998 the product was purchased and further developed by UGS Corp., and the purchase coincided with the kernel swap.
Solid Edge ST10 Preview (Video Courtesy of Nancy Johnson).
Speaking to Solid Edge ST10's appeal to SMBs, John Miller, Senior Vice President and General Manager at Siemens PLM Software, said, “Digitalization is leveling the playing field, providing unlimited opportunities for small- to medium-sized businesses to disrupt industry.”
It’s not often (thankfully) that I cover two major conference events in the same week, but this week was exceptional (in a good way) — Siemens PLM Connection and RAPID + TCT 3D Printing & Manufacturing.
Siemens PLM Connection
The Siemens PLM Connection event in Indianapolis was a first timer for me and I got a lot out of it.
The major theme I came away with was Siemens’ push for what it calls the digital enterprise hub based on a digital twin.
There are many definitions of the digital twin, but for Siemens, a digital twin is a set of computer models that provide the means to design, validate and optimize a part, a product, a manufacturing process or a production facility in the virtual world. It does these things fast, accurately and as close as possible to the real thing – the physical counterpart. These digital twins use data from sensors installed on physical objects to represent their near-real-time status, working condition or position.
Siemens supports digital twins for product design, manufacturing process planning, and production through the Smart Factory loop and via the Smart Product.
A deployment of a digital twin includes three pillars: in product design, in manufacturing process planning and in feedback loops.
1. In product design. A digital twin includes all design elements of a product.
Cleaning up after anything is not usually an especially enjoyable endeavor, even where subtractive or additive manufacturing processes are concerned. This is where post processing comes in.
The Problem with CAD In Subtractive Manufacturing
To cut parts, a CNC cutting machine has to be programmed with the path of the desired shape or nest of shapes. Most parts are designed in a CAD program and saved in a CAD drawing format, such as DWG, STEP, or several others.
But you can’t just take the CAD file and send it to a cutting machine. It has to be interpreted first, so the CNC on the cutting machine can understand it. The problem with CAD file formats is that:
They usually contain a lot of information that the CNC cutting machine doesn’t need or would find confusing, such as title blocks, Bills Of Material, dimension lines, borders, welding symbols, etc.
They usually have multiple layers, some of which are useful to the CNC and some of which the CNC needs to ignore.
They sometimes have many parts in one file, some of which might need to be cut on the CNC cutter, and some might need to be machined, cast, or sent to an EDM.
They don’t have all of the information needed by a CNC machine. Machines need to be told when to turn a process on and off, how to lead-in and lead-out from a part, etc. All of this information is referred to as the process technology.
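To make the idea of process technology concrete, here is a minimal Python sketch; the layer names, contour data, and M-codes are hypothetical, and real CAM software does far more. It filters CAD-style entities by layer (discarding drawing clutter), then emits G-code with a lead-in move and explicit process on/off commands.

```python
# Minimal sketch: turning CAD-like entities into simple G-code.
# Layer names, contours, and M-codes below are illustrative only.

CUT_LAYERS = {"CUT"}  # layers the CNC should actually cut

def generate_gcode(entities, lead_in=5.0):
    """entities: list of dicts with a 'layer' name and 'points' contour."""
    lines = []
    for ent in entities:
        if ent["layer"] not in CUT_LAYERS:
            continue  # skip title blocks, dimension lines, BOMs, etc.
        x0, y0 = ent["points"][0]
        # Lead-in: approach from an offset so the pierce point
        # does not land on the finished edge of the part.
        lines.append(f"G0 X{x0 - lead_in:.3f} Y{y0:.3f}")
        lines.append("M3")                       # process on (torch/laser/spindle)
        lines.append(f"G1 X{x0:.3f} Y{y0:.3f}")  # lead-in move onto the contour
        for x, y in ent["points"][1:]:
            lines.append(f"G1 X{x:.3f} Y{y:.3f}")
        lines.append(f"G1 X{x0:.3f} Y{y0:.3f}")  # close the contour
        lines.append("M5")                       # process off
    return lines

entities = [
    {"layer": "TITLE_BLOCK", "points": [(0, 0), (100, 0)]},   # ignored
    {"layer": "CUT", "points": [(10, 10), (50, 10), (50, 40)]},
]
gcode = generate_gcode(entities)
```

The sketch bakes in exactly the decisions a real CAD-to-CNC translator must make: which layers matter, which entities to ignore, and where the process-on, lead-in, and process-off events go.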
I spent this week in the beautiful city of Ghent, Belgium for a series of company and product overviews at Bricsys at an event the company called Bricsys Insights.
For me, this was an introduction to a company and product line I had heard about, but didn't know much about. This week that all changed for the better.
As a company, Bricsys has gone through several iterations since it was founded in 2002, and has emerged today as a real player in the CAD markets for both architectural and mechanical design applications. The company currently has 130-140 employees, the majority being developers, so it is efficiently run and product focused.
A lot has gone on in the past couple months at metrology giant Hexagon AB, so let’s have a look.
For starters, Hexagon AB, announced recently that it was acquiring MSC Software Corp. for $834 million cash. While not quite as big, the acquisition is Hexagon’s largest deal since it bought Intergraph for $2.1 billion in 2010.
MSC Software is the company that has brought products including Nastran, Patran, Marc, and Apex to market. For more than 50 years MSC has been a leading provider of CAE solutions, primarily simulation software for virtual product and manufacturing process development, and was one of the first 10 commercial software companies.
As I noted when the acquisition was first announced, acquiring MSC provides Hexagon with a strong foothold in the competitive simulation/analysis market with MSC’s diverse portfolio of CAE applications.
Hexagon: Shaping Smart Change
As it has with all recent acquisitions, Hexagon plans for MSC to run as an independent business unit within the Hexagon Manufacturing Intelligence (MI) division, which focuses on automotive, aerospace, machinery, consumer electronics, and other discrete manufacturing markets, getting close to offering comprehensive end-to-end solutions in these diverse workflows. About the only link missing is a true CAD component, and I can think of several possible targets for closing this gap.
Process-oriented solutions are essential for manufacturers, and MSC’s applications definitely address design and engineering processes through simulation and analysis.
I just returned from Scottsdale, Arizona after a great week at the annual Congress on the Future of Engineering Software (COFES) event.
Over the years I’ve attended probably 8-10 of these unique events, and they have all been a bit different, but I have always come away with new insights and perspectives on engineering software.
The keynotes are always thought provoking and the roundtable discussions and general conversations are stimulating, because they often provide food for thought and questions for further investigation rather than just simple answers.
One of the aspects I especially appreciate about COFES is that the company behind the conference, Cyon Research, strictly forbids blatant “selling” by attendees. For the most part this request is honored; I noted a few exceptions, but ignored them. This event is meant to be more a meeting of the minds than an opportunity to capitalize on a captive audience.
This year’s theme was the many facets of transformative complexity: how to understand it and how to take advantage of the benefits it can present.
At COFES, Everyone Is Encouraged (and Expected) To Participate
The growth of complexity in everything we do is presenting us with new and difficult challenges, from our constantly changing business environment to the conflicting requirement of more simplicity (for the customer) in products that require more complexity to deliver. New phenomena result from complexity, often requiring consideration of things that were not previously an issue, such as the demands of IoT and the emergence of additive manufacturing. And it’s not just products: emergent properties of complexity occur in processes, in IT, in business models, in politics, and in economies.
A few weeks ago at SOLIDWORKS World, I got reacquainted with a CAM company that previously I had only limited experience with – DP Technology. After talking with Don Davies, DP Technology’s VP of Americas, I came away from the event impressed with the company, as well as with where DP Technology and its ESPRIT CAM product line are heading.
DP Technology Corp is a privately held company co-founded in 1982 by Daniel Frayssinet and Paul Ricard. The company gets its name from the first names of the co-founders – (D)an and (P)aul. The company’s corporate headquarters is in Camarillo, California. The rest of the company is structured by function with offices in France, Germany, India, Italy, China, and Japan. DP Technology is the developer of the diverse ESPRIT CAM system sold and supported via the company’s regional offices and its network of resellers throughout the world. ESPRIT has also developed close partnerships with several leading milling, turning, and wire-EDM machine tool manufacturers, such as Okuma, Mazak, DMG Mori, Citizen, Mitsubishi, GF AgiCharmilles, and Sodick.
As has been the case for several years, not all computer users need a workstation-class machine, but many do, especially with graphics-oriented and computationally intensive applications, such as MCAD, FEA, and animation. However, high-powered workstations for graphics-intensive applications can come with a price premium. You can pay a relatively high price for higher levels of performance, but it is often worth it. There are exceptions, however, and the HP Z2 Mini workstation offers the best of both worlds – a versatile machine with excellent performance at a reasonable price.
I’d classify the HP Z2 Mini as a mid- to high-level machine that provides just about everything most customers would need in a desktop engineering workstation. Then there’s the added benefit of the small footprint, which can be huge in a tight work environment.
At software conferences it’s always fun to catch up with old industry acquaintances, but it’s even more interesting to strike up conversations with new companies with innovative ideas. That very thing happened a few weeks ago at SOLIDWORKS World 2017 when I was introduced to Xometry, a company committed to bringing manufacturing back to the U.S. with its software platform for building a reliable and scalable manufacturing program. It employs a unique machine-learning approach that provides its customers with optimal manufacturing capabilities at the best price, based on parameters input by customers.
Founded in 2014, Xometry is hoping to transform American manufacturing through a proprietary software platform that provides on-demand manufacturing to a diverse customer base, ranging from startups to Fortune 100 companies. The platform provides an efficient way to source high-quality custom parts, with 24/7 access to instant pricing, expected lead time and manufacturability feedback that recommends best processes and practices. With more than 100 manufacturing partners, the manufacturing capabilities include CNC machining, 3D printing, sheet metal forming and fabrication, and urethane casting with over 200 materials. Xometry’s 4,000+ customers include General Electric, MIT Lincoln Laboratory, NASA, and the United States Army.
While it seems that central processing units (CPUs) get all the glory for computing horsepower, graphics processing units (GPUs) have become the processor of choice for many types of massively parallel computations.
As the boundaries of computing are pushed in areas such as speech recognition and natural language processing, image and pattern recognition, text and data analytics, and other complex areas, researchers continue to look for new and better ways to extend and expand computing capabilities. For decades this has been accomplished via high-performance computing (HPC) clusters, which use huge amounts of expensive processing power to solve problems.
Researchers at the University of Illinois had studied the possibility of using graphics processing units (GPUs) in desktop supercomputers to speed processing of tasks such as image reconstruction, but it was a computing group at the University of Toronto that demonstrated a way to significantly advance computer vision using GPUs. By plugging in GPUs, previously used primarily for graphics, it became possible to achieve huge performance gains on computing neural networks, and these gains were reflected in superior results in computer vision.
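Those gains come from the data-parallel structure of neural-network math: the core operation is a matrix-vector multiply, and every output element can be computed independently, which maps directly onto a GPU's thousands of cores. A minimal plain-Python sketch of that structure (illustrative only; real GPU code would use CUDA or a library):

```python
# A neural-network layer is essentially y = W @ x: each output
# element is an independent dot product. This plain-Python version
# computes them one after another, but because no output depends on
# any other, a GPU can compute them all at the same time -- that
# per-element independence is exactly what GPUs exploit.

def matvec(W, x):
    # One dot product per row of W; rows don't depend on each other.
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

W = [[1, 0, 2],
     [0, 3, 1]]
x = [4, 5, 6]
y = matvec(W, x)  # two independent dot products: [16, 21]
```

Scaled up to the millions of such dot products in a deep network, serializing them on a CPU is slow, while a GPU dispatches them across its cores in bulk, which is where the reported performance gains in computer vision came from.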