This week MakerBot announced that it had sold more than 100,000 3D printers worldwide, making it the first 3D printer company to reach that milestone. The company attributed the achievement to providing an accessible, affordable, and easy-to-use 3D printing experience.
“Being the first company to have sold 100,000 3D printers is a major milestone for MakerBot and the entire industry,” said Jonathan Jaglom, CEO at MakerBot. “MakerBot has made 3D printing more accessible and today is empowering businesses and educators to redefine what’s possible. What was once a product used only by makers and hobbyists has matured significantly and become an indispensable tool that is changing the way students learn and businesses innovate.”
MakerBot was one of the first companies to make 3D printing accessible and affordable. Since its founding in 2009, MakerBot has pushed 3D printing forward and introduced many industry firsts. Thingiverse, the first platform where anyone could share 3D designs, launched even before MakerBot itself was founded. In 2009, MakerBot introduced its first 3D printer, the Cupcake CNC, at SXSW. In 2010, MakerBot became the first company to present a 3D printer at the Consumer Electronics Show (CES). Now, 3D printing is its own category at CES, with a myriad of 3D printing companies from around the world in attendance each year.
While proponents (usually with deep pockets) have touted their benefits, software patents have also been used in the software industry to suppress innovation, kill competition, generate undeserved royalties, and make patent attorneys rich. So I ask, are software patents still relevant?
It’s no secret that the engineering software business is extremely competitive, as it always has been. The engineering software business has also proven to be a very fertile ground for lawsuits regarding patent infringement, reverse engineering, and outright copying and pasting blocks of code.
Could stronger patent protection have prevented this from happening? Maybe, but probably not.
The Danger of Software Patents – Richard Stallman
Software patents have been hotly debated for years. Over time, opponents have gained more visibility than pro-patent supporters, despite having far fewer resources. Throughout these debates, the arguments for and against software patents have focused mostly on their economic consequences, but there is much more to the issue than money.
Earlier this week many of us in the MCAD community were saddened to hear of the passing of Andrew (Andy) Grove, the former CEO and Chairman of Intel Corp. He was one of the most acclaimed and influential personalities of the computer and Internet eras, and was instrumental in the development and proliferation of the PC-based CAD software we know today.
Born András Gróf in Budapest, Hungary in 1936, Mr. Grove came to the United States in 1956. He studied chemical engineering at the City College of New York, completing his Ph.D at the University of California at Berkeley in 1963. After graduation, he was hired by Gordon Moore (of Moore’s Law fame) at Fairchild Semiconductor as a researcher and rose to assistant head of R&D under Moore. When Robert Noyce and Moore left Fairchild to found Intel in 1968, Mr. Grove was their first hire.
He became Intel’s President in 1979 and CEO in 1987, and served as Chairman of the Board from 1997 to 2005. During his time at Intel and in retirement, Grove was a very influential figure in technology and business, and several business leaders, including Apple’s Steve Jobs, sought his advice.
Mr. Grove played a critical role in the decision to shift Intel’s focus from memory chips to microprocessors and led the firm’s rise as a recognized consumer brand. Under his leadership, Intel produced the chips, including the 386 and Pentium, that helped foster the PC era, and the company grew its annual revenues from $1.9 billion to more than $26 billion.
Just as we could have ridden into the sunset, along came the Internet, and it tripled the significance of the PC.
Like many of the ingredients in a manufacturing organization’s computer technology alphabet soup, such as ERP, SCM, CRM, not to mention CAD, CAM, and CAE, product lifecycle management (PLM) for years has been touted as being the final frontier for integrating all manufacturing IT functions. Honestly, though, can it truly provide all that the various vendors are promising? I have asked myself that question for several years now: Is PLM a great hope or just another great and continuing hype?
It seems that every vendor defines PLM in the manner that best suits its existing product lines and business practices, and not necessarily the processes of the customers it is trying to serve. Therein lies a big part of the PLM problem. PLM should address processes and not just products, especially the vendors’ products. Too few vendors stress the processes they claim to improve over the products (and perpetual services) they are selling.
It also seems like everybody (yes, now including just about every CAD vendor big and small) has at least tried to get into the PLM act, regardless of whether they should based on their development and integration capabilities or the needs of their customers. Even database giant Oracle has said for years that it wants to be a major PLM player, although the company has implied that it doesn’t want to dirty its hands with traditional CAD/CAM stuff. Oracle wants to look at the bigger picture, although it has never elaborated on what that picture is.
In a major move last week, Autodesk and Siemens announced an interoperability agreement aimed at helping manufacturers decrease the huge costs associated with incompatibility among product development software applications and avoid potential data integrity problems. Through this agreement, Autodesk and Siemens’ product lifecycle management (PLM) software business will take steps to improve the interoperability between their companies’ respective software offerings. The agreement brings together two CAD heavy hitters with the common goal of streamlining data sharing and reducing costs in organizations with multi-CAD environments (and these days, who doesn’t have a multi-CAD environment?).
The interoperability agreement aims to decrease the overall effort and costs commonly associated with supporting these environments. In particular, the companies are hoping that interoperability between the offerings from Siemens and Autodesk will significantly improve the many situations where a combination of each other’s software is used. Under the terms of the agreement, both companies will share toolkit technology and exchange end-user software applications to build and market interoperable products.
“Interoperability is a major challenge for customers across the manufacturing industry, and Autodesk has been working diligently to create an increasingly open environment throughout our technology platforms,” said Lisa Campbell, vice president of Manufacturing Strategy and Marketing at Autodesk. “We understand that our customers use a mix of products in their workflow and providing them with the flexibility they need to get their jobs done is our top priority.”
“Incompatibility among various CAD systems has been an ongoing issue that adversely affects manufacturers worldwide and can add to the cost of products from cars and airplanes to smart phones and golf clubs,” said Dr. Stefan Jockusch, Vice President, Strategy, Siemens PLM Software. “Siemens has been at the forefront in helping to resolve this incompatibility issue with a wide variety of open software offerings that significantly enhance interoperability. This partnership is another positive and important step in our drive to promote openness and interoperability and to help reduce costs for the global manufacturing industry by facilitating collaboration throughout their extended enterprises.”
For as long as I can remember, cloud storage and computing have offered endless promises and perpetual growth. For a while that held true, but some things that have happened in the past couple of years temper those claims and may portend what lies ahead for technology providers that become increasingly reliant on the cloud – layoffs.
Cloud computing, or internet-based computing, provides shared processing resources and data to computers and other devices on demand. From the beginning it was intended as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort.
Proponents have always claimed that cloud computing allows companies to avoid upfront infrastructure costs, and focus on projects that differentiate their businesses instead of on infrastructure. Proponents have also claimed that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables IT to more rapidly adjust resources to meet fluctuating and unpredictable business demand. Cloud providers typically use a “pay as you go” model. This can lead to unexpectedly high charges if administrators do not adapt to the so-called cloud pricing model.
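The “pay as you go” risk described above can be sketched with a toy cost model. Everything here is hypothetical: the `monthly_cost` helper and all of the rates are made-up illustrations, not any provider’s actual pricing.

```python
# Toy illustration of pay-as-you-go cloud billing. All rates are
# invented for the example and do not reflect any real provider.

def monthly_cost(instances, hours_per_instance, rate_per_hour,
                 storage_gb, rate_per_gb_month):
    """Estimate one month's bill: metered compute plus metered storage."""
    compute = instances * hours_per_instance * rate_per_hour
    storage = storage_gb * rate_per_gb_month
    return compute + storage

# The workload an administrator planned for: 4 instances running only
# during business hours (8 hours/day, 22 workdays).
planned = monthly_cost(4, 8 * 22, 0.10, 500, 0.02)

# The same 4 instances accidentally left running around the clock
# (24 hours/day, 30 days) -- the classic source of surprise charges.
forgotten = monthly_cost(4, 24 * 30, 0.10, 500, 0.02)

print(f"planned:   ${planned:,.2f}")    # compute 70.40 + storage 10.00
print(f"forgotten: ${forgotten:,.2f}")  # compute 288.00 + storage 10.00
```

The point of the sketch is that nothing in the pricing model itself caps the bill; the meter simply keeps running, which is why administrators who don’t adapt their habits to metered billing see charges several times what they budgeted.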
To a large extent most of these claims have proven true, and I have been a proponent for many aspects of cloud computing, but there is also a downside – generally, you just don’t need as many people to run and maintain a cloud-based organization.
The downside is that you will have limited customization options. Cloud computing is cheaper because of economies of scale, and like any outsourced task, you tend to get what you get. A restaurant with a limited menu is cheaper than a personal chef who can cook anything you want. Fewer options at a much cheaper price: it’s a feature, not a bug. But the cloud provider might not meet your legal needs, so as a business, you need to weigh the benefits against the risks.
Last week, in Part 1, I ended the blog by saying that if you can’t fix something, you don’t own it. I still stand by that statement. This week I’ll continue the discussion for those of us who want some control over the devices we own and use, and not vice versa.
Just a couple of weeks ago, Bloomberg columnist Adam Minter asked in an article, “Why Can’t You Repair an iPhone?”
In the article, he says, “Imagine if Ford remotely disabled the engine on your new F-150 pickup because you chose to have the door locks fixed at a corner garage rather than a dealership. Sound absurd? Not if you’re Apple.”
You don’t truly own something that you can’t get into to modify or repair.
–Gathered from many wise sages over the years (especially the past 10)
I’ve got an iPhone 4S that’s a few years old, and I still love it. I like the size and the feel, and I’ve purchased a number of accessories designed specifically for it. I’ve also rescued it after dropping it in water, and I know how to replace the battery, as well as the glass back and the front screen. These self-repairs are officially no-no’s according to Apple, and they aren’t easy, but because I can repair the phone I still really like and keep it 100% functional, I intend to hold onto it until something happens that I can’t resolve, such as a surface-mount component failure.
I’m probably not like a lot of consumers, in that I don’t constantly need the latest and greatest. I’d rather maintain and repair what I have as long as I can. After all, I view my phone, cameras, and computers as tools that should be made to last – not precious possessions on the one hand, or mere throwaway items on the other.
My journey to fixing my own stuff started a number of years ago with an excellent resource called iFixit – a free online collection of repair manuals for tinkering with thousands of products. The goal of iFixit is to teach virtually anyone how to fix the stuff they own — ranging from laptops to snowboards to toys to cell phones. In other words, iFixit is part of a global network of “fixers” trying to make the stuff they own last as long as possible.
Makers put things together; fixers take them apart and rebuild them. Tinkerers are a little bit of both, and are much more than just consumers — they are participants in the things we make, own, and fix.
This might sound great, but over the years, I have found that this participation — tinkering with products made by others — puts both makers and fixers at odds with manufacturers.
It’s already mid-February, and with the first two months of the year well on their way to being history, it’s not too late to tell our readers what we’ll be covering for the remainder of 2016. The MCADCafe editorial calendar below reflects what we perceive as some of today’s most important topics, as well as feedback from our readers and other supporters about what they feel is important and relevant.
The main theme for each month will be covered in an extended article or series of articles so that the topic can be covered more comprehensively.
We’ll also be covering some of the major MCAD events throughout the year, reporting what we see and hear from vendors, partners, and attendees. All of the events we attend will include daily written coverage and Tweets throughout event days, as well as video and audio interviews.
If there is anything we missed or if you have any thoughts of topics or events you would like to see covered in 2016, feel free to contact me directly at email@example.com or 719.221.1867. I’m always open to suggestions and new ideas!
We look forward to an exciting 2016 and to providing you with the MCAD content you want most for improving your design, engineering, and manufacturing processes, as well as your top and bottom lines.
Keep MCADCafe.com your source for all things MCAD because 2016 promises to be a great year!
As the editor of MCADCafe, I am constantly on the lookout for innovative software and hardware products that make working life better for designers and engineers. While some of these products are truly unique, many are retreads and “me-toos” of existing offerings.
Lately, I’ve been especially watchful on the hardware platform front, because it doesn’t seem as compelling as it once was, largely due to the rise of cloud-based hardware and software services.
However, something really caught my eye last year – the HP Sprout – a computing platform that is truly unique because it is a desktop computer that also has an integrated 3D scanner for 3D object capture and editing, as well as 3D print options.
In a nutshell, the Sprout is a relatively high-end Windows 8 computer with a novel two-screen configuration and advanced cameras, which combined make possible some creative activities that conventional desktops can’t handle. The second display, projected onto a desktop touch-sensitive mat, is a major advance in the physical user interface for computers.