Jyotirmoy Dutta works as a PLM consultant with more than a decade of expertise in PLM Strategy Consulting, Solution Architecting, Offshore Project Management and Technical Leadership. He has led several full life-cycle PLM implementations in the Consumer Products, Electronics & High Tech, Industrial Equipment and Medical Devices industries.
LinkedIn Profile
A couple of months back I was in a meeting with a group of client business process leaders when the topic of idiot-proofing their new PLM system came up. A pretty interesting discussion ensued for some time, which led me to think, read and eventually write about the topic. Idiot-proofing, or more formally fool-proofing, essentially means building products which can be used or operated with very little risk of breakage or failure, by predicting all possible ways that an end-user could misuse them, and designing the product to make such misuse unworkable, or at least to diminish the negative consequences. Related terms like Hardening, Defensive Programming, Bullet Proofing, Fault Tolerance, Gold-Plating, Human Proofing, Worst Case Scenario Proofing and Robustification also exist, all conveying essentially the same idea.
A related Japanese term used in the manufacturing industry is “Poka-Yoke” (pronounced poh-kah yoh-keh), from “poka” (inadvertent mistake) and “yoke” (to prevent), meaning “fail-safing” or “mistake-proofing”. “A poka-yoke is any mechanism in a lean manufacturing process that helps an equipment operator avoid (yokeru) mistakes (poka). Its purpose is to eliminate product defects by preventing, correcting, or drawing attention to human errors as they occur. The concept was formalised, and the term adopted, by Shigeo Shingo as part of the Toyota Production System or Lean Manufacturing.” [http://www.shmula.com/category/lean/poka-yoke/]. Mistakes are inevitable; people cannot be expected to concentrate all the time, or always to understand fully the directives they are given. Defects, on the other hand, result from allowing a mistake to reach the end-user, and are entirely avoidable. The goal of poka-yoke is to re-design/engineer the process so that mistakes are prevented, or immediately detected and corrected.
Why would you want to poka-yoke your PLM application?
So you have bought a commercial PLM application which has been reasonably well developed and well tested, and you have implemented it in your company. Why would you then want to poka-yoke (mistake-proof) your PLM application (apart from the regular testing that happens before it goes live)? Two reasons:
Reduce support calls after the system goes live: PLM administration is costly because it needs a highly trained/experienced person to administer the installation and troubleshoot issues – you do not want to burden the administrator with frivolous snags because end users are breaking stuff all the time and need help.
Garbage In – Garbage Out: The quality of the data in a PLM system is only as good as what goes in. As the book Universal Principles of Design says: “The garbage-in metaphor refers to one of two kinds of input problems: problems of type and problems of quality. Problems of type occur when the input provided is different from the input expected. Problems of quality occur when the input received is incorrect information in the correct type.”
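To make the type-versus-quality distinction concrete, here is a minimal sketch (in Python, with a purely hypothetical part-number rule – no real PLM API is implied) of poka-yoke style validation at the point of data entry:

```python
# Poka-yoke at data entry: reject garbage before it reaches the system.
# The "PRT-" + six digits rule below is purely hypothetical.

def validate_part_number(value):
    """Return (ok, reason), separating problems of type from problems of quality."""
    if not isinstance(value, str):
        # Problem of type: the input differs from the kind of input expected.
        return False, "type: expected a string, got " + type(value).__name__
    if len(value) != 10 or not value.startswith("PRT-") or not value[4:].isdigit():
        # Problem of quality: right type, wrong content.
        return False, "quality: correct type, but not in PRT-NNNNNN form"
    return True, "ok"
```

A device like this, placed on every entry path (UI, bulk import, API), is the software analogue of a jig that only accepts a part in the correct orientation.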
Poka-Yoke in Software Development vs. Poka-Yoke in Enterprise Software Implementation
Software developers like Harry Robinson in “Using Poka-Yoke Techniques for Early Defect Detection”, Gojko Adzic in “The Poka-Yoke principle and how to write better software” and Aaron Swartz in The Pokayoke Guide to Developing Software have advocated the use of poka-yoke in software development. However, enterprise software deployment/implementation is different – there are often far too many users and a plethora of permutations/combinations in which they can (or would) use/misuse the software. Such use cases are sometimes hard to predict upfront (while developing the software), hence the need to implement poka-yoke devices during real-life implementation. I cannot comment about users of other enterprise applications, but PLM users often work in situations necessitating substantial technical skills, where training/adoption or employee turnover cost is high, and where interruptions and distractions are all too common. Such settings result in human error (whether due to distraction, tiredness, confusion, de-motivation, lack of practice/training, uncertainty, lack of standardization, willfulness in ignoring rules or procedure, inadvertence or sloppiness is a different issue altogether), which might ultimately lead to GIGO and more support calls.
Poka-Yoke Before or After Implementation?
Mistake-proofing can be done upfront only to the point where it is known what mistakes might be made (which can happen only after thorough system testing). However, to add a good poka-yoke solution for a problem, the problem needs to be defined first (along with things like when, where and …)
A few days back there was this article in Reuters, “Samsung’s advanced TVs go missing en route to Berlin” – big deal? Well, presumably so – because it is suspected to be a case of industrial espionage: “… it may have been a theft aimed at stealing the advanced TV technology, whose loss could cost the firm billions of dollars.” To set the background, the sets went missing on their way to the IFA consumer electronics trade show, which opened to the public on Aug. 31 (and ran till Sept. 5) in Berlin. Samsung’s advanced OLED TVs being debuted at the show were touted as the successor to LCD TVs, with a rumored price tag of $10,000 for the 55-inch model.
Investopedia explains that “Industrial Espionage” “…describes covert activities, such as the theft of trade secrets, bribery, blackmail and technological surveillance. Industrial espionage is most commonly associated with technology-heavy industries, particularly the computer and auto sectors, in which a significant amount of money is spent on research and development (R&D).” The Independent, in its recent article “The art of industrial espionage”, puts this succinctly: “…in a world where the biggest corporations easily outstrip the GDPs of small nations, corporate intelligence is almost as grand a game as its government-run counterpart”. In the US the situation is so bad that the FBI has stepped in with a new campaign that targets corporate espionage. The Office of the National Counterintelligence Executive, in its October 2011 report to the US Congress on Foreign Economic Collection and Industrial Espionage, 2009-2011, reveals some startling figures:
Estimates on the losses (to the US economy) from economic espionage range so widely as to be meaningless—from $2 billion to $400 billion or more a year—reflecting the scarcity of data and the variety of methods used to calculate losses.
Germany’s Federal Office for the Protection of the Constitution (BfV) estimates that German companies lose $28 billion-$71 billion and 30,000-70,000 jobs per year from foreign economic espionage.
South Korea says that the costs from foreign economic espionage in 2008 were $82 billion, up from $26 billion in 2004.
The 2012 Data Breach Investigations Report by the Verizon RISK Team (with cooperation from the Australian Federal Police, Dutch National High Tech Crime Unit, Irish Reporting and Information Security Service, Police Central e-Crime Unit, and United States Secret Service) highlights some interesting details on this topic:
So what’s the point?
The reason I chose to highlight the security issue in this article is that many PLM champions espouse just “good enough” security for the PLM infrastructure or the application, and many IT managers don’t seem too bothered by that fact, which I think is not right. A PLM system holds information on the entire lifecycle of a product from its conception, through design and manufacture (and probably on to service and disposal) – you don’t want that data stolen away like it happened to American Superconductor Corp. or Renault. And if you think it cannot happen anywhere near home, read about the ACAD/Medre.A worm that steals AutoCAD designs and sends them to China.
When it comes to PLM security, there are a number of things to consider: securing the data (by implementing role-based access), securing the application as a whole, securing the database and even securing the data center. Last year I published a detailed post on various ways to affect this outcome – you can read it here. Essentially, several security standards exist (like the PCI SSC Data Security Standards and ISO/IEC 27001:2005), and companies should work towards securing their “bread and butter”.
I was reading this article in Slashdot over the weekend: “Big Surprise: Cloud Computing Still Surfing Big Hype Wave”. While referring to the hype cycle graph, it goes on to state: “In Gartner’s estimation, cloud computing has entered the Trough of Disillusionment stage of the Hype Cycle… Even as its hype fades, though, cloud computing can look further along the curve to the Slope of Enlightenment—when businesses discover the true utility of the technology, without the confusing hype and buzzwords—and then the Plateau of Productivity, where it can join predictive analytics as technologies people use without chattering incessantly about it on Twitter. Gartner believes that cloud computing and private cloud computing will reach their plateau of productivity in 2 to 5 years, while hybrid cloud computing will take closer to 5 to 10 years. At that point, the inflated expectations—and the screaming hype—should be a thing of the past.”
Cloud solutions are new to PLM – and there are a number of cloud PLM solutions in the market now. While the proponents have talked about improving the ROI of PLM by reducing implementation and maintenance costs, I do not find much being discussed in detail about the possible risks. I am not a “cloud hater”, but I think manufacturers need to think through the implications deeply before moving their PLM system into the cloud. It is important to note that cloud computing can refer to several different service types, including Application/Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). The risks and benefits associated with each model differ, and so do the key considerations in contracting for this type of service. I will cover five risks here, with a focus on PLM and Application/Software as a Service (SaaS). The NIST Definition of Cloud Computing says of Software as a Service (SaaS): “The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings”. So what are these risks?
1. Cloud Uptime
I think the biggest concern would be system uptime. How much would the business suffer if system uptime deviated considerably from the agreed SLAs? If you are one of those who needed to blog “Life of our patients is at stake – I am desperately asking you to contact” during the Amazon EC2 outage in 2011, then you need to think seriously about service interruption. While outages of mission-critical applications are almost never excusable and undoubtedly hurt the business, I think the biggest lesson from the outages of EC2/Azure etc. is the lack of real-time response and qualified explanations. A privately hosted system would have unscheduled downtime too, but in that case the organization’s IT staff would be much more in command of resolving it. What options would a cloud PLM vendor offer to offset any such business disruption? How much would it add in costs? It is of course to be expected that when moving from a system with guaranteed availability of 90% (downtime of 36.5 days/year) to one with 99% availability (downtime of 3.65 days/year) or 99.9% availability (downtime of 8.76 hours/year), costs would naturally increase. Would such costs be in line with the expected savings of going live with PLM in the cloud?
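The downtime figures quoted above follow directly from the availability percentage; a quick back-of-the-envelope helper makes the trade-off easy to tabulate:

```python
# Convert an availability SLA percentage into expected downtime per year.
def downtime_per_year(availability_pct):
    """Return (days, hours) of expected downtime for a given availability %."""
    days = 365 * (1 - availability_pct / 100.0)
    return days, days * 24

# 90%   -> 36.5 days/year
# 99%   -> 3.65 days/year
# 99.9% -> ~8.76 hours/year
```

Each extra "nine" cuts downtime by a factor of ten, which is exactly why SLA pricing climbs so steeply.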
2. Enterprise Application Integration
A few months back I had written about this topic: “Cloud Based PLM and Enterprise Application Integration” where I wrote “PLM being an upstream enterprise application (design usually preceding manufacturing/sales/procurement/service) needs to draw upon several collaborating systems…Typical application integration scenarios which are routinely met would include: CAx and Office Suite Integration, Legacy System Integration, and MRP/ERP Integration.” The current bunch of cloud based PLM systems seem to be lacking in addressing this aspect.
Apart from this, another crucial aspect to be evaluated is the ability to effectively manage complex, multi-CAD data. The system must be capable of integrating the BOM and enabling multidisciplinary 2D/3D visualization of such heterogeneous/multi-CAD data in a single product structure and make it easier for design teams to find, reuse, and synchronize accurate data with their MCAD/ECAD tools.
3. Data and Service Portability
Customers switching PLM platforms due to shifting business needs is not uncommon – such migrations need tools, procedures, standard data formats and service interfaces that promise data and service portability. In the case of cloud PLM, if there is a need to migrate from one provider to another, or to migrate data and services back to an in-house IT environment, then such options need to be validated. A few months ago Stephen Porter, in his “Zero Wait-State” blog, wrote about the harrowing experience one of his clients went through when migrating from a cloud-based PLM system, highlighting the fact that cloud providers may have an incentive to prevent (directly or indirectly) the portability of their customers’ services and data. Hence it would be prudent to know certain things in advance, and if possible in the form of a formal agreement:
→ How to get data back if you stop the subscription,
→ Availability of API calls to read (and thereby ‘export’) that data,
→ Any extra costs associated with exporting data (especially heavy CAD data),
→ Availability of data sanitization procedures (a.k.a. true wiping, secure erase etc.) after the client is no longer a tenant, and
→ Whether there is a guaranteed minimum download speed for data.
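On the second point – API-based export – the shape of an exit path is worth sketching. The code below is a generic sketch against a hypothetical paged read API; the `fake_fetch` stub stands in for real HTTP calls, and no actual cloud PLM vendor's API is implied:

```python
# Drain a paged read API into a local store, page by page.
def export_all_items(fetch_page, page_size=100):
    """Collect every item the provider's read API will hand back."""
    items, page = [], 0
    while True:
        batch = fetch_page(page, page_size)
        if not batch:          # an empty page signals the end of the data
            return items
        items.extend(batch)
        page += 1

# Stub standing in for a provider's paged endpoint (hypothetical data):
DATASET = [{"id": i} for i in range(250)]
def fake_fetch(page, size):
    return DATASET[page * size:(page + 1) * size]
```

The questions that matter contractually are exactly the ones this sketch glosses over: rate limits, per-call costs, and whether heavy CAD payloads are retrievable at all.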
4. Legal/Regulatory Risks
Over the past couple of years PLM vendors have substantially enhanced their regulatory compliance capabilities (ITAR, RoHS, WEEE, ELV or FDA 21 CFR Part 820). I have worked for a medical devices manufacturer and know that complying with some of these regulations is a tough task. Hence there are certain areas customers need to pay attention to when appraising contract clauses for cloud PLM services (though on a case-by-case basis):
→ Where will the data be physically located? Would access control to technical data be based on user citizenship, physical location etc., so as not to violate any ITAR or EAR restrictions? This is important from a jurisdiction perspective over data protection and ownership, and for law enforcement access.
→ Can the provider make available a full audit detailing technical data exports to satisfy regulatory compliance reporting requirements?
→ If the provider patches the system for software defects or upgrades it to a new release, can the customers in some way validate it in line with FDA guidance on software validation and avoid 483s and/or Warning Letters?
5. Supplier Stability
Some time back IndustryWeek, in an article “Understanding Risk: Avoiding Supply Chain Disruption”, noted: “A supply chain disruption can cost a manufacturer up to $5 million, irreparably harm a brand and drive customers straight to the door of a competitor.” Cloud PLM is still an emerging market – and as with any emerging market, supplier consolidation and business casualties (like bankruptcies) can happen. Acquisition of the cloud provider could amplify the chances of a strategic shift and may put non-binding agreements in jeopardy, while a supplier collapse, like the company ceasing to exist, has the potential to nullify any signed contracts. So what happens to the vital IP in the cloud PLM system in such cases? Source code and data backup escrow might offer some solace, though it is not likely to be the silver bullet.
An old Chinese proverb says “One cannot refuse to eat just because there is a chance of being choked” – likewise, the above are risks; with an appropriate risk management strategy in place they can surely be contained.
Over the weekend, while searching for home office furniture, I stumbled upon Herman Miller’s website. What I found interesting was that they had put up not only 3D models and CAD information (3DS and DWG files) but also an enormous amount of product-specific environmental information. For an example, look at the page for Mirra Chairs – on the right-hand side you can see how recyclable the product is, the Environmental Product Summary and Recycling Instructions, the LEED Calculator and various certifications including BIFMA level™, GREENGUARD and MBDC Cradle to Cradle. They also have an ecoScorecard calculator which lets the customer know how sustainable their products are. Apart from manufacturing location and environmental certifications, the scorecard also describes the environmental characteristics of the product, like pre-consumer recycled content, post-consumer content, and whether a life-cycle assessment was completed. Herman Miller’s efforts to promote sustainability have won it many awards, and the company is recognized as a leader in the ecodesign space.
Another company doing similar work is Apple. Apple reports environmental impact expansively and has “Product Environmental Reports” for all currently shipped and obsolete products, wherein particulars of the products’ environmental performance as it relates to climate change, energy efficiency, material efficiency, and restricted substances are documented. Further examples include AmazonGreen, a cross-category program that lists the products customers have selected as the best green products offered by Amazon.com, and Nike’s “Considered Design” products.
It is not very often that one sees such comprehensive environmental reports about a company’s products. Sustainability is not just a fad – customers are progressively demanding more sustainable products on one hand, and environmental regulations are getting more stringent on the other. Business thinkers like the late C.K. Prahalad advised (in the article “Why Sustainability is now the Key Driver of Innovation”) that “Sustainability isn’t the burden on bottom lines that many executives believe it to be. In fact, becoming environment-friendly can lower your costs and increase your revenues. That’s why sustainability should be a touchstone for all innovation”. Even governmental agencies like the U.S. EPA have programs like Design for the Environment to “help consumers, businesses, and institutional buyers identify cleaning and other products that perform well, are cost-effective, and are safer for the environment”.
Having worked with PTC’s former “Environmental Compliance Solution” a few years back, I know a thing or two here. Traditional tools in this space cannot effectively handle the product analytics requirements of complex products, whether for functional requirements like supplier declaration management, material/substance management, reporting, instantaneous compliance and environmental impact analytics, business process and system integration, and workflow and notification management, or for non-functional requirements like performance and scalability, internationalization and localization, usability etc. The need for manufacturers to embrace “design for environment” strategies and processes that enable them to more effectively and efficiently improve the environmental performance of their products is real, and they will need to identify a best-in-class, modern solution to help them meet that goal.
I have worked on PLM applications for the last decade, by and large on the implementation and customization side. Customizations have been, and often are, pigeonholed as the problem child of PLM. Many argue that the best way to put a PLM system into service is to implement it “out of the box”. However, when business processes in an organization cannot be effectively and efficiently modeled in a “vanilla” PLM system, the assessment of whether or not to customize becomes pertinent. I write this post based on my familiarity with PLM customization in diverse implementations.
Definition of Customization
I would conceptually characterize customization as an alteration put into place because the “vanilla software” PLM solution does not reflect the “desired” business needs. Changing packaged software to meet user needs is the essence of customization.
Decision to Customize
PLM systems are packaged software solutions, not custom-built systems. Packaged solutions traditionally involve software and/or services tailored to achieve a specific scope of work, and are intended to meet the broad-spectrum needs of a class of organizations rather than the unique needs of a particular organization, as is the case in custom software development. By adopting standard packages, organizations substantially reduce the costs, risks and delays associated with custom software development, and benefit from the ongoing support services provided for packages by vendors. Conversely, packaged solutions come with built-in assumptions and procedures about organizations’ business processes. These suppositions and rules seldom tie in exactly with those of the implementing organization’s existing processes. The so-called “industry best practices” embedded in PLM systems are hardly universal – the misfits between business requirements and PLM capabilities can be company-specific, industry-specific, or even country-specific. Any successful PLM implementation requires a fit between the PLM system and the organizational processes it supports. A divergence can have substantial bearing on organizational acceptance, and could be one of the main reasons for a PLM implementation failure.
For such misfits, customization is more often than not the way out. PLM system customization can happen either during the implementation phase, when the gaps are well known, or after roll-out, when the system is operational, as a response to changing business needs. Customization at times offers the ability to obtain competitive advantage vis-à-vis companies using only standard features. However, it comes with costs and risks. Hence the decision to customize is complex, and is made with a trade-off in mind.
Types of Customizations
I would categorize most of the customizations I have seen or done as follows:
1. Configuration Customization,
1.1. Pure Configuration Update, and
1.2. Bolt-Ons,
2. Process Customization (Workflow Programming), and
3. Technical Customization
3.1. Extended Reporting,
3.2. UI Customizations,
3.3. PLM Programming,
3.4. Interface development,
3.5. Package Code Modification
Let us look into each in more detail:
1. Configuration Customization: I would subdivide this customization into two sub types:
a. Pure Configuration Update: This type of customization involves setting of parameters, properties etc. Examples: Server settings, logging parameters etc. It usually is benign, and affects the application and sometimes the database layer. Such updates are vendor supported too.
b. Bolt-Ons: Bolt-ons are third-party packages from independent software vendors, designed to work with the PLM system (under license agreements with the original PLM vendor) to meet the needs of a particular customer segment and provide specific functionality. Examples: the ShapeSpace 3D Search tool for Aras PLM, which extends the searching capabilities of Aras to enable searching by shape, or FLUENT for CATIA V5, which brings fluid flow and heat transfer analysis into the CATIA V5 product lifecycle management (PLM) environment. The quality of the bolt-on depends on the quality of the relationship between the PLM vendor and the bolt-on developer. Note that there may be a “release lag”, where the bolt-on vendor is supporting an older release of the PLM system than the one the PLM vendor is currently offering to customers. This is likely to be an issue during a PLM system upgrade.
2. Process Customization: This type of customization involves workflow programming – the creation of organizationally unique workflows, or customized logic in standard workflows. Example: setting up an automated engineering change order approval process. This typically affects the application layer and/or the database layer.
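As an illustration of what such workflow logic encodes, here is a minimal sketch – hypothetical, and not any vendor's workflow engine – of an engineering change order that needs sign-off from a fixed set of roles before release:

```python
# Hypothetical ECO approval rule: released only when all required
# roles have approved; any rejection stops the change.
REQUIRED_APPROVERS = {"engineering", "quality", "manufacturing"}

def eco_state(approvals, rejections):
    """Derive the ECO state from the votes collected so far."""
    if rejections:
        return "rejected"
    if REQUIRED_APPROVERS <= set(approvals):   # all required roles signed off
        return "released"
    return "in-review"
```

Real workflow designers express the same rule graphically, but the customization effort lies in encoding exactly this kind of organization-specific logic.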
3. Technical Customizations: These are code changes that the vendor usually does not support and can be split into 5 categories:
a. Extended Reporting: This type of customization is the programming of extended data output and reporting options. Example: design a new report to show the health of each project and compare it to the time taken in each portion of the change management process for specific criteria. This typically affects the application layer and/or the database layer.
b. UI Customizations: This involves changing the UI look and feel or functionality, and changes to terminology and layout. Such customizations might occasionally add extra business-logic-specific UI validations too. I was consulting for a major medical devices manufacturer in 2006 – they had decided to upgrade their old PLM system by two major versions in a single upgrade. Fearful of adoption and training issues in their 7000+ user base, they resorted to reprogramming the UI to look as much as possible like the original system, even though the new system required fewer picks and clicks.
c. PLM Programming: I would define this as the programming of additional applications, without changing the product source code (using the computer language used by the vendor). For example, if there is a requirement to detect when a particular event happens in the lifecycle of a drawing or a product, and to take further actions based on that (which the PLM system doesn’t support out of the box), one would need to resort to such programming. This typically affects all layers.
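The pattern behind such event-driven extensions is a plain observer registry. The sketch below is generic Python – real PLM toolkits expose their own extension points and languages for this, so treat the names as illustrative only:

```python
# Register handlers against lifecycle events and fire them when the
# event occurs - the skeleton of an event-driven PLM extension.
_handlers = {}

def on(event, handler):
    """Subscribe a handler to a named lifecycle event."""
    _handlers.setdefault(event, []).append(handler)

def fire(event, payload):
    """Invoke every handler registered for the event; collect results."""
    return [handler(payload) for handler in _handlers.get(event, [])]

# Hypothetical usage: react when a drawing is promoted to "Released".
on("drawing.released", lambda d: "notify supplier about " + d["number"])
```

The customization work is in writing the handlers; the PLM system supplies (or should supply) the hook points.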
d. Interface Development: Programming of interfaces to legacy systems or third-party products when such interfaces are not available off the shelf. I have implemented interfaces with a custom-built MRP package and with an archaic CRM package. This typically affects the application layer and/or the database layer.
e. Package Code Modification: This type of customization is atypical, and involves changing the product source code – ranging from small changes to replacing whole modules or functionality. An example would be changing the system behavior when a CAD file is checked into the system from a CAD tool. In 2004 I was consulting for a Taiwanese OEM, and the PLM tool they were using was quite immature compared to their requirements – leading to a vast amount of changes in the underlying product. Product source code was not available – we had to decompile the code to make modifications. Needless to say, such customizations point towards a bigger issue, which I will discuss shortly.
Risks of Customization
The risk impact of customizations can be classified as follows – the deeper the customization, the higher the impact and likelihood of risk.
If customizations are built as part of a development effort during the implementation time frame, they may have bugs which can impede progress during the PLM implementation and thus jeopardize a successful roll-out (e.g. an overspent budget and an unreliable system due to poor quality of customization, unresolved system bugs and insufficient testing). Also note that there are supplementary end-user training and consulting costs for the adoption of customized code. By and large, less customization means shorter implementation times.
Over-reliance on heavy customization suggests a deeper problem – it can mean that, due to a poor PLM selection and evaluation process, the software is an ill fit for the business requirements.
Customization of PLM has maintenance and upgrade impacts too. The more complex a customization endeavor, the more likely it is to require greater maintenance effort post-implementation. Each time a change or upgrade is required to the system, the organization has to assess its effect on the customizations, as the software vendor will not support them. Many times this requires bringing in an external expert to help with the assessment. Once the system is upgraded, the customizations have to be manually updated too. These additional requirements reduce the flexibility and agility of the system and increase costs.
The following table presents an estimate, based on experience, of the effort needed during system maintenance and post implementation of different customization types.
Another associated issue is poor consultant effectiveness, which contributes to mediocre quality of code customizations (and associated documentation) and inferior knowledge transfer of the customizations to the organization’s IT resources. This in turn leads to higher maintenance costs. The better the knowledge consultants have of their PLM package, the more likely they are to address business objectives with light (and probably configuration-driven) rather than heavy (code-driven) customizations.
Configuration vs. Customization
Customers are almost always better off choosing a PLM system with an open architecture – one that can often be adapted with relatively uncomplicated configuration changes rather than code rewrites. It is vital to understand the key differences between customization and configuration to appreciate the benefits:
Wrapping up, organizations should steer clear of a dash to customize. It is to be expected that any new PLM system will have inadequacies, both factual and alleged. All workarounds and alternatives should be investigated before making a commitment to changing the system. It is important to take into consideration that though customization might bring a competitive gain, there are risks and costs associated with different types of customization. The heavier the customization, the riskier and costlier it becomes.
I still remember taking my first cell phone, an “indestructible” Nokia 3310, to the OEM service center to get it “flashed” when it started showing problems. The process took three days. A year or two later (when I upgraded to a newer model) I could do this all by myself with a USB data cable and desktop software from the device maker. Nowadays everything is done “over the air” – courtesy of FOTA (Firmware Over-the-Air) and a bunch of other related technologies like FUMO (Firmware Update Management Object), SCOMO (Software Component Management Object), OTA (Over-the-air programming) etc., which enable remote operations, such as installing, updating or removing software components in a device such as a cell phone or a PDA.
Another common automatic update scenario most people regularly see is of Windows Update. All you have to do is turn it on, and you will get the latest security and other important updates from Microsoft automatically. In most cases, the end-user does not have to do a thing. (This Wikipedia article gives an overview of the evolution of this facility from the Windows Update web site to the Critical Update Notification Tool/Utility and then to the current Automatic Updates.)
Automatic software update is becoming a standard component of many software products today, making it easy and effortless for users to keep their software up to date without the hassle of having to check for a new version, download it, and install it manually. Note that in both of the above cases, the technology has become so mature that there is almost no risk to the user’s data, even though the underlying technology in both cases is very complex.
Coming to the PLM world, it is a different story altogether. PLM upgrade projects can last from a few weeks to a few months, draining an IT department of financial as well as human resources. Agreed, enterprise solutions are not trivial to upgrade because of the various factors involved, but surely the PLM vendors need to do something to cut down the time and effort needed. The answer lies in both economic and technical factors.
I would like to believe that services revenues do not play a big role in stretching out upgrade projects. PLM vendors (and a whole bunch of service providers too) publicly advertise their strategies and best practices for implementing PLM upgrades (though most of them sound more or less alike). Though the figures are hard to come by, the revenue garnered from the services part of an upgrade project does add some level of bottom-line impact for the vendors. The services business is usually low margin (unless a large percentage of the work gets done offshore), so even if the timelines are squeezed it should not make that big of an impact on the vendors.
From a technical point of view, let us look at what a typical PLM stack looks like. The example below shows the Teamcenter architecture:
Assuming the requisite drivers are present, like leveraging new platform capabilities or addressing issues with the current PLM deployment, when an organization decides to go for an upgrade there are several steps involved, including planning, upgrade assessment and impact analysis, where there might not be much opportunity to compress the timelines. The main opportunity for compressing the timeline is during the execution phase, and here is what I think vendors need to do:
Improving Performance of Upgrade Tools: May’s Law states that software efficiency halves every 18 months, compensating for Moore’s Law. A large amount of time is usually spent while the database is being upgraded. There is a need to focus on the bottlenecks and look beyond the obvious optimization techniques, targeting algorithmic efficiency, memoization, etc.
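To illustrate the memoization idea, here is a minimal Python sketch. The function name and return value are hypothetical stand-ins for the kind of repeated metadata lookup an upgrade tool might perform; the point is only that caching collapses thousands of identical calls into one:

```python
from functools import lru_cache

call_count = 0  # tracks how often the "expensive" body actually runs

@lru_cache(maxsize=None)
def resolve_schema_version(table: str) -> str:
    """Hypothetical expensive metadata lookup that an upgrade tool
    might otherwise repeat for every row it migrates."""
    global call_count
    call_count += 1
    return table.upper() + "_V2"  # stand-in for a slow database round-trip

for _ in range(10_000):
    resolve_schema_version("item_master")

print(call_count)  # the underlying lookup ran only once
```

In a real upgrade tool the cached call would be a database or schema-registry round-trip, which is where this technique pays off most.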
Pre-Upgrade DB Tools: A major part of the time is consumed in upgrading the database and resolving issues with the data. Make tools available that customers can run well before the formal upgrade project starts. Let these tools give detailed reports on the problematic areas and how customers can take care of them.
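A minimal sketch of what such a pre-upgrade check could look like, using SQLite as a stand-in database. The table and column names are invented for illustration; a vendor tool would ship checks matched to its own schema:

```python
import sqlite3

def pre_upgrade_report(conn):
    """Sketch of one pre-upgrade check: flag rows that would break
    the upgrade (here, items whose owner record no longer exists).
    Table and column names are hypothetical."""
    issues = []
    cur = conn.execute(
        "SELECT i.id FROM items i "
        "LEFT JOIN users u ON i.owner_id = u.id "
        "WHERE u.id IS NULL")
    for (item_id,) in cur:
        issues.append(f"item {item_id}: dangling owner reference")
    return issues

# Demo with an in-memory database containing one bad row
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY);
    CREATE TABLE items (id INTEGER PRIMARY KEY, owner_id INTEGER);
    INSERT INTO users VALUES (1);
    INSERT INTO items VALUES (10, 1), (11, 99);
""")
print(pre_upgrade_report(conn))  # flags item 11
```

Because a check like this is read-only, customers could run it repeatedly in the months before the upgrade and burn down the issue list on their own schedule.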
Eliminate Manual Steps: Maximize automation and cut manual steps; this would also take care of human errors.
Body of Knowledge: A body of wisdom (a domain ontology?) about upgrades could be made available. It wouldn’t necessarily make an upgrade easier or faster. As I noted earlier, there are published best practices for implementing PLM upgrades; the problem is that most of these best practices, as honest as they are, have reached the point of platitude. There is a need to move beyond the clichés with upgrade do’s and don’ts grounded in practical customer project experiences.
Handle Customizations Effectively: Each company has certain unique processes and practices that give it a competitive position, and this necessitates customizing the PLM solution to a varying degree. A highly customized global deployment in effect requires a re-implementation and data migration, which is especially true for major global PLM installations. Tools need to be made available that customers can run beforehand to find, and if possible fix, potential issues with their code (like deprecated APIs, API changes, etc.).
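A toy sketch of such a customization scanner. The deprecated-API mapping here is entirely made up for illustration; a real tool would ship the actual deprecation list with each PLM release:

```python
import re

# Hypothetical mapping of deprecated API calls to their replacements;
# a vendor tool would ship this list alongside each release.
DEPRECATED = {
    "ITEM_create": "ITEM_create_with_type",
    "POM_start_session": "POM_init_session",
}

def scan_source(text: str):
    """Report deprecated API usages found in customization code."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for old, new in DEPRECATED.items():
            # \b plus the opening parenthesis avoids matching
            # longer names that merely start with the old name
            if re.search(rf"\b{old}\s*\(", line):
                findings.append((lineno, old, new))
    return findings

sample = "status = ITEM_create(name, desc);\nPOM_init_session();"
for lineno, old, new in scan_source(sample):
    print(f"line {lineno}: {old} is deprecated, use {new}")
```

Running this against a customization codebase before the upgrade turns “discover API breakage during testing” into “fix a known list up front”.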
Automate Testing: Data validation and performance tests come in the end stages of an upgrade. A bulk of such testing can be done using automated scripts run 24x7.
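One simple automatable validation is reconciling object counts captured before and after the upgrade. This sketch uses invented object types and counts; a real script would pull both sides from the old and new databases:

```python
def validate_migration(pre_counts, post_counts):
    """Compare per-type object counts captured before and after the
    upgrade; any mismatch is flagged for investigation."""
    mismatches = []
    for obj_type, before in pre_counts.items():
        after = post_counts.get(obj_type, 0)
        if after != before:
            mismatches.append((obj_type, before, after))
    return mismatches

# Hypothetical counts from a production database snapshot
pre = {"Item": 120_000, "Dataset": 450_000, "BOMLine": 2_300_000}
post = {"Item": 120_000, "Dataset": 449_998, "BOMLine": 2_300_000}
print(validate_migration(pre, post))  # [('Dataset', 450000, 449998)]
```

Checks like this are cheap to run repeatedly, which is what makes the 24x7 automated-testing approach above practical.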
While upgrading a PLM deployment the FOTA way might not be possible, I think there are many opportunities to address the pain points customers face during such an undertaking. And it would be worthwhile to make things easy and customers happy! Everyone loves easy!
This is going to be a very short post :) I was reading a few articles late at night this Memorial Day weekend, and two caught my eye. The first was “MySQL at Twitter” by Twitter engineers Jeremy Cole (@jeremycole) and Davi Arnaut (@darnaut). As one of the largest users of MySQL, Twitter uses the database software to store most of the data its 140 million users generate.
The second article was “MySQL and Database Engineering” by Mark Callaghan. Here are some interesting statistics about Facebook: “More than 125 billion friend connections on Facebook at the end of March 2012. On average more than 300 million photos uploaded to Facebook per day in the three months ended March 31, 2012. An average of 3.2 billion Likes and Comments generated by Facebook users per day during the first quarter of 2012. More than 42 million Pages with ten or more Likes at the end of March 2012.” And, to keep all this running, Facebook uses MySQL.
One of the often-heard complaints about PLM is that the investment needed is huge. Part of the blame falls on database licensing costs. To find out what database licensing costs look like, I tried to read Oracle’s Global Price List and their Software Investment Guide, as well as the SQL Server 2012 Licensing Overview, and ended up just getting bewildered. There are so many variants, like Unlimited License Agreements, Processor licensing, Standard Edition per-socket licensing, Enterprise Edition per-core licensing, Named User Plus licensing, Application Specific Full Use licensing, etc. I am pretty sure an IT Manager just ends up getting baffled as well!
So why not just switch to MySQL? There are two aspects to this:
Does the PLM vendor support MySQL?
Does the IT department have MySQL DBAs?
I am not sure which PLM vendors do not support MySQL yet, but it should not be that hard for them if the demand is there from end customers. Also, with experts from the likes of Facebook and Twitter open-sourcing their tweaks to MySQL, it shouldn’t be that problematic to get optimal performance out of MySQL, provided an IT department has MySQL DBAs.
As a side note, Sun Microsystems bought the privately held open-source database maker MySQL in 2008, and when Oracle bought Sun in 2009, MySQL came with the acquisition. So Oracle owns MySQL!
I would like to know from my readers what prevents MySQL from being extensively used in a corporate environment.
I write this post based on my recent experience of buying two products from a leading retailer and then returning them both after scarcely being able to use either one. It is a common scenario every one of us experiences regularly.
The first product was a home water filter made by a Fortune 100 chemicals major, and the second was a piece of furniture made by a leading producer of “ready to assemble” residential furniture.
In the first case, I simply could not get the product to work as expected. After tinkering with it for an hour or so, I headed to their website and was extremely dismayed: no product support or self-help was available, there was a long wait time to get through to customer care, and there were no FAQs on what could go wrong and how to fix such issues. It seemed like the “big company” had forgotten its retail customers, or was not very inclined to serve them. Therefore, I took my receipt and headed back to the store.
Putting PLM in the cloud is not enough – What matters most is enterprise application integration.
Cloud Based PLM
There have been a number of announcements lately about putting PLM in the cloud. It started with Dassault making its V6 platform available on AWS last year; it gathered much more steam with Arena’s launch of PDXViewer and, most notably, Autodesk’s launch of PLM 360. I haven’t personally tested PLM 360 yet, but going by the reviews of Deelip and Oleg, the end-user experience is pretty great.
It remains true that an on-premise PLM implementation is an expensive and time-intensive process, requiring software licenses and a considerable infrastructure and consulting investment. Cloud PLM solutions, on the other hand, are maintained by the software provider, which means set-up is easy and no internal resources are needed for updates/upgrades, enabling manufacturers to see faster returns on investment. Also, as Michael Driscoll notes: “The cloud is a more fault-tolerant and flexible operating system than its predecessors. These two advantages derive from the cloud’s two hallmark features: it is both virtualized and distributed. Because it’s virtualized, failing hardware can be upgraded or swapped out, and virtual processes can be migrated to new machines with little end-user impact. Because it’s distributed across thousands of commodity boxes, services’ compute and bandwidth needs can be scaled up or down, and disk storage limitations are almost an anachronism.”
Readers following the Consumer Electronics Show (CES 2012) at Vegas this year would have unmistakably noted car makers showing off their latest and greatest gizmos. As MSN noted in its editorial: “Audi, Chrysler, Ford, Kia, Mercedes-Benz and Subaru all used North America’s largest trade show to demonstrate advances in in-car infotainment, showcase next-generation alternative-powertrain vehicles and offer conceptual visions of how technology will power cars not only a few years from now, but well into the future.” And most of the latest innovation in automobiles is being done using software. Wired magazine in its article “Software Takes On More Tasks in Today’s Cars” notes “According to one study, 90 percent of the innovation we’re seeing within the auto industry is driven by advancements in software and gadgetry.” IEEE Spectrum ran an article some time back titled “This Car Runs on Code” where it put out some in-depth statistics: “It takes dozens of microprocessors running 100 million lines of code to get a premium car out of the driveway, and this software is only going to get more complex.”