I still remember taking my first cell phone, an “indestructible” Nokia 3310, to the OEM service center to get it “flashed” when it started showing problems. The process took 3 days. A year or two later (when I upgraded to a newer model) I could do it all by myself with a USB data cable and desktop software from the device maker. Nowadays everything is done “over-the-air” – courtesy of FOTA (Firmware Over-the-Air) and a bunch of related technologies like FUMO (Firmware Update Management Object), SCOMO (Software Component Management Object), OTA (Over-the-air programming), etc., which enable remote operations such as installing, updating, or removing software components on a device such as a cell phone or a PDA.
Another common automatic update scenario most people regularly see is Windows Update. All you have to do is turn it on, and you will get the latest security and other important updates from Microsoft automatically. In most cases, the end-user does not have to do a thing. (This Wikipedia article gives an overview of the evolution of this facility from the Windows Update web site to the Critical Update Notification Tool/Utility and then to the current Automatic Updates.)
Automatic software updates are becoming an accepted part of many software products today – making it uncomplicated and effortless for users to always keep their software up to date, without the hassle of having to check for a new version, download it, and install it manually. Note that in both of the above cases, the technology has become so mature that there is almost no risk to the user’s data – and that, in both cases, the underlying technology is very complex.
Coming to the PLM world, it is a different story altogether. PLM upgrade projects can last from a few weeks to a few months, draining an IT department of financial as well as human resources. Agreed, enterprise solutions are not trivial to upgrade because of the various factors involved, but surely the PLM vendors need to do something to cut down the time and effort needed. The answer to this issue lies in both economic and technical factors.
I would like to believe that services revenues do not play a big role in stretching out upgrade projects. PLM vendors (and a whole bunch of service providers too) publicly advertise their strategies and best practices for implementing PLM upgrades (though most of them sound more or less alike). Though the figures are hard to come by, the revenue garnered from the services part of an upgrade project does add some level of bottom-line impact for the vendors. The services business is usually low margin (unless a large percentage of the work gets done offshore) – so even if the timelines are squeezed, it should not make that big of an impact on the vendors.
From a technical point of view, let us look at what a typical PLM stack looks like. The example below is of the Teamcenter architecture:
Assuming the requisite drivers are present – like leveraging new platform capabilities or addressing issues with the current PLM deployment – when an organization decides to go for an upgrade, there are several steps involved, including planning, upgrade assessment, impact analysis, etc., where there might not be much opportunity to compress the timelines. The main prospect for compressing the timeline is during the execution phase, and here is what I think vendors need to do:
- Evaluate Platform Bloat: Analyze whether the PLM platform has bloated over the years – featuritis/the second-system effect always makes it painful to upgrade easily.
- Improve the Performance of Upgrade Tools: Software efficiency halves every 18 months, compensating for Moore’s Law – May’s Law. A large amount of time usually goes into upgrading the database. There is a need to focus on the bottlenecks and look beyond the obvious optimization techniques – targeting algorithmic efficiency, memoization, etc.
- Pre-Upgrade DB Tools: A major part of the time in a database upgrade is consumed resolving issues with the data. Make tools available that customers can run well before the formal upgrade project starts. Let these tools give detailed reports on the problematic areas and on how customers can take care of them.
- Eliminate Manual Steps: Maximize automation and cut manual steps – this would also take care of human errors.
- Body of Knowledge: A body of wisdom (a domain ontology?) about upgrades could be made available. It would not necessarily make an upgrade easier or faster. As I noted earlier, there are published best practices for implementing PLM upgrades – the problem is that most of these best practices, honest as they are, have reached the point of platitude. There is a need to move beyond the clichés with upgrade do’s and don’ts grounded in practical customer project experiences.
- Handle Customizations Effectively: Each company has certain unique processes and practices that give it a competitive position – this necessitates customizing the PLM solution to a varying degree. Upgrading a highly customized deployment in effect requires a re-implementation and data migration, which is especially true for major global PLM installations. Tools need to be made available that customers can run beforehand to find, and if possible fix, potential issues in their code (like deprecated APIs, API changes, etc.).
- Automate Testing: Data validation and performance tests come in the end stages of an upgrade. The bulk of such testing can be done using automated scripts running 24x7.
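To make the memoization point above concrete, here is a minimal Python sketch. It is purely illustrative – the function name, the legacy-to-new type mapping, and the idea of resolving types row by row are my assumptions, not any vendor’s actual upgrade tool – but it shows why caching matters when a database upgrade resolves the same small set of lookups millions of times:

```python
from functools import lru_cache

# Hypothetical mapping of legacy type names to their new-version names.
# In a real upgrade this lookup might involve an expensive catalog query.
LEGACY_TYPE_MAP = {
    "Item Revision": "ItemRevision",
    "Legacy Dataset": "Dataset",
}

@lru_cache(maxsize=None)
def resolve_type(legacy_name: str) -> str:
    # Memoization: the (potentially expensive) resolution runs only once
    # per distinct legacy type name; every repeat is a cheap cache hit.
    return LEGACY_TYPE_MAP.get(legacy_name, legacy_name)

# A database upgrade touches millions of rows, but the rows share only
# a handful of distinct types - the cache absorbs almost all the work.
for _ in range(1_000_000):
    resolve_type("Item Revision")
```

The same trick applies to any repeated, deterministic step in an upgrade pipeline (schema lookups, permission checks, unit conversions), which is exactly the kind of non-obvious optimization the bullet above argues for.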
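As a sketch of the customization-handling bullet, the pre-upgrade check for deprecated APIs could be as simple as a source scanner. The API names and replacement hints below are invented for illustration – a real tool would ship the vendor’s actual deprecation list – but the shape of the report (file, line, API, hint) is the useful part:

```python
import re
from pathlib import Path

# Hypothetical deprecation list; a real scanner would load the
# vendor-published list for the target PLM version.
DEPRECATED_APIS = {
    "OLD_login_api": "replaced; use the new session API",
    "OLD_refresh_all": "removed; refresh objects individually",
}

_PATTERN = re.compile("|".join(re.escape(name) for name in DEPRECATED_APIS))

def scan_source(path: Path) -> list:
    """Return (file, line number, API, hint) for each deprecated call found."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
        for match in _PATTERN.finditer(line):
            findings.append(
                (path.name, lineno, match.group(), DEPRECATED_APIS[match.group()])
            )
    return findings
```

Run against a customization codebase months before the project starts, a report like this lets customers burn down the incompatibilities on their own schedule instead of discovering them mid-upgrade.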
While upgrading a PLM deployment the FOTA way might not be possible, I think there are many opportunities to address the pain points customers face during such an undertaking. And it would be worthwhile to make things easy and keep customers happy! Everyone loves easy!