Archive for November, 2020
Sunday, November 15th, 2020
The digital representation of production plants is a prerequisite for material flow simulations and the analysis of bottlenecks when planning or making changes to the plants. PROSTEP now offers its customers a new service for this purpose: The automated analysis of 3D scan data and conversion of this data into digital twins that can be used to simulate production processes.
Up until now, a considerable amount of effort was needed to create digital twins for existing plants, which made their use difficult for small and medium-sized companies in particular. As part of the DigiTwin research project, the Institute of Production Engineering and Machine Tools at the Leibniz University of Hanover, together with PROSTEP, isb – innovative software businesses and Bornemann Gewindetechnik, has developed a service concept for deriving simulation models from scans of the factory floors largely automatically. The project, which is being funded by the “SME innovation: Service research” initiative of the German Federal Ministry of Education and Research, is nearing successful completion.
The aim of the research project was to use object recognition to convert, with a maximum of automation, the 3D scan data from production into digital models that can be used to perform simulations. Both standard scanners and stereo image cameras were tested for use as devices for scanning the systems. The experts from PROSTEP’s data management team transformed the “dumb” point clouds of machines, robots and transport equipment into “intelligent” CAD models that can then be used to simulate the manufacturing processes.
Strictly speaking, the scan data, or the mesh geometry derived from it, was not converted directly into CAD models. It was first analyzed using artificial intelligence and machine learning methods to identify plant components for which simulation-capable CAD models already exist. Setting up the corresponding library was a key part of the project. Only if there was no equivalent in the library was the scan data converted into CAD models, parameterized using feature recognition methods and kinematically prepared for simulation. PROSTEP's experts develop the CAD models in SolidWorks; however, they can be output for any CAD system.
The objects on the factory floor were divided into seven categories to make classifying them easier. A so-called bounding box for each object was used to accurately determine the position and orientation of the objects in space and to check the results of object recognition. Depending on the category, 80 percent of the objects could be identified automatically and stored with the corresponding CAD models, which dramatically reduced the effort needed to create the digital twins for material flow simulations.
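The position and orientation check described above can be illustrated with a minimal sketch: fitting an oriented bounding box to a point cloud via principal component analysis. This is an assumption about one plausible approach, not the actual DigiTwin implementation; all function names here are illustrative.

```python
import numpy as np

def oriented_bounding_box(points: np.ndarray) -> dict:
    """Estimate an object's position and orientation from its point cloud
    using a PCA-based oriented bounding box (illustrative sketch only)."""
    center = points.mean(axis=0)
    # Principal axes of the centered cloud give a plausible orientation
    _, _, axes = np.linalg.svd(points - center, full_matrices=False)
    local = (points - center) @ axes.T  # coordinates in the local box frame
    mins, maxs = local.min(axis=0), local.max(axis=0)
    return {
        "center": center,        # object position in space
        "axes": axes,            # orientation (rows = principal directions)
        "extents": maxs - mins,  # box dimensions along each local axis
    }

# Example: a box-shaped cloud (4 x 1 x 0.5) rotated 45 degrees around z
rng = np.random.default_rng(0)
cloud = rng.uniform([-2, -0.5, -0.25], [2, 0.5, 0.25], size=(1000, 3))
theta = np.pi / 4
rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                [np.sin(theta),  np.cos(theta), 0],
                [0, 0, 1]])
obb = oriented_bounding_box(cloud @ rot.T)
```

The recovered extents approximate the original box dimensions regardless of the object's rotation, which is what makes such a box useful for checking object recognition results.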
The sustainability of the innovative service concept from the DigiTwin project is guaranteed: PROSTEP has now expanded the range of services offered by its OpenDESC.com data transfer and conversion service to include the automated analysis of 3D scan data.
By Josip Stjepandic
Friday, November 13th, 2020
With OpenCLM, PROSTEP presents a lightweight, easy-to-configure web application for traceability and cross-discipline configuration lifecycle management. This makes it easier for companies to assess project progress in the individual domains during system development and trace the development steps and deliverables.
Traceability and configuration lifecycle management (CLM) are major challenges for companies in the manufacturing industry. They are not only important when it comes to meeting legal requirements but also for getting to grips with increasingly complex development projects. If they do not know the current status of the development steps and the maturity of, and dependencies between, the deliverables, the people involved in development often have to spend a great deal of time and effort on coordination or laborious fact-finding before they can make informed statements about the progress of a project, assess the impact of changes and implement changes efficiently. This can have a significant negative impact on efficiency. The traceability of the relationships between data from the operating phase (digital twin) and valid configurations of the associated digital models (digital master) is also a prerequisite for correctly locating the source of errors (defects) in systems and identifying their cause faster.
In general, ensuring traceability and CLM involves a considerable amount of time and effort for collecting and linking data, which in practice is distributed across numerous different data silos. This is why holistic solutions and platforms from individual providers do not fulfill requirements relating to traceability. OpenCLM bridges the heterogeneity of the IT system landscapes (ALM, PDM, ERP, etc.) with the help of OpenPDM connectors and also provides components for integrating process, project, task, maturity, configuration, change and release management.
The extensible data model maps the most important industry-specific standards and maturity models and provides the basis for creating reusable process and project templates. The latter ensure consistent, traceable project and product documentation that complies with standards such as DIN ISO 10006, ISO 9001 / IATF 16949, ISO 13485, (Automotive) SPICE, ISO 26262, ISO 10007, ISO 15288, EN 9100.
OpenCLM displays the data linked from different sources, together with metadata such as status, change date and owner, in a clear and concise cockpit so that it can easily be compared with or linked to other data statuses. Baselines, i.e. views of the data statuses that are valid at a given time together with the link information, can be generated at certain points in the product development process. If necessary, these baselines can be exported, exchanged or archived in standardized formats such as 3D PDF or STEP.
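The baseline concept described above can be sketched as a frozen snapshot of item revisions plus the links between them. This is a simplified illustration of the idea, not OpenCLM's actual data model; all class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class LinkedItem:
    """Reference to an object managed in one of the connected systems."""
    source_system: str  # e.g. "ALM", "PDM", "ERP"
    item_id: str
    revision: str
    status: str
    owner: str

@dataclass
class Baseline:
    """View of the data statuses valid at a given time, with link info."""
    name: str
    created_at: datetime
    items: list = field(default_factory=list)
    links: list = field(default_factory=list)  # (from_id, relation, to_id)

def capture_baseline(name: str, items, links) -> Baseline:
    """Freeze the currently valid item revisions and their links."""
    return Baseline(name, datetime.now(timezone.utc), list(items), list(links))

# A requirement from ALM linked to the CAD part that implements it
req = LinkedItem("ALM", "REQ-42", "B", "released", "j.doe")
cad = LinkedItem("PDM", "PRT-1001", "3", "in work", "m.mueller")
bl = capture_baseline("Quality Gate 2", [req, cad],
                      [("REQ-42", "implemented_by", "PRT-1001")])
```

Because the baseline stores only references and link information, later changes in the source systems do not alter what was valid at the quality gate.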
OpenCLM also coordinates distributed and cross-domain changes. A distributed change process can trigger subordinate, domain-specific change processes or, if these already exist, it can be linked to them. If the change process refers to a baseline, the solution also makes it easier to define the scope of the change and provides support for impact analysis. The range of functions that OpenCLM offers not only provides companies with support for certification and audits, but also for planning and controlling complex, distributed development projects, validating maturity or improving product quality and error management.
Further information about OpenCLM is available at openclm.prostep.com
By Fabrice Mogo Nem
Wednesday, November 11th, 2020
The increasing complexity of products and processes is making it more and more difficult for companies to keep track of the current status of their development projects and seamlessly trace all development steps. The traceability of all relevant information across system and domain boundaries not only requires new tools but also opens up new perspectives. The newly created knowledge about relationships can be used over the entire product lifecycle through to the operating phase for new use cases and business models. This makes traceability the key to digitalization.
Traceability refers to the ability to trace at any time how the requirements placed on a system have been implemented, simulated and validated and which artifacts are associated with which requirements. Although this is not a new topic, the trend towards smart products means that it has grown enormously in importance. Electronics and software control an ever increasing number of safety-critical functions, not only in automobiles but also in other products, and these functions need to be validated virtually. One example would be autonomous vehicles. Testing every imaginable driving situation on the road would be far too time-consuming and risky. If their vehicles are to be certified for use on the road, carmakers and their system suppliers must, for example, be able to provide detailed proof of which situations they have simulated under which circumstances and with which tool chains.
Traceability is not only an issue for companies in the context of functional safety and compliance with the associated traceability requirements; it also plays a key role when it comes to digitalizing their business processes. Companies in every industry are trying to design their processes more consistently and therefore have to cope with a growing flood of digital information. The real challenge is not managing the digital information but rather managing the relationships and dependencies between the individual information objects. It is, for example, impossible to reliably assess the impact of changes without this knowledge about relationships.
Especially in complex development projects, e.g. in the shipbuilding industry and the mechanical and plant engineering sectors, project participants from different disciplines and domains today need to expend a great deal of communication effort to determine which data and documents correspond to the current development status – and all the more so since they often belong to different organizations. At certain milestones or quality gates, they have to prepare and collate the deliverables in what is a largely manual process in order to gain an overview of the progress being made in a project and possible deviations from the planning status.
Traceability is becoming increasingly important in the context of the digital twin and support for new, data-driven business models through digital twin applications. It establishes the link between the digital master or digital thread and the digital representative of the product that is actually delivered and provides the basis for enabling information from the operating phase to be fed back into and reflected in product development. Without this link, it is impossible – or takes a great deal of effort – to trace errors that occur during operation back to their possible roots in the development process, analyze them and thus eliminate them faster.
There are therefore many good reasons to explore the question of how to ensure traceability as efficiently as possible. Traceability is an essential prerequisite for providing evidence of compliance with the relevant standards and maturity models. It ensures greater transparency in the interdisciplinary product development process and plays a key role in speeding up product development and improving competitiveness thanks to innovative services. This benefits not only product developers, project managers, quality managers and service technicians, but ultimately also a company’s partners and customers.
Today traceability is made more difficult by the fact that the different disciplines and domains create and manage their information objects and development artifacts using hundreds of different IT systems, which are often only integrated in a rudimentary form. System landscapes are also changing very dynamically because new products call for new and better technologies for their development and production. Therefore a key requirement for any solution for ensuring traceability is that it functions independently of the IT systems used.
It is our opinion that traceability can no longer be ensured using conventional integration approaches – at least not with an acceptable level of effort. Instead of replicating the relevant data in a higher-level system, the approach that we are pursuing involves the lightweight linking of information objects located in different source systems. What is crucial here is that we do not generate the links retrospectively but rather determine from the start which information objects are to be related to each other and how, while at the same time taking account of the relevant standards and maturity models. This is the fundamental difference between our approach and other linking concepts. Please contact us if you would like to learn more.
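The linking approach described above can be sketched in a few lines: relations between object types are declared up front (reflecting the relevant standards), and links store only references into the source systems rather than replicated data. This is an illustrative sketch of the concept under stated assumptions, not PROSTEP's implementation; the schema and names are invented for the example.

```python
# Allowed relations are declared from the start, per the applicable
# standards and maturity models, so links are planned rather than
# reconstructed retrospectively (hypothetical schema for illustration).
LINK_SCHEMA = {
    ("requirement", "test_case"): "verified_by",
    ("requirement", "cad_part"): "implemented_by",
    ("cad_part", "simulation"): "validated_by",
}

class TraceLink:
    """Lightweight link between objects that remain in their source systems."""
    def __init__(self, src_type: str, src_ref: str,
                 dst_type: str, dst_ref: str):
        relation = LINK_SCHEMA.get((src_type, dst_type))
        if relation is None:
            raise ValueError(f"no planned relation {src_type} -> {dst_type}")
        # Only references are stored; no data is replicated here
        self.src_ref, self.dst_ref = src_ref, dst_ref
        self.relation = relation

# Link a requirement in the ALM system to a part in the PDM system
link = TraceLink("requirement", "alm://REQ-7", "cad_part", "pdm://PRT-99")
```

Rejecting unplanned link types at creation time is what distinguishes this from retrospective linking: the trace structure is designed before the artifacts exist.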
By Karsten Theis
Sunday, November 8th, 2020
Setting up an IT landscape from scratch is a challenge, but at the same time it offers an opportunity to do things completely differently. OSRAM Continental took advantage of this opportunity and moved its entire IT infrastructure to the cloud. The two-year-old joint venture uses PROSTEP’s cloud-based data exchange service to exchange product data with customers and suppliers.
Intelligently networked lighting that automatically adapts to the driving situation and improves communication between the driver, the vehicle and the environment is the future of automotive lighting. OSRAM Continental’s mission is to shape this future. The joint venture between OSRAM and Continental was set up in the middle of 2018, with each company holding a 50-percent stake. It combines the expertise and experience of the two parent companies in the fields of lighting, electronics and software.
With a workforce of 1,500, the joint venture develops, manufactures and markets solutions for front and rear headlights, individually controllable interior lighting, and innovative projection systems that provide greater driving safety and comfort. In the future, they will play a key role – especially when it comes to the safety of autonomous driving. Networked light control units that link the different sensor signals from a vehicle with information from other vehicles or the environment provide the basis for this type of intelligent lighting concept.
OSRAM Continental is headquartered in Munich and maintains a presence at 15 locations in nine countries worldwide. Product development is distributed over Europe, America and Asia, with the largest European development site situated in Iaşi, Romania. The mechanical engineers work primarily with CATIA, but they also use other CAD systems that are connected to the PLM solution SAP PLM via SAP ECTR depending on the project and customer requirements involved. Most of the applications run in a virtual desktop infrastructure, i.e. only views are streamed to the users’ screens.
State-of-the-art IT infrastructure
“With the exception of a few applications, everything runs in the cloud. We wanted a state-of-the-art IT infrastructure,” says Catalina Man, Team Lead IT Operations at OSRAM Continental and, together with her team, responsible for providing support to OpenDXM GlobalX users, among other things. “The biggest hurdle encountered on the way to the cloud was changing the employees’ mindset. We had to convince them that cloud services work just as well as solutions that are installed locally. The issue of security was also a challenge. It can’t be left to the cloud provider alone but instead requires a team that concerns itself with the network and infrastructure. The limited human resources available to support the cloud environment were therefore another challenge.”
In line with its general cloud strategy, OSRAM Continental decided to use the OpenDXM GlobalX data exchange platform from the cloud. As Catalina Man says, there were a number of reasons for choosing PROSTEP’s SaaS solution. “We needed a solution for all the locations that could be implemented quickly and which we could use to securely exchange not only CAD files but also, for example, product marketing videos. We wanted to work with well-known providers, and we were familiar with PROSTEP from our parent company Continental. We also knew that the company offered its data exchange platform from the cloud and then discussed our requirements. It ended up that OpenDXM GlobalX was the best fit for us because the software is very flexible and can be implemented quickly.”
The SaaS solution is installed in the cloud infrastructure provided by the DARZ data center in Darmstadt, which has been certified by the Federal Office for Information Security (BSI) in accordance with CIP (Critical Infrastructure Protection) and meets all the requirements stipulated within the framework of DIN/ISO 9001 and 27001 and the European General Data Protection Regulation (GDPR). With its state-of-the-art architecture, infrastructure and building technology, DARZ ensures the highest possible level of protection and availability of data. Catalina Man confirms that all OSRAM Continental’s locations access the cloud infrastructure provided by the Darmstadt data center directly via the Internet and that response time behavior is good.
Integration of an OFTP application
The SaaS solution is multi-client capable and is used as a multi-tenant application by numerous other customers. OSRAM Continental, however, opted for an instance of its own because it exchanges large volumes of data with carmakers using the OFTP2 protocol, which is why PROSTEP integrated T-Systems’ OFTP application rvsEVO into the customer’s data exchange service. It automatically prepares the data to be exchanged for OFTP2 communication when the corresponding recipients are selected. For data protection reasons and due to technical restrictions, however, it can only be used in combination with a private cloud or a dedicated cloud.
Aside from the OFTP integration, users can use the SaaS solution practically “out of the box”, which makes updates easier. “The software supported almost all our use cases from the word go,” says Catalina Man. PROSTEP implemented an important adaptation for OSRAM Continental that has already been incorporated in the standard application. The size of the WebSpaces for individual users and user groups can be defined individually within the storage quota for the licensed number of users and can also be changed. This was previously technically feasible but had to be performed by PROSTEP support staff. Now customer administrators can do this themselves using the intuitive web interface.
Approximately 250 internal and almost 100 external users are currently registered as exchange partners at OSRAM Continental. The internal users are primarily R&D engineers, but an increasing number of employees from other departments are also sending and receiving sensitive data securely via the cloud platform, which logs all exchange processes in a way that ensures they can be traced. The solution has registered over 6,000 uploads and downloads involving a data volume of more than 500 gigabytes this year alone.
Intuitive web interface
All key data exchange functions are made available to users via an HTML5-based web interface. With the help of external user interface design specialists, PROSTEP has made this interface more intuitive and ergonomic so that even occasional users can use the application without the need for regular training courses. “The new interface has made the application much easier to use,” says Catalina Man. “At first users had a lot of questions, which is why we worked hard to ensure that they understand the tool and feel comfortable using it. We asked PROSTEP to expand existing documentation to include easy-to-understand explanatory videos for example.”
Although the data exchange service is primarily used by developers, OSRAM Continental has not integrated the SaaS solution directly in its PLM environment even though this is technically feasible. “We decided to first make sure that the application is stable for the users,” says Catalina Man. Engineers normally export their CAD data from SAP PLM or SAP ECTR to an appropriate directory, log in to OpenDXM GlobalX using the Web Client, select the files to be exchanged and the respective recipient, and upload them to the platform. Both the files and the exchange processes are encrypted, thus ensuring a high level of security.
Employees who like working with MS Outlook and use it extensively can now initiate data exchange directly from their e-mail program. At the beginning of this year, OSRAM Continental activated the Outlook integration – which is actually a multi-cloud integration because the Office programs run in a different cloud environment – for certain users. Catalina Man says that although connecting across cloud boundaries isn’t a problem, it requires the installation of additional software on the PCs, which is why most users cannot install the integration themselves.
Falling total cost of ownership
The main benefit of the SaaS model for OSRAM Continental is the fact that the company did not have to deal with purchasing and implementing hardware and software. This meant that the data exchange solution was able to go live quickly. It can be scaled up or down as the number of users increases or decreases. Fewer IT administration and support staff are required, if any. Maintenance of the IT infrastructure and software updates are included in the price, which reduces the total cost of ownership or at least makes it easier to calculate. In a new company, where the entire IT organization has yet to be established, internal resources are scarce. A cloud-based, out-of-the-box solution is therefore a perfect fit.
“For me, the key advantage of the SaaS solution is its flexibility, which makes it possible to respond to new requirements quickly,” says Catalina Man, who is very happy with the support PROSTEP provides and the quality of the support. Review meetings, at which the experts from PROSTEP explain new features and make note of new requirements, are held twice a year following the updates. “The team is very flexible and implements our requirements quickly,” explains Catalina Man in conclusion. “That is crucial to the success of our collaboration.”
By Nadi Sönmez
Wednesday, November 4th, 2020
It is necessary to take a holistic view of the development, production and operation of complex product-service systems. The Fraunhofer IAO is developing technologies and methods to do this as part of the strategic research program Advanced Systems Engineering (ASE). The director of the institute, Professor Oliver Riedel, describes the scope of the approach and the challenges posed by implementation.
Question: Professor Riedel, what is meant by advanced systems engineering and what makes it different from systems engineering?
Riedel: In the past we haven’t been able to establish systems engineering in such a way as to deliver really good solutions for the combination of product, process and service. Systems engineering works reasonably well for mechatronics and software engineering but often fails in industrial practice because of the complexity of the approach and the organizational structures it requires. Advanced systems engineering (ASE) is designed to address precisely these issues and to better support companies when it comes to implementation.
Question: Does the term ‘advanced’ refer to systems or to engineering?
Riedel: It can be understood as a triangle that brings together the three aspects of advanced systems, systems engineering and advanced engineering. Advanced systems describe increasingly complex and networked market services, systems engineering describes the coordination and structuring of the cross-functional, interdisciplinary development of complex systems, and advanced engineering deals with best practices with regard to methods and tools in engineering, as well as agile approaches and creativity techniques. The aim is to break down domain silos and enable interactive collaboration in both product engineering and production, in other words to achieve a holistic view of the innovation processes.
Question: Are you placing the primary emphasis with ASE on dovetailing product development with production and production planning?
Riedel: Not quite, we are going even further. After all, products and processes are still largely developed in-house. In the case of product-service systems, the service is provided after the start of production, when the product is already on the market. And things that change the product without any physical add-ons, such as big data analytics or product updates over the air, play a role here. The system must be described as a coherent whole, in order to be able to use it in product development, virtual try-out, the digital factory and, most importantly, in the field of service.
Question: And what does that mean in concrete terms?
Riedel: Let’s take Homag and its highly complex custom systems for the woodworking industry as an example. The company has adopted an ASE approach, in order to achieve one hundred percent mapping of the digital twin. However, this twin does not reside in the development department but instead accompanies production virtually. If the owner of the machine wants to use the machine to run a new production program, they can try it out virtually on their system. The digital twin is used as a service during operation.
Question: Implementing systems engineering in development is already a complex task. Are you not merely compounding the complexity by integrating production and service?
Riedel: Of course, the notion of service is an additional dimension but that doesn’t necessarily make things more complicated; it simply brings together those processes that are still segregated today. There must be a single source of truth for the entire system. In other words, the system model must be linked with the service structures when the system is really running. Nowadays, we don’t get the data back from the field so that we can map it to the product and offer additional services. But the product lifecycle doesn’t come to an end when the product is delivered. We need redundancy-free storage of data throughout the entire lifecycle, despite the user having a different view of this data from that of a developer.
Question: In principle, ASE requires that everything should be defined from the outset. Doesn’t that clash with the philosophy of the agile approach?
Riedel: No, I don’t think so. In the model-based approach we have for simulation technology what is known as black boxing. I can create certain components as black boxes with inputs and outputs without having defined them in full detail, either because I don’t yet know the solution or because I’m not interested in it at present. I don’t need to know the internal workings of each black box from the word go. I just know that it has to be there. If you apply this paradigm broadly, you easily get to agility. The only question is whether there are enough description languages that can cope with the various modeling depths in the simulation.
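The black-boxing paradigm Riedel describes can be sketched as a component that declares only its interface, with the internal behavior supplied later. This is a simplified illustration of the general idea, not any specific simulation tool's API; all names are invented for the example.

```python
class BlackBox:
    """Simulation component defined only by its inputs and outputs;
    the internal behavior can be filled in later without changing
    the surrounding system model (illustrative sketch)."""

    def __init__(self, name, inputs, outputs, behavior=None):
        self.name, self.inputs, self.outputs = name, inputs, outputs
        self.behavior = behavior  # unknown or deliberately undefined for now

    def step(self, **signals):
        missing = set(self.inputs) - signals.keys()
        if missing:
            raise ValueError(f"missing inputs: {missing}")
        if self.behavior is None:
            # Interface exists, internals not yet defined
            return {out: None for out in self.outputs}
        return self.behavior(**signals)

# Wired into the system model before the controller is actually designed
controller = BlackBox("speed_ctrl", ["setpoint", "actual"], ["command"])
controller.step(setpoint=100.0, actual=90.0)  # interface works, output undefined

# Later, a concrete behavior replaces the placeholder without rewiring
controller.behavior = lambda setpoint, actual: {"command": 0.5 * (setpoint - actual)}
```

The agility comes from exactly this: the system model can be assembled and checked for consistency while individual boxes remain undefined, and each box is refined on its own schedule.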
Question: Is ASE model-based by definition or are there other approaches?
Riedel: I’ll answer that with another question: Is there anywhere you can still do without models today? Yes, it definitely has to be model-based, because we would otherwise be unable to achieve coverage of all the phases or the required depth.
Question: What models are required for this? If you want to dovetail product, process and service, surely you have to start with the requirements?
Riedel: Exactly, this is one of the issues that is the subject of intense discussion. Can we achieve this with one data model across the entire lifecycle or do we link models? From my practical experience in industry, I would prefer linked models because a single model would eventually become too much for me. Not only that, I’m also no longer interested in the fine details of the requirements model after a given phase. In order to go into production, I need other models, but it must be possible to link them to each other, in much the same way as the different views in PDM. And when I go out into the field, it is again the case that I no longer need certain details. But I must be able to establish relations in both directions. In other words, I am linking my models but the content doesn’t have to be a permanent part of every version of the model.
Question: Can ASE work in conjunction with external suppliers? Don’t you reveal too much product know-how if all the information is contained in a single model?
Riedel: That’s a very good point, and it brings us to the operational use of such models. Until now, we’ve only talked in the abstract about what they might look like in a perfect world. The management of roles and permissions, which is already a hot topic in the distributed development process, becomes an even greater challenge when the network is extended to cover the users of the product as well.
I clearly don’t want users to see everything, but just the relevant information and structures. Ensuring the management of roles and permissions beyond the current system boundaries is a truly intriguing issue.
Question: Are there any companies that have already implemented ASE right through to service provision?
Riedel: Unfortunately, there are very few due to the fact that there are three major hurdles to overcome. Firstly, there are technical hurdles such as model-based description languages, but these will be overcome at some point. Then there are organizational hurdles within companies. Perhaps these will begin to fall away a little as a result of the coronavirus, because many people realize that we would be much further down the road if we already had connectivity across domains. Companies are not yet organizationally geared up to plan and control product lifecycle support. And then there are the human hurdles. To start with, you have to get engineers on board for your journey into the next dimension and get them to understand the growing complexity that comes with it. At the moment, I think that the organizational and human hurdles are greater than the technical ones.
Question: What is the focus of the ASE research program at Fraunhofer IAO?
Riedel: We have decided on six areas of study, which we work on in two directions. The first of these areas is model-based system development, including cross-domain aspects such as data analysis, in other words, the extension of current methods. The second is value-stream-oriented product design, i.e. the use of process information from production for product design. For this to succeed, the value stream must be defined at an earlier stage than it is currently. The third area is the evaluation of data from production planning and production using artificial intelligence (AI). And the fourth area also has to do with AI, but in this case, it is about evaluating the large volumes of data from the product engineering process in order to provide product developers with recommendations for best practice. The fifth area deals with system configuration, i.e. how to configure not only the product but also the process and the service, for example in order to be able to assess the impact that changes made to the product may have on the process. The last area we are investigating may be somewhat old-fashioned, but we must have another look at the PLM systems. They are still not in a position to support ASE.
Question: Where, for example, should the product-specific process information be managed? This issue is actually more closely related to MES.
Riedel: We undoubtedly need MES functionality to be integrated into PLM, either via interfaces or by running it on the PLM infrastructure. Assuming that MES and PLM systems were to evolve towards service-oriented architectures, the existence of x different systems wouldn’t be tragic because they would be based on a data repository, and this would ensure that the models were linked and would guarantee their consistency and integrity. However, this runs quite counter to the architectural pattern of today’s PLM systems.
Question: A moment ago, you spoke of ‘two different directions’. What did you mean by that?
Riedel: We have had long discussions with the Ministry of Economic Affairs of the State of Baden-Württemberg, Germany, about how we can ensure that the issues are quickly made tangible for local companies. So we set up a mobile lab in which we are using a relatively simple product with a service feature and a production system that can be quickly understood to illustrate the interaction of engineering, production processes and service. The lab is housed in a shipping container, which is currently standing on our premises due to the restrictions resulting from the coronavirus pandemic. The other direction is to build a similar lab at the Fraunhofer IAO, but this one is oriented more towards research.
Question: What insights can companies expect from this plug-in lab?
Riedel: To start with, they can quickly grasp exactly what is meant by model-based system development or value-stream-oriented product design. The idea is that they can feed their own data into the lab equipment and directly identify the added value. We want to use a simple example to demonstrate to companies how ASE works.
Professor Riedel, thank you very much for talking to us.
(This interview was conducted by Michael Wendenburg)
About Professor Riedel
Professor Oliver Riedel (born 1965) has been head of the Institute for Control Engineering of Machine Tools and Manufacturing Units (ISW) at the University of Stuttgart since November 2016 and is also a director of the Fraunhofer Institute for Industrial Engineering (IAO). Professor Riedel studied cybernetics at the University of Stuttgart, where he completed his doctorate at the Faculty of Engineering Design and Production Engineering. He has been working on the principles and practical application of virtual validation in product development and production for over 25 years. Professor Riedel is married with one grown-up son.
Further information is available at www.iao.fraunhofer.de
Wednesday, November 4th, 2020
Continental AG, one of the world’s largest automotive suppliers, has been using PROSTEP’s OpenPDM suite for a year now. Our platform for PLM integration, migration and collaboration not only reduces the data migration effort for the carve-out of the subsidiary Vitesco, but also supports Continental in harmonizing its heterogeneous PLM landscape.
With more than 241,000 employees and a turnover of 44.5 billion euros in the 2019 financial year, Continental is one of the largest automotive suppliers in the world. The company has transferred its entire powertrain business, including electrical drive technology, to an independent subsidiary, which is to be floated on the stock exchange under the name Vitesco Technologies. In order not to delay the carve-out with a lengthy system selection process, Continental decided that Vitesco should, as far as possible, use the same systems and configurations as before. However, Continental's PLM system landscape is not yet uniform throughout the group. Although PTC Windchill is the most widely used PLM system, some group divisions still work with SAP PLM in conjunction with the CATIA integration SC5.
PROSTEP's initial task was to export the product data relevant to drive technology from the existing Windchill installation at Continental and migrate it to Vitesco's separate instance. With the help of OpenPDM and its standard connectors to PTC Windchill, this process could be automated. Logging all exchange processes also ensured that the data arrived in the target system in verified quality.
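The logging and quality-control step described above can be sketched in a few lines. This is purely illustrative: OpenPDM's connector API is not described in the article, so the `Item` class, the list-based "target system" and the reconciliation check are all hypothetical stand-ins for the real components.

```python
# Hypothetical sketch of a logged migration step with a reconciliation check.
# None of these names come from OpenPDM; they only illustrate the pattern.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("migration")

@dataclass
class Item:
    number: str
    revision: str

def migrate(source_items, target):
    """Transfer each item, logging the exchange so quality can be verified."""
    transferred = []
    for item in source_items:
        target.append(item)  # stand-in for a connector write to Windchill
        log.info("migrated %s rev %s", item.number, item.revision)
        transferred.append(item)
    # Reconciliation: the number of logged transfers must match the source.
    if len(transferred) != len(source_items):
        raise RuntimeError("migration incomplete")
    return transferred

source = [Item("PN-1001", "A"), Item("PN-1002", "B")]
target = []
migrated = migrate(source, target)
```

The essential point is the final reconciliation: because every exchange is logged, a mismatch between source and target counts is detected immediately rather than surfacing later as missing data.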
Continental also intends to use our PLM integration platform to harmonize its historically evolved PLM landscape by gradually replacing the SAP PLM installation with PTC Windchill. Although the company has already had the majority of its legacy data migrated by an engineering service provider as part of a bulk migration, authority over the data was not always transferred to Windchill. The requirement remains to continue using SAP PLM and to leave it to the users to decide when to transfer the data, together with data sovereignty, to Windchill. For this purpose, we have created an integration between SAP PLM and Windchill based on OpenPDM, which provides users with the functions they need for ad-hoc transfers.
OpenPDM is used at Continental in two different scenarios, which show how flexibly the integration platform can be adapted and used to meet different requirements. The Windchill-Windchill migration prior to the Vitesco carve-out is a highly automated solution, while the SAP PLM-Windchill transfer is controlled by the users. In both cases, however, the actual data transfer is fully automated as an asynchronous process in the background.
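The asynchronous background processing mentioned above follows a familiar producer/consumer pattern: whether a transfer is triggered by an automated migration job or by a user's ad-hoc request, it is only enqueued, and a background worker performs the actual data transfer. A minimal sketch of that pattern, with a plain queue and thread standing in for the real transfer machinery:

```python
# Hypothetical sketch of asynchronous background transfer: user actions only
# enqueue a job; a worker thread carries out the transfer in the background.
import queue
import threading

jobs = queue.Queue()
completed = []

def worker():
    while True:
        job = jobs.get()
        if job is None:              # sentinel: shut the worker down
            break
        completed.append(job)        # stand-in for the actual PLM data transfer
        jobs.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

# An automated bulk job and a user-triggered ad-hoc transfer share one path.
jobs.put({"item": "PN-1001", "mode": "bulk"})
jobs.put({"item": "PN-2001", "mode": "ad-hoc"})
jobs.join()                          # wait for background processing to finish
jobs.put(None)
t.join()
```

The design benefit is the one the article points to: the triggering scenario can vary (fully automated or user-controlled) while the transfer itself always runs the same way, asynchronously in the background.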
By Bernd Döbel