I’ve been away on vacation for the last few weeks, and will be on the road again for the next few weeks. In the meantime a few more questions have been added to the “Ask The Experts” section.
Speaking of questions, I had a query from Gary Mintchell regarding comments he’s heard about how OPC UA should be ‘simple’ like DA. (Look for some upcoming OPC discussions from Gary at Automation World).
I thought I’d share some of my thoughts on the topic, since I’ve come across this question more than once myself.
The first thing to consider is your perspective on the matter: there is a difference between being “simple to use like DA” and “simple to develop like DA”.
The end-user experience of starting an OPC UA client application, discovering the list of available servers, connecting to the OPC UA server, browsing for available points and subscribing to value updates does not change very much from what we do today with classic OPC. What does change is that this process now has more built-in security and reliability, and integrates other data models such as history, alarms and conditions, and programs. The infrastructure is also no longer tightly dependent on Microsoft operating systems and the challenges of DCOM. Of course, these additions mean that OPC UA product developers now have more work to do.
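To make that workflow concrete, here is a minimal client sketch covering the connect, browse and subscribe steps. It uses the community python-opcua package purely as an illustration; the library choice, endpoint URL and node id are my assumptions, not OPC Foundation deliverables or anything prescribed by the specifications.

```python
# Minimal sketch of the connect / browse / subscribe workflow, using the
# community python-opcua package. The endpoint URL and node id are
# hypothetical placeholders.
import time
from opcua import Client


class ChangeHandler:
    """Receives data-change notifications from the subscription."""

    def datachange_notification(self, node, val, data):
        print("Value update from", node, ":", val)


client = Client("opc.tcp://localhost:4840")  # hypothetical server endpoint
client.connect()
try:
    # Browse the address space, much like a classic DA client browses for points.
    objects = client.get_objects_node()
    for child in objects.get_children():
        print("Browsed:", child)

    # Subscribe to value updates with a 500 ms publishing interval.
    sub = client.create_subscription(500, ChangeHandler())
    point = client.get_node("ns=2;i=2")  # hypothetical node id of a process value
    sub.subscribe_data_change(point)

    time.sleep(10)  # let a few notifications arrive
finally:
    client.disconnect()
```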
To put it in everyday terms: the mechanics of driving an ’82 Dodge K Car and a 2009 electric Tesla Roadster are the same, but how they are designed, manufactured, maintained and work under the hood is VERY different. The same applies to OPC DA and OPC UA.
With OPC DA, a C++ programmer with a good understanding of COM could download the 200-page OPC DA 3.0 specification and basically start coding. That is a bit of a simplification, but that one document bounded what the programmer had to implement. A programmer sitting down to develop an OPC UA server opens a layered set of specifications broken into thirteen Parts. These documents are purposely written in abstract terms and, in the later Parts, are married to existing technologies on which software can be built. The developer also has to consider options such as programming language, security, information model, etc. (I’m not saying that’s a good or bad thing, just stating some facts.) For many people, the first reaction is ‘this is complex’. The discussion of how simple or complex OPC UA is, is really a reflection of the difference in scope between OPC DA and OPC UA.
The classic OPC specifications were COM implementations, so the constraints of COM dictated many implementation details, including target operating system, discovery mechanism, wire protocol, security, etc. Ten years ago, developers were mostly concerned with solving the interoperability problem, so they accepted these constraints in order to achieve an acceptable standard. As the OPC Foundation website states, “The existing OPC COM based specifications have served the OPC Community well over the past 10 years, but as technology moves on so must our interoperability standards.”
Users and developers now require more. Several factors influenced the decision to create a new architecture:
Microsoft has deemphasized COM in favor of cross-platform capable Web Services and SOA (Service Oriented Architecture)
OPC Vendors want a single set of services to expose the OPC data models (DA, A&E, HDA …)
OPC Vendors want to implement OPC on non-Microsoft systems, including embedded devices
Let’s look at each of these factors and the impact each has on the scope of OPC UA.
Choosing Microsoft COM as the basis for classic OPC meant that many decisions were already made for the developer, but it also brought with it all the configuration pains of DCOM, a close reliance on Microsoft platforms and limited ‘web’ application integration. Selecting a service-based model for OPC UA provides cross-platform functionality and removes the reliance on any one vendor or technology. Ten years from now, when the protocols used by Microsoft, IBM or Linux change (and they will), OPC UA applications will not need to be rewritten; only the underlying mappings will need to change. This abstraction adds scope that OPC DA did not have, but not being bound to any particular technology also means that the OPC UA specifications will be timeless.
OPC UA stands for ‘Unified Architecture’, which encompasses all the classic OPC specifications: DA, HDA, A&E, Commands and Complex Data. So comparing OPC UA to OPC DA is a bit of an apples-to-oranges comparison. The base OPC UA specifications contain the common components needed to integrate all these features. Again, this is a larger scope than OPC DA, and developers need to understand which things are included in the base and which are Access Type specific. That said, not every OPC UA server will be required to implement all thirteen Parts. OPC UA provides multiple ‘Profiles’ that allow developers to choose the right level of functionality for their application, yet still ensure that a base level of interoperability exists with all OPC UA products.
OPC UA has been designed to be cross-platform and scalable from embedded devices all the way up to enterprise-spanning applications. Offering this level of flexibility while guaranteeing a usable degree of interoperability means developers must decide which target programming language (C, .NET, Java) and communication stack (Binary, TCP, XML) their OPC UA products will support. In classic OPC, COM dictated these things; with OPC UA, developers have more choices. The OPC Foundation provides multiple SDKs, communication stacks and sample code to accelerate adoption, but some vendors may choose to implement these lower layers on their own.
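As a small illustration of how that choice shows up at run time, a UA client can ask a server which endpoints (transport, encoding and security combinations) it actually exposes before creating a session. The sketch below again assumes the community python-opcua package and a hypothetical endpoint URL; other stacks expose the same GetEndpoints service through their own APIs.

```python
# Sketch: discover which transport/security endpoints a server offers, using
# the community python-opcua package. The URL is a hypothetical placeholder.
from opcua import Client

client = Client("opc.tcp://localhost:4840")
for ep in client.connect_and_get_server_endpoints():
    # Each EndpointDescription names the transport URL, message security mode
    # and security policy the server will accept on that endpoint.
    print(ep.EndpointUrl, ep.SecurityMode, ep.SecurityPolicyUri)
```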
Put all these factors together and, because OPC UA offers all the functionality of the classic OPC specifications plus new features and removes many existing constraints, the structure and depth of material to absorb when learning OPC UA is greater than with the OPC COM specifications. Or as some people say, “OPC UA is not as simple as DA”.
The focus of the OPC UA Working Group over the last few years has been to ensure that the specifications and supporting deliverables meet all the criteria discussed above, while ensuring certifiable interoperability and backwards compatibility and providing increased reliability and security. Producing a ‘simple OPC UA quick-start guide for the new developer’ was not a main priority. Now that the specifications are nearing final completion, the Early Adopter team is validating that things work as expected when the ‘paper becomes code’, and OPC UA vendors are developing their own products, the priorities are changing.
The next phase of OPC UA is ensuring that developers have what they need to successfully implement and adopt OPC UA. There is a large segment of the OPC community saying, “As a first step we just want to provide our existing OPC functionality on the OPC UA infrastructure. What do I need to know to do that?” It’s not really a matter of ‘changing’ the OPC UA specifications to ‘make them simpler’; rather, it’s presenting the specifications, documentation and code deliverables in a form that meets this important first-step use case.
That is the focus of the newly formed “Accelerated Adoption Working Group”. This group is working to create the documentation, OPC UA Profiles and jump-start code kits that allow product developers to quickly understand which aspects of OPC UA are required to duplicate their existing classic OPC functionality. These implementations will still have all the core components needed for interoperability and for adding extended functionality in the future.
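To give a sense of how small that ‘classic DA functionality on UA’ first step can be, here is a rough server-side sketch: a single object with one writable variable and nothing else (no history, alarms or methods). It again assumes the community python-opcua package; the endpoint, namespace URI and variable names are made up for illustration and are not part of any working-group deliverable.

```python
# Rough sketch of a "DA-style" OPC UA server: a couple of process values and
# nothing else, using the community python-opcua package. Endpoint, namespace
# and variable names are hypothetical.
import time
from opcua import Server

server = Server()
server.set_endpoint("opc.tcp://0.0.0.0:4840/demo/server/")

# Register a namespace and add a variable -- roughly the flat item list a
# classic DA server would expose.
idx = server.register_namespace("http://example.org/demo")
plant = server.get_objects_node().add_object(idx, "Plant")
temperature = plant.add_variable(idx, "Temperature", 20.0)
temperature.set_writable()  # allow client writes, like a DA item with write access

server.start()
try:
    while True:
        # Simulate a changing process value so subscribers get updates.
        temperature.set_value(temperature.get_value() + 0.1)
        time.sleep(1)
finally:
    server.stop()
```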
Under the hood, OPC UA is still a powerful ‘Swiss Army knife’, but if all you want to do is cut something with the big blade, here are the steps to follow; you don’t need to know how the corkscrew works or where it is. And if you do want to use it in the future, you don’t need to build a new knife; the functionality is there waiting to be opened.
Those interested in learning more about OPC UA should check out “OPC UA: 5 Things Everyone Needs To Know”.