MatrikonOPC OPC Exchange

Archive for August, 2007

All OPC Servers are Not Created Equal

Thursday, August 30th, 2007

Hardly a groundbreaking revelation, but sometimes we need to be reminded of that fact.  Since OPC is a standard interface, and all OPC servers are implemented against the same specification, people tend to extrapolate that to mean all servers are the same.  We know this is not true, since there are many other things to consider when selecting the right OPC server (or any software, for that matter).

Software quality is one consideration.  Security is another.  Dale touched on these topics on the Digital Bond blog.  (This is, of course, where I got the inspiration for my topic.)  Other product considerations are device-specific features.  These could include things like auto-configuration, item management or redundancy support.  How robust is the product in terms of error handling and logging problems?  How user-friendly are the interfaces?  The OPC specifications can only specify so much, and there are many things other than compliance that separate one OPC vendor's products from another's.  Not to take anything away from Certification.  In fact, OPC Certification should be the FIRST thing you look for in an OPC product.

I suppose the full quote would be “All OPC Servers are not created equal, but should be treated as though they were under Compliance Testing”.  The paramount goal of OPC is interoperability.  The OPC Foundation Certification process is designed to ensure that when a user creates an architecture with OPC products from multiple vendors, it will work.  In a perfect world that would be the base starting point for all OPC products.  Those things beyond interoperability are in the hands of the OPC vendor, and this is what makes them stand out in the crowd.

So take some time in determining which OPC product is right for you, and which vendor will offer you the services and support you need.  In the words of Napoleon Bonaparte:  “Take time to deliberate, but when the time for action has arrived, stop thinking and go in.”

Is Redundancy Enough?

Thursday, August 23rd, 2007

One of the common themes in robust OPC architectures is redundancy.  Redundancy in OPC products can refer to several things: the controller or channel level, the OPC server level, or the client application level.  A redundant OPC architecture may incorporate one, some or all of these levels.  There are plenty of whitepapers, redundancy products and OPC server features that cover these in detail, so I won't rehash them here.

The question that needs to be asked is: in your particular architecture, is redundancy enough?  Redundancy basically means that if one communication path fails, there is another path for the data to follow.  The thing is, sometimes very bad things happen, even to the best in the business, as Cisco recently found out.  Here's an excerpt from their blog:

Service to Cisco.com has been restored and all applications are now fully operational. The issue occurred during preventative maintenance of one of our data centers when a human error caused an electrical overload on the systems. This caused Cisco.com and other applications to go down. Because of the severity of the overload, the redundancy measures in some of the applications and power systems were impacted as well, though the system did shut down as designed to protect the people and the equipment. As a result, no data were lost and no one was injured. Cisco has plans already in process to add additional redundancies to increase the resilience of these systems.

Ouch.  Don't get me wrong.  Redundancy is a very good thing, and should be considered in any critical architecture.  But should the unspeakable occur, what happens to your data if you lose both channels?  There is a difference between providing a solution that maximizes your data availability and a solution that guarantees no data loss.  If you need the latter, then you should consider a buffering or store-and-forward solution.

The good news is there are OPC architectures that deal with that as well.  OPC HDA solutions can be used to buffer the OPC data and create standardized ‘store-and-forward’ solutions.  Another option would be to use OPC HDA to move the data in a pseudo real-time window.  There are different solutions to fit different requirements, but they all use standard OPC communication between the pieces.
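To make the store-and-forward idea concrete, here is a minimal Python sketch under stated assumptions: values are buffered locally with timestamps and drained to the upstream historian in order, so a channel outage delays data rather than losing it.  The send_upstream callable and the class name are hypothetical stand-ins for illustration, not any vendor's API.

    import collections
    import time

    class StoreAndForwardBuffer:
        """Minimal store-and-forward sketch: buffer locally, drain in order."""

        def __init__(self, send_upstream, max_items=100_000):
            # send_upstream(item_id, value, quality, timestamp) -> True on success.
            self._send = send_upstream
            self._buffer = collections.deque(maxlen=max_items)

        def record(self, item_id, value, quality):
            # Buffer first, with a timestamp, so history survives an outage.
            self._buffer.append((item_id, value, quality, time.time()))

        def flush(self):
            # Drain oldest-first; stop and retry later if the link is down.
            while self._buffer:
                if not self._send(*self._buffer[0]):
                    break  # upstream unavailable; keep the data buffered
                self._buffer.popleft()

A real product would also persist that buffer to disk, so a Cisco-style scenario (losing power to the whole system) still doesn't cost you the data.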

It’s important, when designing any data communication system, to think beyond single points of failure.  The worst that can go wrong will go wrong, usually at the worst possible moment.  I know from experience.  My last name is Murphy, after all.

OPC and DNP3

Monday, August 20th, 2007

For a blog that is dedicated to OPC, the topic of other standards seems to come up a lot.  Gary touched a bit on why that might be in one of his posts (not surprisingly entitled Standards).  It’s because OPC is often used to drive standardization among other compatible specifications.  Over time many of these have emerged, such as Modbus from discrete auto manufacturing, BACnet from HVAC, and DNP 3.0, which was developed for the electrical utility industry.

DNP3 was designed to be an open, standards-based interoperability protocol between substation computers, RTUs, IEDs (Intelligent Electronic Devices) and master stations.  DNP was originally created by GE Harris (then Westronic, Inc.) in 1990.  In 1993, the DNP3 specifications were released into the public domain, and ownership of the protocol was handed over to the newly formed DNP Users Group.  Since that time, the open protocol has gained worldwide acceptance.

You may also be familiar with the other popular protocol in the electrical industry, the IEC 60870-5 specifications, which have many of the same features as DNP3, with the exception that they were created by the International Electrotechnical Commission (IEC).  DNP 3.0 and IEC 60870-5 share a common design and both grew from some of the same ‘roots’.

Both protocols provide broadly similar application functionality and were primarily designed for point-to-point or multi-drop serial link architectures, but can work over radio, LAN, etc.  Both protocols are used worldwide for electric power SCADA.  DNP is dominant in North America, Australia and South Africa.  IEC is required by legislation in some European countries, and is also common in the Middle East.  In most of Asia and South America both are used almost equally.  However, DNP has gained wide acceptance in some non-electric-power applications, whereas IEC is not used much beyond the electrical world.

Both protocols offer features that are important to the transmission of electrical data and control, such as:

  • Time synchronization and Time stamped events
  • Freeze/Clear Counters
  • Select before operate (a two-stage control process for increased security; see the sketch after this list)
  • Polled report by exception and Unsolicited Responses
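To make select-before-operate concrete, here is a minimal Python sketch of the outstation side, under stated assumptions: a control point must first be armed by a SELECT, and an OPERATE only executes if it matches that SELECT and arrives within a timeout.  The class, the actuate callback and the timeout value are hypothetical illustrations, not a real DNP3 stack.

    import time

    SELECT_TIMEOUT = 5.0  # seconds; illustrative value only

    class ControlPoint:
        """Sketch of select-before-operate on the outstation side."""

        def __init__(self):
            self._armed_request = None
            self._armed_at = 0.0

        def select(self, request):
            # Stage 1: arm the point and echo the request back to the master.
            self._armed_request = request
            self._armed_at = time.time()
            return request

        def operate(self, request, actuate):
            # Stage 2: execute only if this matches the armed SELECT and the
            # timeout window has not expired; disarm either way (one-shot).
            armed = (self._armed_request == request
                     and time.time() - self._armed_at <= SELECT_TIMEOUT)
            self._armed_request = None
            if armed:
                actuate(request)  # e.g., close a breaker
            return armed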
In particular, DNP3 offers robust and efficient functionality, such as the ability to:

  • Request and respond with multiple data types in single messages
  • Segment messages into multiple frames for greater error detection and recovery
  • Only report data that has changed in response messages (sketched after this list)
  • Request data items periodically based on priority
  • Support time synchronization and a standard time format
  • Allow multiple masters and peer-to-peer operations
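Here is a minimal, hypothetical sketch of that report-by-exception behavior in Python: the outstation answers an event poll with only the points whose values changed since the last report, which is what keeps traffic low on shared channels.  The names are illustrative, not any library's API.

    class ExceptionReporter:
        """Sketch of polled report-by-exception on the outstation side."""

        def __init__(self):
            self._current = {}        # point index -> latest field value
            self._last_reported = {}  # point index -> last value sent upstream

        def update(self, readings):
            # Take the latest {index: value} snapshot from the field I/O.
            self._current.update(readings)

        def event_poll(self):
            # Answer with only the points that changed since the last report.
            changed = {idx: val for idx, val in self._current.items()
                       if self._last_reported.get(idx) != val}
            self._last_reported.update(changed)
            return changed  # stays small even with thousands of points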

These optimization and control features are important to SCADA applications that have large numbers of devices in the field, all sharing the same remote communication channels.  Although DNP and IEC 60870-5 are very common protocols for electrical hardware such as RTUs and IEDs, they are complicated protocols to implement properly and are not commonly supported by more general software applications such as HMIs, historians or alarm management packages.  Of course, this is where OPC fits into the picture.

Even very feature-rich protocols like these can be mapped into OPC using the OPC DA 2.0, 3.0, HDA and/or A&E specifications.  Since electrical transmission sites and substations are almost always telemetry-type architectures, you will find DNP 3.0 and IEC 60870-5 OPC servers that take this into account by offering highly configurable communication options and redundant communication channel support.
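The mapping itself is conceptually simple.  Here is a hedged Python sketch of the idea: a DNP3 point, identified by object group and index, surfaces as a hierarchical OPC item carrying value, quality and timestamp.  The item-ID syntax and field names here are hypothetical; every server defines its own.

    from dataclasses import dataclass

    @dataclass
    class Dnp3Point:
        group: int        # DNP3 object group (e.g., 30 for analog inputs)
        index: int        # point index within that group
        value: float
        online: bool      # simplified stand-in for the DNP3 status flags
        timestamp: float  # event time reported by the outstation

    def to_opc_item(device: str, pt: Dnp3Point):
        """Map one DNP3 point onto an OPC DA-style item (illustrative only)."""
        item_id = f"{device}.Group{pt.group}.Point{pt.index}"
        quality = "Good" if pt.online else "Bad"  # OPC DA quality, simplified
        return item_id, (pt.value, quality, pt.timestamp)

An OPC client would then browse and subscribe to items like MyRTU.Group30.Point4 (a made-up name) without knowing anything about DNP3 framing underneath.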

Competition due to deregulation, and increased legislation aimed at improving reliability and security, are driving companies to have better access, history and/or tracking over their field-level devices.  This means more higher-level applications need data from DNP 3.0/IEC 60870-5 devices.  Good thing OPC is around to solve their problems.

Does Open Mean Free?

Thursday, August 9th, 2007

When I was writing my posting on “Open Standards and Vendor Neutrality”, I figured someone would bring up the old argument: how can OPC be considered an open specification if you have to be a member to access the documentation?  Sharon Rosner had this to say:

“The article discusses ODF and Open Office XML as two competing standards, the specifications for both of which BTW are readily available for download.  When will the OPC foundation offer the OPC specifications for download by anybody?  Can OPC UA really be considered an ‘open standard’ when it’s really open only for secret club members?  I find the behavior of the OPC foundation to be really baffling. You say one thing and you do another. But by your policies you shut out a large number of independent developers who are thirsty for more information on OPC. There are many OPC products out there which were developed by reverse-engineering rather than by implementing the standard. If O’Reilly were indeed to do a book on OPC, would you let them publish the specs?”

It seems the argument boils down to the question “Does Open = Free?”.  Since I’m an engineer, my first logical thought is to find the accepted definition of ‘open standard’.  The ever-useful Wikipedia has this to say on open standards:

An Open standard is a standard that is publicly available and has various rights to use associated with it.  The terms “open” and “standard” have a wide range of meanings associated with their usage. The term “open” is sometimes restricted to royalty-free technologies while the term “standard” is sometimes restricted to technologies approved by formalized committees that are open to participation by all interested parties and operate on a consensus basis.  Some definitions of the term “open standard” permit patent holders to impose “reasonable and non-discriminatory” royalty fees and other licensing terms on implementers and/or users of the standard.

If you read the entry in detail, it turns out that according to the ITU-T and EU definitions, open is not necessarily free, but if you are Danish or Bruce Perens it is.  That amounts to a sticky wicket that doesn’t really answer things at all.  Of course, finding a definition to stand behind is just semantics anyway.

The real question is “Should the OPC specifications be available for free download by everyone?”.  The OPC Foundation DID offer the OPC specifications for download by anybody for almost ten years.  The OPC Board of Directors did not make the decision to limit access to members only until May 1, 2006.  It was not a decision they came to lightly, and they knew that it would have an impact on OPC implementation.  The main influence was the message coming from the end users: “We don’t want more OPC products, we want better OPC products.”  As Tom pointed out in one of his early blog postings, there were many client applications built on the OPC technology by non-members that did not measure up to the expected quality and had many interoperability problems.

If the Foundation has to sacrifice some quantity in order to increase quality, then so be it.  In the long run, a solid, reliable, truly interoperable standard will become the preferred and demanded choice.  However, allowing things to continue as they were would lead to increased frustration among the existing OPC user community and eventually result in decreasing trust in, and adoption of, the OPC specifications.  Ultimately everyone would lose.

Improving the quality of implementations means the OPC Foundation needs to know who is doing the implementation, be in communication with them, and ensure they have access to, and are using, the appropriate development, validation and testing tools.  They have followed the lead of many other organizations in requiring membership to track who is developing the technology.  I think that is a key distinction.  You don’t have to be a member to make use of the technology, but you should be a member if you are developing OPC products that will be connected to OPC products from other vendors.  (Even this is not necessarily true if you are developing using a third-party toolkit.)  The club is not secret.  Anyone can join, and many have.

If the cost of membership is a barrier to entry for some developers, then what about the costs associated with testing, validation and compliance?  Will they be deemed too expensive to undertake as well?  What assurances do end users have that an OPC product developed completely outside of the OPC Foundation support and testing structures will be interoperable?  I’ll concede there may be some cases where someone may wish to see in detail what OPC offers before deciding whether or not to embark on a development cycle, such as open-source projects or the initial R&D phase of a product.  Tom mentions plans to address these cases in another of his posts.  To my knowledge that plan still stands.  In any case, I’d argue that there are tutorials, free tools, training classes and other ways to get enough understanding of what is involved without seeing the specifications.

Will the current ‘pay to play’ model improve the overall quality of OPC installations?  Time will tell.  Personally I believe it will, but of course I don’t know for sure.  What I do know is that how it was before wasn’t working well enough.  (If I had all the answers, I’d be independently wealthy, living in a remote seaside cabin enjoying the sunshine and a tall Guinness.)

Wireless and a Familiar OPC Story

Thursday, August 2nd, 2007

The buzz in the automation blogosphere this week is undoubtedly on wireless (yet again).  Most of it is in regards to the ISA Wireless Summit, which Walt Boyes and Gary Mintchell had some good conversations on.  In reading Jim Cahill’s breakdown of John Berra’s speech, it struck me as a very familiar story.  The push for standardization, the adoption cycle, and the potential for increased access to information and the side effects this may produce all echo the history of OPC and where it is going with OPC UA.  The same can probably be said for many established standards, but here’s my OPC take on John’s speech.

On Opportunities:

 “But if what we do as a technology doesn’t transfer into allowing plants to run better, safer…it isn’t going to survive.”

OPC survives because it offers value by providing access to data that was difficult or impossible for higher applications to get at previously.  You can replace the word ‘Wireless’ with ‘OPC’ in John’s next statement, and create the perfect OPC quote.

“Wireless offers opportunities for better business and plant management, for better workforce productivity, for better plant and process information. It provides access to information that was out of reach or very expensive to access, so you can do things you couldn’t do before. The technology is proven and ready to deliver results today – with more capabilities coming.”

On Moving to Reality:

“Reliability and security are also critical to overcome. […]  We have not achieved 100%, but I don’t think, that we have to wait for the 100%. We all have products in the field that are meeting many of those objections, and perhaps even all of them. […]”

As someone once said, “With great power comes great responsibility”.  The first step is accessing the data; the next is ensuring it gets to the right people at the right time.  When someone starts talking about wireless, the topics of security and reliability are sure to surface.  These are the same challenges OPC is dealing with through innovative products and proper architecture, and they are key features of the OPC UA specification.  As with any technology that offers the power of increased access to information, OPC and wireless need to be implemented responsibly.

On Standards:

“Users want standards for wireless – and so do I. Users want confidence wireless equipment and networks will work together, regardless of supplier – now, and years from now. They don’t want to be locked into proprietary networks. Standards are good for suppliers, too. […]   Standards increase user willingness to buy. They give us confidence the approach we’re taking will be accepted in the marketplace.  But mostly, standards are good for our customers.”

Again, if you replace ‘wireless’ with ‘OPC’, you hear echoes from the dark times of device connectivity before OPC came about.  Many of the points John makes on creating standards are right on the money in terms of OPC and the development of the OPC UA specifications:

  • “Don’t invent the standard unless you have to, unless there is nothing that can serve.”
  • “Don’t try to reinvent something that is already well proven and already exists.”
  • “Stay close to the end users.”
  • “Leverage the hard work that has already been done.”
  • “…setting politics aside and leveraging proven technologies to deliver a solid, usable standard as quickly as possible.”

You have to remember that John’s whole speech was an excellent talk on wireless standards, but there were just so many parallels to OPC that I couldn’t have said it better myself.  So I didn’t try.

Love it or not, OPC has opened a world of opportunities for data access.  Wireless is going to multiply those opportunities exponentially, and OPC UA even more.  So as the man said, let’s get on with it.