MatrikonOPC OPC Exchange


Secure Data Transfer – moving beyond Connectivity

Posted on October 3rd, 2011 by Mustafa Al-mosawi

Sharing data between devices and applications used to be a road fraught with dead-ends caused by incompatible, proprietary vendor protocols – leaving systems running in data silos. Today, the contrast is striking: instead of ‘back-roads’, multi-lane data ‘freeways’ carry unprecedented amounts of control data throughout the enterprise – thanks in large part to OPC, the de facto standard for vendor-neutral data sharing. As a result, data sharing faces a new challenge: security.

From eavesdropping and tampering with data to disruptions of the underlying communication systems, threats are everywhere – the new paradigm is ‘connect safely’. Most OPC architectures are still based on the first generation of OPC specifications, commonly referred to as ‘OPC Classic’. A key concern is keeping OPC connections running behind firewalls without leaving them and their underlying systems exposed through the DCOM ports that OPC Classic relies on, since those ports can be targeted by malicious software and hackers.

Fortunately, the answer is simple. Using a single software solution that is as simple as it is ingenious, your OPC Classic clients and servers can communicate across networks, firewalls, and domains in a safe and reliable manner, regardless of which vendor(s) they come from. How safe? Achilles Certified safe – meaning the solution has passed rigorous Achilles third-party certification, one of the most recognized security certifications on the market. Using a solution like the Achilles-certified OPC Tunneller from MatrikonOPC keeps your systems secure, your IT department happy, and your data flowing… safely.


OPC, Better than ever

Posted on September 21st, 2011 by Mustafa Al-mosawi

Many companies tout new releases as being ‘better than ever’, which can smack of hyperbole when it’s really only a few minor tweaks. But sometimes a release comes along that makes a big difference for everyone using our software. That’s why it is genuinely exciting to launch our first major server update featuring our new “StreamFlow” UI.

StreamFlow’s philosophy applies modern design principles – discoverability, visible navigation, reduction, consistency, closure and direct action – to simplify the experience of configuring and maintaining an OPC Server.

Here are some new highlights:

  • Easier To Get Started – The new UI starts you off right in the action – connecting to your devices! If the OPC Server in question supports Auto-Configuration, there’s a giant Auto Configuration button; otherwise you’re directed to the ‘New Node’ button, which presents all the available options.
  • Explorability – Nearly all the functions of the OPC Server are accessible through the left-hand navigation panel or the breadcrumb.
  • Consistency – Major functions are always accessible from the button bar, and buttons are larger and friendlier. Buttons are sensitive to the context you’re in, so only relevant functions show up.
  • Simplicity – We’ve simplified two of the more complicated UI sections, Aliases and Redundancy, so it’s easier to get the most out of our OPC Servers.

Early feedback has been encouraging, but we can always do better – and that’s where your feedback comes in. Try out our OPC Server for Allen Bradley, and let us know!

 


PCs, PLCs and OPC: A History

Posted on August 29th, 2011 by Mustafa Al-mosawi

What a difference 30 years makes. 30 years ago, nearly to the day, IBM introduced the 5150. Seemingly overnight, it created the Microsoft/Intel architecture combination that continues to dominate today. As the first personal computer to achieve critical mass, the 5150 was a major milestone in a computing revolution.  A revolution that continues to accelerate, with processor performance doubling every 3 years.  It’s not limited to processors either: digital storage and optical network bandwidth performance are experiencing the same acceleration. The PC transformed the way we work, live and play.

The explosion of general-purpose computing has been felt everywhere – not least in the world of automation, where a slightly different beast rules: the PLC. Introduced roughly a decade before the PC, the Modular Digital Controller (Modicon!) PLC transformed the way General Motors built its cars. Today, programmable automation helps drive our buses, intelligently manage our buildings, build the goods we buy, and generate our power.

Many think the key to the 5150’s genius was its use of off-the-shelf parts. That choice led to interoperable clones, and it was interoperability and the rise of clones – not the IBM brand – that cemented the PC’s success.

In the world of automation, true interoperability came about 15 years ago with the advent of OPC – software that ran on, you guessed it, a direct descendant of the IBM 5150 PC. The initial specification had its limitations, and OPC-compatible software was scarce at first, but interoperability was here to stay.

Soon, new and more advanced standards were being released every few years, each solving a new set of problems. The latest standard, OPC UA, is the most sophisticated and comprehensive open protocol standard yet.

That’s why our Universal Connectivity Server at MatrikonOPC offers OPC UA as well as classic DA and A&E, combined with the best OPC security in the business. That way, you can meet the demands of today’s applications and be ready to take advantage of tomorrow’s.


Slay the History Integration Beast: an OPC Story

Posted on July 28th, 2011 by Mustafa Al-mosawi

The other day I was on a call with an engineer who was experiencing serious issues getting data from his control network historian (whose vendor *ahem* shall remain nameless) into his company’s centralized corporate historian (whose vendor *ahem* shall also remain nameless). The problem: the data synchronization kept dropping out, causing them pain with their regulator. If they were found with gaps in their history, they could face serious fines.

When it comes to integrating historians from multiple vendors, there be dragons. Thousands of screens and reports built on proprietary tools, an existing user base resistant to change, and all the added expense of migration. What a headache.

High-pressure sales teams on both sides wanted them to standardize on one solution, promising the moon – at a cost of hundreds of thousands of dollars in software alone. But were the efficiencies gained really worth the pain? Each historian worked fine on its own. It was the synchronization tools that were inadequate, and neither vendor had any interest in making integration easy. There had to be an easier way out.

Well, there was. Fortunately, both of his historians supported OPC-HDA, so we were able to recommend he try OPC HistoryLink, the world’s first true vendor-neutral historian integration tool. Using OPC-HDA, he could synchronize his process historians with each other while protecting the integrity of his data during communication outages. Now he could safely consolidate data for corporate reporting, while leaving control network users access to, and control over, their own data. Both corporate and control personnel could have their cake and eat it too.
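
Conceptually, that kind of gap-safe synchronization can be pictured as a loop that remembers the last timestamp successfully copied and backfills from there once communications recover. Here is a minimal sketch of the idea in Python; the HdaSource/HdaSink interfaces and their read_raw/write_values methods are hypothetical stand-ins for an OPC-HDA client wrapper, not MatrikonOPC’s actual implementation or API.

    # Hypothetical sketch of gap-safe historian synchronization over OPC-HDA.
    # HdaSource/HdaSink stand in for real OPC-HDA client wrappers; they are
    # illustrative assumptions, not a MatrikonOPC or OPC Foundation API.
    from datetime import datetime, timedelta
    from typing import Iterable, Protocol, Tuple

    Sample = Tuple[datetime, float, int]  # timestamp, value, quality

    class HdaSource(Protocol):
        def read_raw(self, item_id: str, start: datetime, end: datetime) -> Iterable[Sample]: ...

    class HdaSink(Protocol):
        def write_values(self, item_id: str, samples: Iterable[Sample]) -> None: ...
        def last_timestamp(self, item_id: str) -> datetime: ...

    def synchronize(item_id: str, source: HdaSource, sink: HdaSink) -> None:
        """Copy any history the sink is missing, so outages leave no permanent gaps."""
        start = sink.last_timestamp(item_id)             # resume where the last sync ended
        end = datetime.utcnow() - timedelta(seconds=5)   # small lag to avoid racing live data
        samples = list(source.read_raw(item_id, start, end))
        if samples:
            sink.write_values(item_id, samples)          # backfill the destination historian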

Best of all, OPC HistoryLink protected his existing investment in process history and saved him the expense of rip-and-replace. Sometimes, even in the world of manufacturing and engineering, there are happily-ever-afters.


Augmented Reality with OPC UA

Posted on July 12th, 2011 by Mustafa Al-mosawi

Picking a restaurant isn’t what it used to be: the other day I was out with friends, when I came over all peckish. It was a part of town we hadn’t been to, so I raised my smartphone in front of me. It displayed the streetscape I was looking at, with restaurant ratings and arrows superimposed. I tapped a highly rated Thai place, read a review, then booked a table for 4.

Welcome to the world of augmented reality. Reality+. There are apps that help you find your car, point out bunkers on a golf course, help you find rental accommodation, and translate signage – and that’s just scratching the surface.

Process industries have long lived with a form of augmented reality – sensors connected to transmitters, networks and displays have provided information and analytics on process and equipment for years, and OPC has played a big role in making that easier to deploy and maintain across an increasingly complicated landscape. Until not long ago this was limited to control rooms; more recently it has moved out to the personal desktop and the occasional web-enabled phone. But it isn’t quite the same.

Imagine walking up to a piece of equipment and instantly having everything about it at your fingertips, just by standing near it: past service history, today’s trend, current alarm status, outstanding work tickets, the location of the maintenance tech, and so on.

There are four key characteristics of augmented reality:

  • Information: All the data for the equipment.
  • Location: In consumer space, this is provided by GPS. In a plant environment, this may be through RFID or through information gleaned off the wireless infrastructure.
  • Processing Power: Whether you carry it with you or push it to the cloud, always-on processing power is a must.
  • Security: For consumers, this isn’t a prime concern as most of the information is already public – but for sensitive industries it’s especially important.

One key enabler for augmented reality in the industrial space and beyond is OPC UA. With its integrated security model, support for multiple data types, and platform-agnostic design, it’s almost tailor-made for augmented reality. OPC UA servers like our OPC Universal Connectivity Server mean that application developers can rest assured they’ll be able to connect to any equipment, whether they’re building for iOS, Android, QNX or good ol’ Windows. End users can comfortably integrate their existing OPC Classic servers into any augmented reality platform.
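
To give a feel for how lightweight the client side can be, here is a minimal sketch that reads a single live value over OPC UA using the open-source python-opcua package. The endpoint URL and node id are made-up placeholders, and a real augmented-reality app would subscribe to changes and use the UA security policies rather than poll an open endpoint.

    # Minimal OPC UA read using the open-source python-opcua package (pip install opcua).
    # The endpoint URL and node id below are placeholders, not real server addresses.
    from opcua import Client

    client = Client("opc.tcp://plant-server.example.com:4840")   # hypothetical endpoint
    try:
        client.connect()
        # NodeId of the tag to overlay in the AR view, e.g. a pump's current alarm state.
        node = client.get_node("ns=2;s=Pump101.AlarmState")      # hypothetical node id
        value = node.get_value()
        print("Current value:", value)
    finally:
        client.disconnect()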

If you’ve found any cool apps or have any comments you’d like to share, we’d be happy to hear from you.


The Importance of being Critical: How OPC deals with Critical Data Loss (Part 2)

Posted on July 7th, 2011 by Mustafa Al-mosawi

Previously on OPC Exchange, I discussed critical data and how to use a risk matrix to assess criticality. Critical data needs to be insured against loss. But what form should that action take, and how much should you spend?

Let’s start with the risk matrix again.

Probability \ Impact    VLow        Low         High        VHigh
Certain                 Important   Important   Critical    Critical
Likely                  Moderate    Important   Important   Critical
Possible                Low         Moderate    Important   Critical
Unlikely                Low         Low         Moderate    Critical
Rare                    Low         Low         Moderate    Important

The real downside is the touchy-feely, qualitative information. Business decisions – and decision makers – demand hard numbers.

You can get a good estimate if you have reasonable information on the likelihood that data will be unavailable (as a percentage probability), and the dollar impact of lost data.

Like so:

Probability \ Impact    $100        $1k         $10k        $100k
10%                     Important   Important   Critical    Critical
1%                      Moderate    Important   Important   Critical
0.1%                    Low         Moderate    Important   Critical
0.01%                   Low         Low         Moderate    Critical
0.001%                  Low         Low         Moderate    Important

Then you can just multiply the probability by the impact to get an approximate number.

For example, if I have data coming across a wireless radio with, say, 95% reliability, and losing data that a regulator requires or that is used for billing carries an impact of $200,000, then spending $10K on a solution is money well spent. If the impact goes up, or the communication reliability goes down, I can justify spending even more. If I have multiple locations, the probabilities of data loss add up (approximately): 5 sites with 95% reliability each are roughly equivalent to one site that is 75% reliable. In that scenario, I could spend $50K on the solution and come out ahead.
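
To make the arithmetic concrete, here is a small sketch that reproduces the numbers in this example: expected loss as probability times impact, the additive approximation for multiple sites, and the exact combined figure for comparison. The 95% reliability and $200,000 impact are simply the values used in the post.

    # Worked version of the arithmetic above: expected loss = probability x impact.
    loss_probability = 0.05      # 95% reliable radio link -> 5% chance of losing data
    impact_dollars = 200_000     # regulatory / billing impact used in the example

    expected_loss = loss_probability * impact_dollars
    print(f"Single-site expected loss: ${expected_loss:,.0f}")     # $10,000

    # Multiple sites: the post adds the probabilities, which is a reasonable
    # approximation for small probabilities; the exact figure is slightly lower.
    sites = 5
    approx_combined = sites * loss_probability                     # 0.25 -> "75% reliable"
    exact_combined = 1 - (1 - loss_probability) ** sites           # ~0.226
    print(f"Approximate combined loss probability: {approx_combined:.1%}")
    print(f"Exact combined loss probability:       {exact_combined:.1%}")
    print(f"Justifiable spend (approx): ${approx_combined * impact_dollars:,.0f}")  # $50,000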

Once you’ve identified the critical data, there are quite a few options available: upgrading the telecommunication backbone, modifying the equipment to hold data on-board, or adding onsite store-and-forward. One of the most cost-effective ways of insuring against losing critical data is our OPC Buffer with History Link. Built on the OPC-DA and HDA open standards, it will connect any end equipment to any corporate data store, ensuring your critical data gets to its final destination.


The Importance of being Critical: How OPC deals with Critical Data Loss (Part 1)

Posted on July 5th, 2011 by Mustafa Al-mosawi

With data, as in life, it’s easy to lose sight of the critical in the hurly-burly of the important. Important data is like a roof over your head or food on the table: most people wouldn’t want to imagine life without it, but under extreme circumstances we can do without it for short periods. Critical data is like the air we breathe: lose it for even a short while, and the impact is disastrous.

Analogies aside, it’s clear we should take care of our critical data. But how do we differentiate critical data from good ol’ important data?

If you don’t already have a method you use, consider a risk matrix. It’s simple and easy to use. We use a risk matrix for releasing our OPC Software and for Health and Safety.
A risk matrix compares the likelihood of an event with its impact to see if action needs to be taken. Here’s an example:

Probability \ Impact    VLow        Low         High        VHigh
Certain                 Important   Important   Critical    Critical
Likely                  Moderate    Important   Important   Critical
Possible                Low         Moderate    Important   Critical
Unlikely                Low         Low         Moderate    Critical
Rare                    Low         Low         Moderate    Important

Clearly anything falling in the critical zone needs to be looked at, and action needs to be taken to reduce either the likelihood of the event or its impact.
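
If you want to wire this kind of matrix into a script or a review checklist, it boils down to a simple two-key lookup. The sketch below encodes the example matrix above; the category names and scales are just the ones used in this post, not any formal standard.

    # The example risk matrix above, expressed as a simple two-key lookup table.
    RISK_MATRIX = {
        #                VLow         Low          High         VHigh
        "Certain":  ("Important", "Important", "Critical",  "Critical"),
        "Likely":   ("Moderate",  "Important", "Important", "Critical"),
        "Possible": ("Low",       "Moderate",  "Important", "Critical"),
        "Unlikely": ("Low",       "Low",       "Moderate",  "Critical"),
        "Rare":     ("Low",       "Low",       "Moderate",  "Important"),
    }
    IMPACT_COLUMNS = ("VLow", "Low", "High", "VHigh")

    def classify(probability: str, impact: str) -> str:
        """Return the risk rating for a likelihood/impact pair from the matrix."""
        return RISK_MATRIX[probability][IMPACT_COLUMNS.index(impact)]

    # Example: a possible event with very high impact is rated Critical.
    print(classify("Possible", "VHigh"))  # -> Critical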

Okay. We need to take action. Good start. What form should that action take? Do we tackle the likelihood or the impact first? And before we get ahead of ourselves: how much should we spend?

That’s a subject for another post. In the meantime, I suggest taking a peek at a hub-and-spoke architecture.

To be continued …


Free your mind with OPC HDA … and the rest will follow

Posted on June 23rd, 2011 by Mustafa Al-mosawi

There are few things I enjoy more than seeing the light go on in someone’s mind, and few better places to see it than at an OPC Training Workshop. A student had asked a very good question, and a few days later, when visiting a customer site, the same question came up again. So – here I am, blogging about it.

The question asked both times was this: how do I access the data files/tables on your HDA Historian?

Some of you are probably chuckling under your breath. But these are very smart people – experts across a wide variety of technologies. So why the disconnect? OPC-HDA wasn’t something they were familiar with. So if you don’t think that’s a funny question, here are two answers for you: short and long.

Short answer: You don’t. You don’t _need_ to. That’s the point. Any HDA tool will get you all the access to the archived data you need.

Long answer: You don’t. You don’t _need_ to. The whole point of the OPC HDA standard is to remove the need to deeply understand how the data is archived, accessed or organized. I can store my data in a single file, or in fifty thousand. I can store it as CSV, Excel, or DAT. I can store it in a sophisticated configuration database with advanced lookups, or in one giant, insanely over-indexed table. I can store the data however I want – HDA places no restriction on which mechanism I choose. The OPC HDA Server’s job is to make that invisible to you, whichever archive you choose.
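
From the client side, that abstraction means every read looks the same no matter what sits behind the server. Here is a toy sketch of the idea: two pretend archive back-ends behind the same read_raw() call, with client code that cannot tell them apart. The interface is an illustrative stand-in for the real (language-neutral, COM-based) OPC-HDA interfaces, not their actual definition.

    # Two toy archive back-ends behind the same hypothetical read_raw() call:
    # the client code at the bottom cannot tell which one it is talking to,
    # which is exactly the point of the HDA abstraction.
    from datetime import datetime

    class CsvArchiveServer:
        """Pretend HDA server that keeps its history in flat CSV files."""
        def read_raw(self, item_id, start, end):
            # ...parse the right files, filter by time window...
            return [(start, 42.0, 192)]  # (timestamp, value, quality) samples

    class SqlArchiveServer:
        """Pretend HDA server that keeps its history in a relational database."""
        def read_raw(self, item_id, start, end):
            # ...build a SQL query, run it, map rows to samples...
            return [(start, 42.0, 192)]

    def average(server, item_id, start, end):
        """Client code: identical no matter how the archive is stored."""
        values = [v for _, v, _ in server.read_raw(item_id, start, end)]
        return sum(values) / len(values)

    for server in (CsvArchiveServer(), SqlArchiveServer()):
        print(average(server, "FIC101.PV", datetime(2011, 6, 1), datetime(2011, 6, 2)))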

That’s the theory anyway: in reality, not all HDA Servers or archives are created equal.

Archive structures are defined by good engineering, design constraints, and project requirements. Choosing poorly can have implications for the security, reliability, auditability and availability of the data. Ultimately, whichever archive you or the vendor choose, the HDA Server needs to understand the database schema or the file structure. The HDA Server is responsible for figuring out which files to open, how to structure the query, and for packaging the results so that any OPC HDA Client can understand the answer. The HDA Server is almost always separate from the archive – just as the OPC DA server is almost always separate from the device.

Like DA, HDA frees you to pick whichever tool you like to access the data in your historian. That’s how OPC Excel Reporter and OPC Easy Trender work with any historian on the market, and how OPC Desktop Historian works with any HDA tool.

So instead of data-base schemas and file structures, gain knowledge about your processes and equipment performance, from a wider variety of data sources, with a greater selection of tools. That’s the power of OPC.


Clouds: Just a Bunch Of Fluff?

Posted on June 13th, 2011 by Mustafa Al-mosawi

Google the word ‘cloud’ and the first result is Wikipedia’s Cloud Computing page. The twitterati are breathless in anticipation of Apple Inc.’s latest iteration of its web-based services, ‘iCloud’. Microsoft, IBM, Oracle, VMware and many other major IT companies are offering cloud-based versions of data centers, computing power, virtualization, office suites, programming platforms and more.

This wave of cloud computing is the next in a series of computing paradigm shifts stretching as far back as mainframe computing. In spite of its moist and fluffy name, the cloud is in reality based on the same hard IT assets you use every day: software and services, processing power, storage and networking. Any time you drag and drop a file seamlessly across a WAN – it’s the cloud, baby! That doesn’t conjure images of a brave new world, does it?

“Why should I care?” you ask. Cloud computing will lower the cost of working with your process and equipment data, for equal quality of service.

Here’s a concrete example: process historization. If ERPs can exist in the cloud, why not process historians? The technology exists: all that is needed is a universally available, secure and reliable method of sending equipment data to the cloud-based historian, plus web tools and open APIs for getting the data back out. This is a technical challenge ideally suited to OPC UA and our Universal Connectivity Server.

The decision to go to the cloud ultimately remains one of corporate policy: how comfortable are companies (and their IT and Engineering staff) with the idea of sending their proprietary data to another company across the internet?
Whether your company decides to enter the cloud whole-heartedly, tentatively or not at all, MatrikonOPC and OPC UA technology are here to help.


A Tale of Two Standards – OPC and ODBC

Posted on June 7th, 2011 by Mustafa Al-mosawi

Many data consumers have different reasons for needing process and equipment data in a relational database.  Relational Databases are the bedrock of ERPs, CMMS, Data Warehouses, LIMS and many other analytics and information systems. The tools that are built on those systems assume they are connecting to relational databases.  Just as OPC delivers open connectivity to process and equipment data, databases benefit from open standards, in this case ODBC (Open Database Connectivity).  Despite the fact that these two standards have very different histories, and solve different problems, they play very well together.  Here are three different scenarios:

What if, as in the case of a recent phone call I had with a maintenance manager, you’d like to automate putting runtime data on your pumps into your maintenance management system? The great news is that the OPC standard makes it easy. By standardizing how we connect to controllers and equipment, OPC makes it easy for vendors to fill the gap of getting OPC data into ODBC databases – with, for example, the OPC Client for ODBC.

What if I have the opposite problem? I have data inside a database – control settings for a batch of new dies, for example – and I’d like to push it to my controller or access it from my HMI. OPC and ODBC to the rescue again, this time in the form of an OPC Server for Relational Databases, transforming any database into a genuine, honest-to-goodness OPC Server.

Okay. Third scenario. Third?! Yes. What if I have a tool that I really love, which only understands ODBC, but my data is already being stored in a process historian? Well, OPC and ODBC do another dance, this time in the form of an ODBC Server for OPC, letting you write SQL-style queries to get data from any OPC-compatible process historian. So keep your tools, and your historian.
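
As a rough picture of what that third scenario looks like from the tool’s side, here is a sketch using Python’s pyodbc package to run a SQL-style query through an ODBC data source fronting a historian. The DSN, table and column names are invented for illustration; the actual schema exposed depends on how the ODBC Server for OPC is configured.

    # Sketch of scenario three: an ODBC-only tool querying historian data through
    # an ODBC data source. Uses the real pyodbc package, but the DSN, table and
    # column names below are invented placeholders, not a documented schema.
    import pyodbc

    conn = pyodbc.connect("DSN=HistorianODBC")   # hypothetical data source name
    cursor = conn.cursor()
    cursor.execute(
        "SELECT Timestamp, Value, Quality "
        "FROM History "
        "WHERE TagName = ? AND Timestamp >= ? AND Timestamp < ?",
        ("FIC101.PV", "2011-06-01 00:00:00", "2011-06-02 00:00:00"),
    )
    for timestamp, value, quality in cursor.fetchall():
        print(timestamp, value, quality)
    conn.close()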

Whichever way you’d like to work OPC and ODBC, there’s something there for you.