Unleash Your Siloed Data: Driving Insights from the Edge to the Cloud

Getty/Sergey Nevins

Let’s talk for a minute about data silos. Real-world silos, of course, are those towers on farms that are used to store grain for future use or sale. They are tall structures that usually contain only one type of raw material. The silo serves as a metaphor for large groups of data that are stored separately from other data.

Servers and hardware are often silos of data. Different devices store data, but not all of it is necessarily shared with other devices. Applications create and store data, but only some of it may be shared, and usually only via a well-written API (Application Programming Interface). Over time, organizations find themselves with a lot of data, but most of it is isolated in separate metaphorical silos, never to be part of a larger whole.

How edge computing creates the perfect storm for data silos

When it comes to enterprise networks, especially from edge to cloud, data silos occur naturally. Each device on the edge produces data, but much of that data may remain on the device, or at least within the group of devices at that edge location. The same applies to cloud operations. Data is created and stored in many different cloud providers, and while they sometimes exchange data, most of it lives isolated from the rest of the organization.


But actionable insights and strategies come when all the data across the organization is available to the right users and systems. Let’s look at an example involving Home-by-Home, the fictional home goods retailer we discussed earlier.

Home-by-Home sells wall-mounted lighting fixtures that use plastic brackets to attach to the wall. They’re usually great sellers. But every March and April, the company gets an avalanche of returns because the brackets break. The returns come from all over the country, from Miami to Seattle. This is our first dataset, and it’s visible to the stores themselves.

The brackets are built by a partner factory. Normally, the plant operates at temperatures above 62 degrees Fahrenheit, but in January and February, the temperature in the plant drops to an average of 57 degrees. This is our second dataset: the temperature in the plant.

Neither dataset is connected to the other. But as we also explored in some depth a while back, some plastic production processes start to fail below 59 degrees or so. Without the ability to correlate the factory’s dataset with store return stats, the company wouldn’t be able to tell that a slightly cooler factory was producing substandard brackets, which were then failing all over the country.

But by capturing all the data and making the datasets available for analysis (including AI-based correlation and big data processing), insights become possible. In this case, because Home-by-Home has made digital transformation part of its DNA, the company was able to correlate plant temperature with product failures, and now customers who purchase these light fixtures experience far fewer broken brackets.
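That kind of correlation can be sketched in a few lines of Python. All of the numbers below are invented for the fictional Home-by-Home scenario (monthly plant temperatures, and bracket returns logged two months after production); only the Pearson correlation formula itself is standard.

```python
# Hypothetical monthly data: average plant temperature (°F), and bracket
# returns recorded two months after production (parts made in Jan/Feb
# come back in Mar/Apr).
plant_temp_f = {"Nov": 64, "Dec": 63, "Jan": 57, "Feb": 57, "Mar": 62, "Apr": 64}
returns = {"Jan": 110, "Feb": 120, "Mar": 540, "Apr": 560, "May": 130, "Jun": 115}

production_months = ["Nov", "Dec", "Jan", "Feb", "Mar", "Apr"]
return_months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]  # production + 2 months

xs = [plant_temp_f[m] for m in production_months]
ys = [returns[m] for m in return_months]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# A strongly negative r means: cooler plant, more returns.
print(f"temperature vs. lagged returns: r = {pearson(xs, ys):.2f}")
```

In a real deployment the interesting work is upstream of this calculation: getting both datasets out of their silos and aligned on a common time axis in the first place.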

Your data is everywhere, but is it actionable?

This is just one example of how data can be collected from the edge to the cloud. There are some major interconnected ideas here.

Your data is everywhere: Almost every computer, server, Internet of Things device, phone, factory system, branch office system, cash register, vehicle, software-as-a-service application, and network management system is constantly generating data. Some of that data is deleted when new data is created. Some of it accumulates until storage devices fill up. Some of it lives in the cloud services attached to each login account you have.

Your data is isolated: Most of these systems do not talk to each other. In fact, data management often takes the form of figuring out which data can be deleted to make room for more to be collected. While some systems have data-exchange APIs, most of those APIs go unused (and some are overused). When complaining about some local business, my dad liked to use the phrase “the left hand doesn’t know what the right hand is doing.” When data is isolated, the organization is just like that.

Insights come from connecting multiple inputs: While it is possible to subject a single dataset to comprehensive analysis and come up with insights, you are more likely to see trends when you can correlate data from one source with data from other sources. We have already shown how factory floor temperature has a distant, but measurable, relationship to the volume of returns in stores across the country.

All of this data must be available across your organization: These connections and observations are only possible when analysts (both human and AI) can access many data sources to see what stories they all tell.

Making data usable and turning it into intelligence

The challenge, then, is to make all that data usable: collect it, and then process it into actionable intelligence. To do this, there are four things to keep in mind.

The first is movement. The data must have a mechanism to travel from all those disparate devices, cloud services, servers, and so on to a place where it can be processed, or at least collated. The second is storage: terms like “data lake” and “data warehouse” describe this concept of data aggregation, even though the actual storage of the data may be quite scattered.
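The aggregation step can be sketched minimally in Python. The record shapes, field names, and systems below are all invented for illustration; the point is only that each silo’s records get normalized into one common envelope before landing in the “lake,” wherever the bytes physically live.

```python
from datetime import datetime, timezone

# Hypothetical silos: each system reports records in its own shape.
pos_records = [{"sku": "LF-100", "qty": 2, "ts": "2024-03-04T10:15:00+00:00"}]
factory_records = [{"part": "LF-100-BRKT", "temp_f": 57, "time": 1709550000}]

def normalize_pos(rec):
    # Point-of-sale silo: ISO-8601 timestamps, keyed by SKU.
    return {"source": "pos", "key": rec["sku"],
            "timestamp": rec["ts"], "payload": {"qty": rec["qty"]}}

def normalize_factory(rec):
    # Factory silo: Unix epoch seconds, keyed by part number.
    ts = datetime.fromtimestamp(rec["time"], tz=timezone.utc).isoformat()
    return {"source": "factory", "key": rec["part"],
            "timestamp": ts, "payload": {"temp_f": rec["temp_f"]}}

# The "lake": one append-only collection of records in a common envelope.
data_lake = ([normalize_pos(r) for r in pos_records]
             + [normalize_factory(r) for r in factory_records])

for rec in data_lake:
    print(rec["source"], rec["key"], rec["timestamp"])
```

Real pipelines add queues, retries, and schema registries on top, but the core idea is the same: move the data, and land it in a shape that downstream tools can read uniformly.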


These two issues, data storage and data movement, both require consideration of the third factor: security and governance. Data in motion and data at rest must be protected from unauthorized access, while all of that data simultaneously remains available to the analysts and tools that can mine it for opportunities. Governance can be similarly problematic, because data generated in one geographic location may be subject to regulatory or tax constraints if it is moved to a new location.

And finally, the fourth factor to consider is analysis. Data should be stored in a way that is accessible for analysis: adequately refreshed, properly indexed, and carefully organized.
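As a toy illustration of what “properly indexed” means in practice (the stores, SKUs, and counts here are invented), a simple secondary index lets analysts answer month-level questions without scanning every record:

```python
from collections import defaultdict

# Hypothetical return records pulled from many stores into one place.
records = [
    {"store": "Miami", "month": "2024-03", "sku": "LF-100", "returns": 41},
    {"store": "Seattle", "month": "2024-03", "sku": "LF-100", "returns": 38},
    {"store": "Miami", "month": "2024-04", "sku": "LF-100", "returns": 44},
]

# Secondary index: month -> list of records for that month.
by_month = defaultdict(list)
for rec in records:
    by_month[rec["month"]].append(rec)

# Month-level questions now touch only the relevant bucket.
march_total = sum(r["returns"] for r in by_month["2024-03"])
print("March returns:", march_total)  # 79
```

Production systems get this from database indexes or partitioned storage rather than an in-memory dict, but the organizing principle is identical.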

A quick introduction to data modernization

Humans are curious creatures. What we create in real life, we often reproduce in our digital worlds. Many of us have cluttered homes and workplaces because we haven’t found the perfect storage spot for each object. Unfortunately, the same is often true of how we manage data.

As we discussed earlier, we isolate a lot of it in silos. But even when we pull all that data into a central data lake, we often don’t have a good way to search, sort, and sift through it all. Data modernization is about updating how data is stored and retrieved to take advantage of recent developments such as big data, machine learning, and AI, and even in-memory databases.

The IT buzzwords data modernization and digital transformation go hand in hand. That’s because digital transformation can only happen if data storage and retrieval methodologies are a top (if not the top) organizational IT priority. This is called a data-first strategy, and it can reap big rewards for your business.

See, here’s the thing. If your data is restricted and trapped, you cannot use it effectively. If you and your team are always hunting for the data you need, or never see it in the first place, innovation will be stifled. But liberate that data, and it will open up new opportunities for you.

Not only that, but poorly managed data can be a time sink for your professional IT staff. Instead of working to move the organization forward through innovation, they spend time managing all these different systems, databases, and interfaces, and chasing down all the different ways they can break.

Modernizing your data not only means you can innovate; it also frees up your time to think rather than react. That, in turn, buys you time to ship more apps and features that can break new ground for your business.

Find the actionable value and insights hidden in your data

Modernizing your data and adopting a data-first strategy can be challenging, but technologies like cloud services and artificial intelligence can help. Cloud services provide on-demand infrastructure that can grow as more and more data is harvested. AI can help by providing tools that examine all that data and organize it coherently, so that professionals and business managers can take action.

But it’s still a big ask for most IT teams. Typically, an IT department didn’t set out to silo all that data. It happens naturally as more and more systems are installed and more and more to-do items land on people’s lists.

This is where management and infrastructure services like HPE GreenLake and its competitors can help. GreenLake offers a pay-per-use model, so you don’t have to pay for capacity ahead of need. With multi-application dashboards, multiple services, and a wide range of professional support, HPE GreenLake can help you turn your ubiquitous data into a data-first strategy.
