Electronic Clinical Quality Measures and Data Integrity

Over the years, as the quantity of data in the healthcare industry has grown, along with the ways to access it, the need to maintain the quality of that data has grown as well. This has led to the discussion of Electronic Clinical Quality Measures (eCQMs) and how they can be implemented. One of the government agencies responsible for regulating quality is the Centers for Medicare and Medicaid Services (CMS). CMS has implemented quality reporting programs for healthcare organizations, and a decision to take part in these programs meant that providers had to implement time- and labor-intensive processes to collect, organize, and submit data per the specific requirements of each program. These requirements can weigh heavily on healthcare organizations and practitioners as they work to fulfill each one. As the quality reporting programs grew in number, so too did calls from the healthcare community for relief from the reporting burden and for better program alignment. The introduction of the Electronic Health Record Incentive Program (Meaningful Use) in 2011 marked a shift toward reporting these measures electronically, directly from EHR data.

Understanding Data and Maintaining Its Quality

When it comes to labeling data for storage and easy access, there are two main types of labels that can be used: variable labels and value labels. There are, of course, numerous other ways data can be labeled, but these two are the focus here.


Variable Labels: Variable labels are composed of a few words that describe what a variable represents. If variable labels are properly formatted when the data is stored, they appear in output tables and graphs instead of raw variable names, which makes the output easier to interpret.

Value Labels: Value labels are labels for the codes of a coded variable in a dataset. For example, gender can be coded as 0 for males and 1 for females, or a yes/no item can be coded as 0 for yes and 1 for no.
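As an illustration, the minimal Python sketch below attaches variable and value labels to a coded dataset using pandas. The column names, label text, and codings are hypothetical placeholders, not from any real dataset.

```python
import pandas as pd

# Coded data: the 0/1 values are meaningless without labels
df = pd.DataFrame({"gender": [0, 1, 1, 0], "smoker": [1, 0, 1, 1]})

# Variable labels: a few words describing what each variable represents
variable_labels = {
    "gender": "Patient gender at registration",
    "smoker": "Current smoking status (yes/no)",
}

# Value labels: what each code means for a given variable
value_labels = {
    "gender": {0: "Male", 1: "Female"},
    "smoker": {0: "Yes", 1: "No"},
}

# Apply value labels so output shows readable text instead of codes,
# then rename columns so tables and graphs show descriptive labels
labeled = df.replace(value_labels).rename(columns=variable_labels)
print(labeled)
```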

Keeping Track of Data Quality in Storage

When data is being stored, there is a need to maintain its quality while it is in storage so it is not compromised. Below we discuss some ways this can be done.


Use file directory structures to keep relevant files together


File directories help keep data structured and layered. One approach is to organize data files and their corresponding syntax and output files in directories grouped by outcome. When you organize by outcome variable, you can keep the raw data in its clean, original form while the final dataset remains easy to locate for analysis of the output. Organizing directories by outcome may not be your personal choice, and that is fine; many other structures work, as long as each set of analyses sits alongside its corresponding data and output files.
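For illustration, the short Python sketch below builds a hypothetical outcome-based layout with pathlib. The outcome names and subfolders are placeholders, not a prescribed structure.

```python
from pathlib import Path

outcomes = ["readmission", "mortality", "length_of_stay"]
subfolders = ["raw", "clean", "syntax", "output"]

project = Path("project")
for outcome in outcomes:
    for sub in subfolders:
        # Creates e.g. project/readmission/raw, project/readmission/output, ...
        (project / outcome / sub).mkdir(parents=True, exist_ok=True)
```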

Split large data sets into smaller relevant ones

When you split up a data analysis, there can be many outcomes, often measured on different scales. Each outcome may involve many variables, and these can be broken into smaller datasets instead of one enormous dataset that ends up being unmanageable. Each smaller set can then carry the unique variables and conditions that best fit the data it contains. This strategy can be particularly helpful when you are running secondary data analysis on a large dataset. When splitting large datasets, identify which variables are common to all analyses and which are unique to a single model.
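The hypothetical pandas sketch below illustrates the idea: a shared set of identifier variables is carried into every subset, while each analysis file gets only its own variables. The file and column names are assumptions for the example.

```python
import pandas as pd

big = pd.read_csv("full_dataset.csv")  # hypothetical large source file

# Variables common to every analysis (identifiers, demographics)
common = ["patient_id", "age", "gender"]

# Variables unique to each analysis/model
analysis_vars = {
    "readmission": ["readmit_30d", "discharge_unit"],
    "mortality": ["died_in_hospital", "icu_days"],
}

# Write one smaller, self-contained dataset per analysis
for name, unique in analysis_vars.items():
    big[common + unique].to_csv(f"{name}_data.csv", index=False)
```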


Data Manipulation Using Syntax

Data types sometimes take forms and parameters that are quite specific, and other times they do not. Some data types are written as unquoted upper-case keywords, and some are not. Each known data type defines how many parameters it accepts, what values those parameters take, and the order in which they must be given. Some abstract types require parameters, and most types have optional parameters available.
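As a concrete illustration of parameterized types, the sketch below uses SQLAlchemy column types in Python. The table and parameter values are hypothetical, but they show how each type fixes the number and order of the parameters it accepts.

```python
from sqlalchemy import Column, Integer, MetaData, Numeric, String, Table

metadata = MetaData()

patients = Table(
    "patients",
    metadata,
    Column("patient_id", Integer),       # takes no parameters
    Column("last_name", String(50)),     # one optional parameter: maximum length
    Column("weight_kg", Numeric(5, 2)),  # two ordered parameters: precision, scale
)
```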

When making changes to data, it is better to document the changes as you go rather than leaving the documentation for later. It may take more time up front, but it will save you a lot of time in the long run, and it helps keep the integrity of the data intact. Leaving documentation for later makes your mistakes harder to find and might lead to an overhaul of the entire dataset. There is also the versioning trap: at each sitting you have to save the data to keep your changes, and because you do not feel comfortable overwriting the data, you create a new version. Do this each time you clean data and you end up with dozens of versions of the same data. A few strategic versions can make sense if each is used for specific analyses, but if you have too many, it gets incredibly confusing which version of each variable is where.
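One way to keep the documentation inline is to let the cleaning script itself record each change, as in the minimal Python sketch below. The file names, columns, and cleaning rules are hypothetical; the point is that the syntax doubles as a log, and only one strategic, clearly named version gets saved.

```python
import pandas as pd

df = pd.read_csv("raw_data.csv")  # hypothetical raw input

# Change 1: drop rows with no patient identifier, and record how many
before = len(df)
df = df.dropna(subset=["patient_id"])
print(f"Dropped {before - len(df)} rows missing patient_id")

# Change 2: recode implausible ages (outside 0-120) to missing
# rather than deleting the record
df["age"] = df["age"].where(df["age"].between(0, 120))

# Save one strategic, clearly named version instead of dozens of ad-hoc copies
df.to_csv("clean_data_v1.csv", index=False)
```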

Data Quality and Digital Change and Innovation

Organizations wanting to raise their competitiveness in the digital field often face a variety of functional challenges. These businesses need to answer questions in three areas:

Strategy: How can companies apply technologies that expand how they use their data, and take advantage of technological advances, to provide the best products, services, and business models?

Operations and Processes: How can companies apply digital technologies to drive innovation, leveraging new tools, platforms, and processes to turn insights into new products and services?

Organization: How can companies transform themselves into digitally capable organizations and cultures that can bring digital innovations to market and make them work?

