
Is implementing big data in healthcare worth the privacy risk?


There is a strong and convincing case for fully integrating big data analysis into the healthcare profession. Departments that are able to communicate with each other are often better able to help their patients; in short, more interconnectivity usually means a more efficient healthcare system. It won't be long before companies have mapped far more genomes than they currently have, producing a huge database of patients' clinical and genetic data. This should help doctors diagnose and treat diseases such as cystic fibrosis and heart conditions.

But the privacy issue is often overlooked. In the quest to streamline the whole process, the advent of big data analysis has thrown up some interesting questions about just how private our medical information really is. The worry is that what was once sacrosanct private medical information becomes a commodity traded freely between the healthcare industry and large corporations. The NHS has been reprimanded for illegally handing patient information to Google, and with massive fines available against any company that breaches the regulations, privacy will be at the forefront of general consciousness as we move into the age of big data analysis.

Companies that manufacture drugs in the UK have been fervent advocates of the possibilities that big data provides. They claim that, with proper analysis, the mountains of information available to the healthcare industry can not only improve overall patient health, but save the NHS money in the process.

The possibilities, if the healthcare industry successfully integrates big data into its business model, are obvious; but so are the potential risks. With stringent regulation, patients' lives could be made a lot better, their treatment made more effective, and money saved. Without such regulation, the risk that private information could end up with companies like Google will remain a stumbling block.

 

 


Large and small-scale uses of data collection in cities

Data collection on a large scale is developing, and its use in cities is increasing drastically. An American company is now providing businesses and cities with detailed, real-time data that aims to give cities deeper insight into the patterns of stationary and moving objects, in both indoor and outdoor environments.

The way this works is through a supercomputer running advanced neural networks, paired with an extensive range of wireless radios and an HD stereo camera. Through this, near-instantaneous scene analysis can deliver critical event notifications in real time.

The company can, however, ensure customer privacy, as video data is never streamed beyond the sensor itself: the footage is analysed on-site and immediately deleted from the embedded system. After this, the scene can be described but never reconstructed, protecting the privacy of the public.
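To make that design concrete, here is a minimal, hypothetical sketch of on-device analysis in which only abstract scene metadata ever leaves the sensor. The detect_objects function and the frame source are invented placeholders, not the company's actual software.

```python
# Sketch of "analyse on the sensor, discard the video": only event metadata survives.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SceneEvent:
    # Only abstract metadata ever leaves the device -- never pixels.
    timestamp: str
    object_type: str   # e.g. "pedestrian", "vehicle"
    count: int

def detect_objects(frame) -> dict:
    """Hypothetical stand-in for the on-board neural network."""
    return {"pedestrian": 3, "vehicle": 1}

def process_frame(frame) -> list[SceneEvent]:
    detections = detect_objects(frame)
    events = [
        SceneEvent(datetime.now(timezone.utc).isoformat(), obj, n)
        for obj, n in detections.items()
    ]
    del frame   # symbolic: the raw frame is dropped immediately, only events remain
    return events

if __name__ == "__main__":
    fake_frame = bytearray(640 * 480)   # placeholder for a camera frame
    for event in process_frame(fake_frame):
        print(event)
```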

This data collection can perform a variety of tasks, from estimating traffic flow and patterns to performing detailed, accurate pedestrian and object counts. The technology can monitor the precise trajectory of objects, moving or stationary, at any speed. It can be used as self-contained analysis, such as in shops, or combined to create a citywide field of view.

In terms of small-scale use, this product can be invaluable in shops, monitoring how long a customer dwells on a product and whether they go on to buy it. It also monitors how many people come and go from the shop, and whether they enter after spotting something of interest in the window.
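As a rough illustration of how dwell time might be derived from the kind of track data such a sensor could emit, the sketch below groups sightings by person and display and takes the span between first and last sighting. The track records are invented.

```python
# Dwell time per (tracked person, display) from first/last sighting timestamps.
from collections import defaultdict

# (track_id, display_id, timestamp_seconds) -- hypothetical output of a people tracker
sightings = [
    ("track-1", "window-display", 10), ("track-1", "window-display", 42),
    ("track-2", "shoe-shelf", 100), ("track-2", "shoe-shelf", 103),
]

first_last = defaultdict(lambda: [float("inf"), float("-inf")])
for track, display, ts in sightings:
    span = first_last[(track, display)]
    span[0], span[1] = min(span[0], ts), max(span[1], ts)

for (track, display), (start, end) in first_last.items():
    print(f"{track} dwelt at {display} for {end - start} seconds")
```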

Other small-scale uses are in car parks, where the number and location of parking spaces can be monitored and displayed, perfect for optimising parking utilisation. This type of monitoring, along with counting the number of people waiting at bus stops, can be a step on the road to creating true smart city applications.


Big Data could help save lives

Big data is viewed by much of the public with immense caution and fear, conjuring images of Big Brother watching your every move in an Orwellian nightmare. We are well versed in films, books and TV shows in which thought police control us and robots enforce the law of a tyrannical government. In reality, however, Big Data saves lives and is helping to build a better world. The problem is that restrictions on the availability of data, particularly in medical research, are seriously delaying its development.

One of the biggest problems caused by data restrictions is in the fight against cancer; researchers often pinpoint the lack of available data as one of the biggest challenges they face in their research.

The idea is that this data could be brought together and examined so that it becomes more than the sum of its parts, but it often sits in 'data silos' with no communication between them. It is common to find the genetic make-up of a patient's cancer cells in one silo and, in a completely separate and difficult-to-navigate database, the patient's clinical information, symptoms and response to treatment. These two pieces of information only gain real value when studied side by side, where they may reveal previously unknown causes of cancer and help towards the development of more effective treatments.
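As a toy illustration of why the silos matter, the sketch below joins an invented genomic silo to an invented clinical silo on a shared pseudonymous patient identifier; only the joined records let mutation and treatment response be studied together. All field names and values are made up.

```python
# Joining two data silos on a shared patient identifier.
genomic_silo = [
    {"patient_id": "P001", "mutation": "KRAS G12D"},
    {"patient_id": "P002", "mutation": "EGFR L858R"},
]

clinical_silo = [
    {"patient_id": "P001", "treatment": "chemotherapy", "response": "partial"},
    {"patient_id": "P002", "treatment": "targeted therapy", "response": "complete"},
]

def join_silos(genomic, clinical):
    """Combine the silos so mutation and treatment response can be studied together."""
    clinical_by_id = {rec["patient_id"]: rec for rec in clinical}
    joined = []
    for rec in genomic:
        match = clinical_by_id.get(rec["patient_id"])
        if match:
            joined.append({**rec, **match})
    return joined

for row in join_silos(genomic_silo, clinical_silo):
    print(row)
```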

This data availability problem is also harmful in other areas of medical research. For rare diseases, it would be far better if doctors could easily access and share each other's research. That would lead to a superior understanding of each disease and to an international body of research helping researchers and patients on the other side of the world. Through responsible and effective sharing of data over, in effect, a 'genomic internet', collaboration can help save lives.

Patients are also playing a role in the development of data sharing. For example, ROS1 is a gene that, when damaged, can give rise to cancer. Spurred on by a lack of progress, over 130 patients from 11 different countries came together and approached a foundation to develop a clinical trial. We are likely to see many more patient-driven approaches in the coming years, which promises to benefit rare disease and cancer research.

There has recently been a call for a change in policy, which would enable data sharing that will eventually increase survival rates and improve the quality of life of cancer patients. Patient involvement is at the heart of this work, giving the public a vital role to play in making this a reality. Only with the support of the public can Big Data be used for such a good cause worldwide.


Are police forces infringing on the privacy of the public?


Data acquisition is fast becoming a common and accepted practice at large companies: collecting data and information in order to build knowledge about users and so determine their interests, beliefs and, often, their location. This raises difficulties around privacy, and a breach can cause companies a lot of trouble. In recent years, the bodycam industry has been growing steadily, with usage increasing particularly in the police force.

Police forces are equipping more and more officers with bodycams, to record events and permit more data acquisition. Furthermore, the latest AI technology is being developed and installed to make all the videos captured by the bodycams searchable for data acquisition.

Specifically, a new branch of AI called deep learning is being implemented by the police, and promises to take the internet by storm. Using sophisticated data acquisition technology, it can save time sifting through mountains of often useless information and focus on what is genuinely pertinent to police investigations.
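As a hedged sketch of what "searchable footage" could look like once a model has tagged segments of video, the example below builds an inverted index from detected labels to timestamps. The detections are invented examples, not output from any real police system.

```python
# Inverted index: detected label -> list of (video, timestamp) moments.
from collections import defaultdict

# (video_id, timestamp_seconds, detected_label) produced by a hypothetical model
detections = [
    ("cam42_2023-05-01", 12, "vehicle"),
    ("cam42_2023-05-01", 95, "licence plate"),
    ("cam17_2023-05-01", 330, "vehicle"),
]

index = defaultdict(list)
for video_id, ts, label in detections:
    index[label].append((video_id, ts))

def search(label):
    """Return every moment of footage where the model reported the given label."""
    return index.get(label, [])

print(search("vehicle"))   # [('cam42_2023-05-01', 12), ('cam17_2023-05-01', 330)]
```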

This increase in the use of bodycams is supposed to increase police accountability, but there are issues of privacy to consider. How do we stop them becoming nothing more than mobile CCTV cameras? The data acquired in this way will have to be regulated so as to prevent privacy issues from arising. Exactly how police forces and other organisations will incorporate AI systems like these into their everyday strategy remains to be seen, but in terms of data acquisition, these systems are very much part of the future.


Should big data be regarded in the same way as a commodity, or as a national asset?

The big data industry is growing exponentially and is expected to reach $100 billion by 2025. As a consequence, trillions of dollars of value will be added to other industries. The problem in this new age of the digital economy is that each country has its own rules on data regulation, and the current global system hasn't adapted.

Three aspects have been identified as potentially problematic: the first is the right to privacy, the second is the absence of a legal framework suitable for processing big data, and the third is the lack of governmental controls. Current infrastructure, as mentioned, is insufficient for this new digital age; a new kind of digital platform provider may be required to keep up.

The solution is a common marketplace of shared, cross-industry digital platforms. And new training methods will certainly be needed in order to allow employees to properly and effectively deal with the inevitable challenges thrown up by managing big data.

The key to all of this will be investment in people, training and technology, so that the new digital economy can be managed properly. As the big data industry evolves, these issues will require solving, and new regulations will come into play.


Google is using data backlogs to track everything you’ve ever done

Every time we interact with any online site, data about us is stored. The more we interact with a website, the more information about us is retained. Google, one of the sites we use most, has begun to dredge through the vast quantities of data it stores about us in order to match credit card histories to things like browsing, location and advertising histories, amongst others.

Despite the mingling of data sets from different corporations, Google claims that your private data remains anonymous, but the mere idea that Google and Facebook combine their large reserves of data to profile people may come as an unwelcome shock to their many users. Some companies even mix offline data with online data; Facebook, for example, has signed deals with companies like Experian to do just that.

After accumulating this data, Google can create a super-profile of its users. This is valuable because it allows Google to dictate terms to the many advertisers who use its platform. Knowing the extensive viewing and purchasing histories of its users, Google has an idea of who might be interested in buying a certain merchant's product before the merchant does.

Key to this is having easy access to location data. Using Android, Google can track your movements, which is what gives it such an advantage when it comes to identifying potential customers. Location data allows Google to insert itself into every transaction.


Why is data privacy important?

It is vital for big data analysts to be transparent and open about the data they collect and how they use it, to keep the concerns of users at bay.
Unfortunately, the usage of data is not treated as the important privacy issue that it is. A data breach is the most common, and arguably most damaging, privacy mistake your company can make: it can reach the headlines and cause tremendous damage and embarrassment. Other pitfalls include discriminatory algorithms and illegal bias, inaccurate information caused by relying on fake news, and identity reverse engineering, which essentially means undoing anonymisation.
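To show what "undoing anonymisation" can mean in practice, here is a deliberately simple, fabricated example of a linkage attack: an "anonymised" health record is re-identified by matching quasi-identifiers (postcode, birth year, gender) against a public register. Every value in it is invented.

```python
# Linkage attack: re-identify an anonymised record via quasi-identifiers.
anonymised_health_data = [
    {"postcode": "SW1A", "birth_year": 1985, "gender": "F", "diagnosis": "asthma"},
]

public_register = [
    {"name": "Jane Example", "postcode": "SW1A", "birth_year": 1985, "gender": "F"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "gender")

def reidentify(anon_rows, public_rows):
    """Match rows whose quasi-identifiers coincide, re-attaching names to 'anonymous' records."""
    matches = []
    for anon in anon_rows:
        for person in public_rows:
            if all(anon[k] == person[k] for k in QUASI_IDENTIFIERS):
                matches.append({"name": person["name"], "diagnosis": anon["diagnosis"]})
    return matches

print(reidentify(anonymised_health_data, public_register))
# [{'name': 'Jane Example', 'diagnosis': 'asthma'}]
```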

Big data analytics can amass a huge amount of information which, if breached, can cause people serious problems. Your bank details, for example, are probably sitting in many databases, and it could be disastrous if someone else accessed them.

Next we come to the way information leads to knowledge; unknown companies holding knowledge about their customers can feel extremely unnerving. Information is where companies use the data they collect to understand users' behaviour. From this, knowledge can be gained by connecting the dots between different areas of a user's life, such as their personal interests, political and religious views, and shopping habits.

Over time, accumulated information and knowledge about a person lead to wisdom: an extremely personal profile of a user, cultivated over many years. People are often unaware of this and would be very uncomfortable to learn that someone knows so much about them. This is arguably the biggest problem faced by large data analytics companies.

To maintain a positive relationship with users, it is vital to be transparent and upfront about what, and whom, you analyse. Let your users know what your analytic capabilities are, broadly what you know about them, and why. If you cannot give an adequate reason for holding a piece of information, you should probably reconsider keeping it in order to avoid a scandal.
However, despite the importance of transparency, it is also important not to give away your strategic secrets – you are a competitive business, and if you give away too much information, your value disappears. Therefore, you must be transparent, but keep your vital strategic information to yourself. Try to explain what you do, rather than how you do it.

Primarily, you must let people know what you know and what you're capable of doing with that information. This won't make the privacy issue disappear, but over time, transparency will build a relationship with your users, and that trust is what will stop you from becoming the next scandal.


Socialising data increases both efficiency and productivity

Whilst self-service data analytics has many benefits, there are also a number of problems, most notably the lack of reusability of datasets. The cost of low-quality data is staggering, around $3.1 trillion according to IBM, whose studies also suggest analysts spend around half their time finding and correcting bad data. This is a waste of time and opportunity, and it stops big data from being as useful as it could be.

The problem lies in the fact that there are too many one-off data acquisition projects: analysts take too long to obtain and clean good data, and the result isn't saved. In other words, no one knows what anyone else has done. Other organisations then waste time trying to recreate previously available datasets, rather than delving into an already existing goldmine of data.

The solution is to approach data in a way that is more akin to social media. By socialising data acquisition, and integrating traditional approaches to self-service data with processes already common on social media platforms, we end up with high-quality, reusable datasets. This method provides operational repeatability and makes the overall data acquisition process quicker, easier and more efficient.
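As a rough illustration of the idea, the sketch below imagines a shared dataset catalogue with social-style metadata (descriptions, tags, reviews, reuse counts), so that a cleaned dataset is published once and discovered by others rather than rebuilt. The catalogue structure and field names are assumptions for illustration only.

```python
# A minimal "socialised data" catalogue: publish once, discover and reuse many times.
from dataclasses import dataclass, field

@dataclass
class CatalogueEntry:
    name: str
    owner: str
    description: str
    tags: list[str]
    reviews: list[str] = field(default_factory=list)
    reuse_count: int = 0

catalogue: dict[str, CatalogueEntry] = {}

def publish(entry: CatalogueEntry):
    catalogue[entry.name] = entry

def find(tag: str):
    """Discover existing datasets instead of recreating them."""
    return [e for e in catalogue.values() if tag in e.tags]

publish(CatalogueEntry(
    name="retail_footfall_2023_clean",
    owner="analytics-team",
    description="De-duplicated daily footfall counts, validated against till data.",
    tags=["retail", "footfall", "cleaned"],
))

for entry in find("footfall"):
    entry.reuse_count += 1   # reuse, rather than rebuild, the dataset
    print(entry.name, entry.reuse_count)
```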

By being more collaborative, we ensure that good-quality, trustworthy data is easily available to all. Filtering bad data for the useful material is yet another time-consuming process, and a reservoir of independently reviewed, useful datasets solves this problem. For data analysts, this leads to an increase in productivity, as they no longer have to spend inordinate amounts of time recreating old datasets. A more collaborative culture at a cross-organisational level will contribute to better business results.


Many sectors are falling behind because of the advent of big data, but housing isn’t one of them

In terms of data acquisition, technology is advancing at a rapid rate. So fast, in fact, that many industries are getting left behind. The housing sector, however, is managing to stay ahead of the curve by incorporating big data into its business plans. Some criticise the advance of technology for pushing traditional methods aside, but the general consensus is that the positives outweigh the negatives. Big data can provide agents with information that allows them to tailor a housing solution to each individual client's needs.

Appraisals are the first port of call when it comes to using big data. Traditionally a complex process, they can be simplified and streamlined by data analysis, which can reveal the overall worth of neighbouring properties and, in conjunction with other factors, determine the value of the property in question. This can also be used going forward: having an accurate value in mind helps a housing business decide whether potential projects are worth the investment. Insurance companies can also use big data to analyse the local area and make a more accurate determination of what cover people might need.
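As a toy version of a data-driven appraisal, the sketch below estimates a property's value from the median price per square metre of nearby comparable sales. The figures are invented and the model is deliberately simplistic compared with what a real agency would use.

```python
# Comparable-sales appraisal: scale the median price per square metre to the subject property.
from statistics import median

# Recent sales in the same neighbourhood: (sale_price, floor_area_m2)
comparable_sales = [
    (320_000, 80),
    (410_000, 100),
    (355_000, 90),
]

def estimate_value(floor_area_m2: float, comparables) -> float:
    """Median price per square metre of comparables, scaled to the subject property."""
    price_per_m2 = median(price / area for price, area in comparables)
    return price_per_m2 * floor_area_m2

print(round(estimate_value(95, comparable_sales)))   # rough appraisal for a 95 m2 home
```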

According to some studies, almost half of all houses sold for over $5 million are bought through shell companies, and there is a worry that these companies use the property for money laundering. Using big data to analyse patterns and monitor suspicious activity allows business owners to have better control over their organisation, and fewer worries about the potentially criminal intentions of any party interested in an expensive property.

If a property has been on the market for a long time and there is no obvious reason why, such as a bad review, then businesses can use big data to engage in targeted marketing. By studying housing trends in certain demographics and combining this information with data about the property in question, you can tailor your search criteria so that you target only the kind of person most likely to buy the property. This means fewer properties lingering on the market for extended periods of time.
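A hedged sketch of that targeting idea: score hypothetical buyer segments against a slow-selling property's attributes and market only to the best match. The segments, attributes and scoring rule are all invented for illustration.

```python
# Match a property listing against buyer-segment preferences and pick the best fit.
property_listing = {"bedrooms": 2, "garden": False, "near_transit": True}

buyer_segments = {
    "young professionals": {"bedrooms": 2, "garden": False, "near_transit": True},
    "growing families":    {"bedrooms": 4, "garden": True,  "near_transit": False},
}

def match_score(listing: dict, preferences: dict) -> int:
    """Count how many preferences the listing satisfies."""
    return sum(1 for key, wanted in preferences.items() if listing.get(key) == wanted)

best_segment = max(buyer_segments, key=lambda s: match_score(property_listing, buyer_segments[s]))
print(best_segment)   # -> 'young professionals'
```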


Keeping bad data can yield some surprisingly helpful information

Cleaning up bad data is one of the less romantic aspects of data science, but it is an unfortunate necessity. Yet in our haste to make our data as clean as possible, might we be overdoing it a little?

Obviously we all want our data to be free from mistakes; error-strewn information is of no use to anyone. But when cleaning it up, these bad inputs and outliers often get discarded without a second thought, when actually they can be useful. Isn't it better to understand where an error originates, so as to prevent it happening again, than simply to throw the bad result away?

When you get a bad reading, there is a myriad of possible explanations. From faulty equipment and inexperienced operators to localised anomalies, data can be negatively affected by all sorts of factors.

When these results come in and it is immediately apparent that the data is faulty, common practice is to remove it to stop it affecting the other results. But some data analysis companies have encouraged their employees to treat bad data as an outlier and to set it aside when collating the results; crucially, it isn't deleted immediately. These outliers can then be analysed to determine the problem at the core of the reading. So next time you have to cleanse your big data, keep hold of the bad results and use them to try to prevent more errors in the future.
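A small sketch of that workflow, with invented readings: instead of deleting suspect values, a robust outlier test moves them into a quarantine list that can be investigated later for the underlying cause (faulty sensor, operator error, local anomaly).

```python
# Quarantine outliers instead of deleting them, so their cause can be investigated.
from statistics import median

readings = [20.1, 19.8, 20.4, 87.5, 20.0, 19.9, -3.2, 20.2]

def split_outliers(values, threshold=3.5):
    """Separate plausible readings from outliers using a modified z-score based on the
    median absolute deviation, which a single bad reading cannot skew."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    clean, quarantined = [], []
    for v in values:
        score = 0.6745 * abs(v - med) / mad if mad else 0.0
        (quarantined if score > threshold else clean).append(v)
    return clean, quarantined

clean, quarantined = split_outliers(readings)
print("use for analysis:", clean)
print("investigate later:", quarantined)   # kept, not deleted
```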