Author unknown, originally published October 26th 2013
There has been a lot of talk lately about Big Data, and I was curious why companies like Google, Facebook, Twitter and Yahoo are investing so many resources into its development. Big Data is essentially the ability to process large and complex data sets. Right now the largest data sets being processed are measured in exabytes; an exabyte is a one with eighteen zeros behind it, while a gigabyte, which most of us are familiar with, is a one with nine zeros behind it. That seems like an enormous amount of data, but if you look at the exponential growth of information you realize that eventually even this will not be enough. The computing power required to process data at this scale is already enormous, and as of 2012 the world was creating about 2.5 exabytes of data every single day.
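To put those figures in perspective, here is a small, purely illustrative bit of arithmetic; only the 2.5 exabytes per day figure comes from the paragraph above, the rest is just unit conversion.

```python
# Illustrative scale check: how an exabyte compares to a gigabyte,
# and what 2.5 exabytes per day looks like in gigabytes.

GIGABYTE = 10 ** 9   # a one with nine zeros
EXABYTE = 10 ** 18   # a one with eighteen zeros

daily_data_2012 = 2.5 * EXABYTE  # the 2012 figure cited above

print(f"One exabyte equals {EXABYTE // GIGABYTE:,} gigabytes")
print(f"2.5 exabytes per day is about {daily_data_2012 / GIGABYTE:,.0f} gigabytes per day")
```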
Another way to look at Big Data is to compare the processing power of the fastest computers to the abilities of the human brain. Researchers recently compared the human brain to the fourth most powerful supercomputer in the world, a machine with roughly 82,000 processors; even then, the supercomputer could only mimic the brain's activity for a one-second period. As humans we are continuously bombarded with sensory information that needs to be interpreted, sorted and combined in a meaningful way before we make a decision, and we never think about how much data our brains are generating and processing. This is what still sets us apart from computers: computers are good at processing simple data sets, but when it comes to taking in a multitude of sensory inputs and combining them, they cannot compete with the human brain.
This is why resources are being allocated to bridging the gap between the abilities of the human brain and the data-processing power of computers. As that gap narrows, it opens up the possibility of new and improved technologies in the future. Our ability to stay connected to a multitude of people and to find information in a timely fashion all depends on the ability to process large amounts of data.
This is especially true within education, where students create a huge amount of data over the course of a school year that, if harnessed, would give us far greater insight into each student. Information like attendance, time spent on assignments, quality of assignments, education history, strongest and weakest subjects, and much more could all be analyzed together on a regular basis to give teachers insight or to personalize a student's learning.
The company Knewton, whose mission is to bring personalized learning to the world, has written about how it divides educational data into five types. It is these types that would then be analyzed by Big Data (a rough sketch of how they might be represented follows the list):
- Identity Data: who you are, what permissions you have, and general demographic information.
- User Interaction Data: How long does it take to answer an online question? How long are students logged on? What is their progress?
- Inferred Content Data: How does a student's answer compare to others'? How many students answer a question correctly? How beneficial is that question to learning? How can you measure learning if students are given this question?
- System-Wide Data: attendance, grades, student records and more. This is usually accessible to most teachers today.
- Inferred Student Data: What does this student really know? Why did they answer a question incorrectly? Was it a bad day or some other reason, given that on other days they showed they understood the concept?
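As a way of making the five categories a little more concrete, here is a minimal sketch of how they might be represented as record types. All class and field names are hypothetical illustrations for this post, not Knewton's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical records loosely mirroring the five categories above.
# Field names are illustrative only.

@dataclass
class IdentityData:
    student_id: str
    permissions: list[str]
    demographics: dict[str, str]

@dataclass
class UserInteractionData:
    question_id: str
    seconds_to_answer: float
    session_minutes: float
    progress_percent: float

@dataclass
class InferredContentData:
    question_id: str
    percent_answered_correctly: float   # across all students
    estimated_learning_value: float     # how beneficial the question appears to be

@dataclass
class SystemWideData:
    attendance_rate: float
    grades: dict[str, str]
    student_record_notes: list[str]

@dataclass
class InferredStudentData:
    concept: str
    estimated_mastery: float                       # 0.0 to 1.0
    likely_cause_of_errors: Optional[str] = None   # e.g. "bad day" vs. "misconception"
```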
Many of today’s learning analytics tools take only user interaction data and combine it with simple rules to create personalized learning. The introduction of Big Data will change this, because it will also allow inferred content data or inferred student data to be processed, depending on the situation.
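To illustrate what "user interaction data plus simple rules" might look like in practice, here is a minimal hypothetical sketch. The thresholds and the recommend_next function are invented for this example and do not come from any particular product.

```python
# A hypothetical "simple rules" personalization sketch: it looks only at
# user interaction data and applies fixed thresholds.

def recommend_next(seconds_to_answer: float, answered_correctly: bool) -> str:
    if answered_correctly and seconds_to_answer < 30:
        return "skip ahead to a harder question"
    if answered_correctly:
        return "offer another question at the same level"
    if seconds_to_answer > 120:
        return "show a worked example before the next question"
    return "offer an easier question on the same concept"

print(recommend_next(seconds_to_answer=20, answered_correctly=True))
# -> "skip ahead to a harder question"
```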
Big Data is definitely linked with learning analytics, because it increases the amount of data that can be analyzed at any one time. For example, instead of analyzing each question on its own, you could analyze a student’s entire academic profile and all of their work to determine their level of understanding. As computing power and the ability to process data increase, the uses for Big Data within an education system will only be limited by how we implement it. It will also have a great impact on mobile technology, as the information available at our fingertips will only increase.
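As a sketch of what analyzing a whole profile, rather than a single question, might look like, here is a toy example that blends several of the data types above into one estimate. The weights, inputs, and function name are entirely hypothetical.

```python
# Hypothetical sketch: estimate understanding of a concept by blending
# several data sources instead of a single question. Weights are invented.

def estimate_understanding(recent_quiz_scores: list[float],
                           assignment_quality: float,
                           attendance_rate: float) -> float:
    """Return a rough 0-1 estimate of a student's understanding."""
    avg_quiz = sum(recent_quiz_scores) / len(recent_quiz_scores)
    return 0.6 * avg_quiz + 0.3 * assignment_quality + 0.1 * attendance_rate

print(estimate_understanding([0.8, 0.9, 0.7], assignment_quality=0.85, attendance_rate=0.95))
# -> 0.83
```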
References:
Big Data. (n.d.). Retrieved October 26, 2013, from Wikipedia: http://en.wikipedia.org/wiki/Big_data
Ferreira, J. (2013, July 18). Big Data in Education: The 5 Types that Matter. Retrieved October 26, 2013, from Knewton: http://www.knewton.com/blog/knewton/from-jose/2013/07/18/big-data-in-education/
Funge, J. (2013, October 1). Why the big data systems of tomorrow will mirror the human brain of today. Retrieved October 26, 2013, from VentureBeat: http://venturebeat.com/2013/10/01/why-the-big-data-systems-of-tomorrow-will-mirror-the-human-brain-of-today/
I have a colleague who works with learning analytics — within a large institution like UBC, it sounds like there are political, bureaucratic, and of course technical hurdles to overcome before data can even be collected. Once this happens though, I’m told there’s a lot to be excited about…
Originally posted by @jfouellet Jan 11 2016
As presented, I believe there is a lot of potential in this kind of information for a learning continuum. The assessor comes to better understand their pupil and can provide stronger assessment and better feedback. The evaluation becomes customized for the learner and brings many more opportunities for success.