Over the past few decades, society has become largely dependent on technology, both professionally and personally, and as technology continues to advance at a rapid pace, organisations are often required to keep up with these advancements in order to stay relevant. The constant emergence of new technologies creates opportunities for organisations to gain the upper hand on their competitors by implementing them in ways that their customers find innovative and useful. There are, however, factors that need to be considered: as much as we would like our favourite brands to incorporate the newest and coolest emerging technologies, these organisations must first assess and evaluate the impact and risks associated with such an implementation. That will be the primary focus of this article; throughout, I will discuss and evaluate the risks related to implementing a new computer system, the impact of developments in computing over the last ten years and the likely impact of emerging technologies.
Explanation
of the impact that developments in computing have had on an organisation.
We have reached a point in time where technology is used within many aspects of life, with the majority of educational, personal and professional activities relying on some form of computing. This level of dependency has come along with the constant development of new hardware and software tailored to suit the needs of consumers over the years. It was not too long ago that computers were a high-end luxury item that only the wealthy possessed; as time has gone on, however, technology has become far more accessible to the average consumer. The ever-increasing use of computers and other technology comes mostly from the fact that with each development there are more tasks that can be performed, or older tasks become easier. When personal computers first became available they had little functionality and were not of much use to those who were not interested in computing or did not require one for their job.
In order to get the most out of the devices we use in computing, there is a need to constantly improve the tools that we rely on. Hardware and software are the two primary components that make virtually all technology work, and advancements in one field often do not mean much if the other is not moving along at the same pace. Creating a sophisticated program is a great achievement, but it means almost nothing if there is no hardware available to run it. Likewise, creating the most capable computer means very little if there is no software capable of utilising its power. This is the reason that both areas tend to advance together at a steady rate: as file sizes increase, storage capacity on drives increases; as power consumption increases, battery capacity increases; and so on. The improvements in both of these areas have caused a number of trends to shape the direction in which computing is heading, with one of the main directions being mobile computing.
Laptops, tablets and mobile phones are all examples of the way in which people carry very powerful computing devices around in their daily lives. Developments in hardware have allowed manufacturers to condense physical components so that they fit into portable devices while still performing to the same or similar standards. Similarly, software advancements have allowed programs to be optimised for portable devices; application and battery management have had to be improved to allow devices to be used for a significant amount of time without power coming from an outlet or external battery source. The focus on mobile computing has had a positive effect on productivity in a number of areas. Previously, many jobs required staff to be in the office to access company files and complete their work, but this is where the improvement of software and hardware has changed the work environment. More frequently than ever you will now see people using laptops and other portable devices on trains, planes and in cafés to complete their work, because these devices have the same capabilities as those found in a traditional office. Cloud computing, remote access and virtualisation are just a few of the computing techniques that can be utilised to transform a laptop into a fully-fledged desktop capable of completing even the most demanding of tasks. In addition to this, high-end laptops with the latest developments in hardware and software are able to handle very intensive applications such as those used for video editing or 3D modelling.
A huge development in the world of computing came with the introduction of the cloud as a way in which we are able to store, access and distribute information. Cloud computing provides a valuable way for software to be distributed and accessed over the internet, and it has also opened up new opportunities for collaborating with people in other parts of the world. Both for personal and professional use, cloud technology allows organisations to become more flexible in the way that they work. With the potential that cloud computing provides, staff are no longer limited to a single location when looking to access certain files or upload data to a company server. There are, however, security concerns regarding cloud computing, specifically when it involves the transfer of sensitive information such as that of customers and staff. The constant transfer of data over networks provides cybercriminals with the opportunity to intercept and alter data before it reaches its destination. These security risks are part of the reason that many organisations have chosen to mix the use of onsite networks and cloud-based networks, a technique that is often referred to as hybrid cloud computing. This method of storing data allows organisations to become more flexible in the way that they store data: sensitive data can be kept locally on site, whereas less sensitive data can be stored in either a private or third-party cloud.
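As a minimal sketch of how that split might be applied in practice, the snippet below routes records to an on-site store or a cloud store based on whether they contain sensitive fields. The field names, categories and storage targets are invented purely for illustration and are not taken from any particular organisation's setup.

```python
# Hypothetical sensitivity-based routing for a hybrid cloud set-up; the field names
# and storage targets below are illustrative assumptions only.
SENSITIVE_FIELDS = {"customer_name", "card_number", "home_address", "salary"}


def choose_store(record: dict) -> str:
    """Route a record to on-site storage if it contains sensitive fields,
    otherwise to the (private or third-party) cloud store."""
    if SENSITIVE_FIELDS & record.keys():
        return "onsite"
    return "cloud"


records = [
    {"customer_name": "A. Shah", "card_number": "XXXX-1234"},   # sensitive
    {"product_id": 42, "stock_level": 310},                     # non-sensitive
]

for record in records:
    print(choose_store(record), record)
```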
Despite the power and sophistication of the hardware and software currently in use, there will come a time when it is looked upon as primitive, as is the case with the majority of technology. In order to truly take advantage of devices, they need to remain up to date with the latest releases as they become available. Updating software applications or operating systems is a fairly simple task, as the update will often be pushed by the developer and will rarely require much work from a client perspective. With the exception of complete overhauls, software updates will also look to avoid compatibility issues by ensuring the new version is able to run on hardware capable of running its predecessor. Unfortunately, the same cannot be said for hardware; developments in this area tend to be less frequent in comparison to software and are also less likely to be free. Whereas software is often updated, hardware is upgraded; this means that rather than changes being made to an existing product, a new and improved product will be released for purchase. As software applications become more sophisticated and demanding, organisations should ensure that they upgrade the hardware they use in order to maintain compatibility and performance levels throughout all of their systems. Many organisations, when purchasing new hardware, will look to plan for the future by purchasing components and systems that exceed the minimum requirements of the software they use; this way they are not required to replace their systems every time they need to update or upgrade their software. Future-proofing systems is one of the key ways organisations can plan for the future whilst also attempting to keep costs low, which is usually one of the primary goals.
Naturally, the majority of organisations will have competitors who operate within the same area or provide similar products and services, and for this reason it is important for them to put effort into gaining, maintaining and potentially improving the competitive edge over organisations in the same field. Due to the ever-increasing role that computing plays in even the most basic business operations, ensuring the systems in place are up to date and as efficient as possible is key to maintaining or gaining the lead on potential competitors; the speed at which markets are changing means that other organisations can be eager to utilise computing in new ways. Online retail is a prime example of the way in which market demands have changed how organisations operate and market themselves whilst also increasing the amount of resources dedicated to computing. The convenience and easy access of internet-enabled devices has been one of the primary reasons for the increasing popularity of online shopping among consumers and, in most cases, organisations too. Many of us are used to the fact that we can shop from the comfort of our own house and get next-day delivery, so when an organisation chooses to sell items online it is not seen as using technology to take advantage of new markets; this, however, was not always the case.
It was not too long ago that purchasing items online, as opposed to visiting a retail store, was a novelty, and there was once a time when choosing to sell items online carried a much greater risk and required far more foresight into what e-commerce could develop into. Amazon is a prime example of a company that used the development of new technologies to take advantage of new markets and opportunities, as launching a marketplace that had no physical stores was very unusual at the time. Fast forward to the present day and the internet is flooded with online-only stores as the presence of online shopping becomes harder to ignore, and due to their farsightedness companies such as Amazon and eBay, who chose to embrace online shopping early, are worth billions in the current economy.
Despite the growth of online shopping, many organisations understand that traditional shopping in physical locations still has a relevant place in society. Understanding the balance between the two has allowed a number of retailers to reap the benefits of both avenues to maximise profits; the diversity of consumers in this day and age means that catering to the most people requires options. Physical stores are still the preference for a number of people, as technology can often seem cold and unforgiving, whereas stores with human staff members who are able to assist are perceived to be more customer friendly. Returns and product issues are also areas in which online retail has not yet been able to compare to physical locations in terms of ease for the average consumer, and the comfort of human interaction is often the reason people choose to visit physical locations.
Ensuring that their operations are cost effective is key to the success of organisations, and as developments in technology continue to progress they have a knock-on effect on the cost of certain systems. As new products and services become available, previous iterations decrease in value and therefore do not require the same cost of upkeep; organisations will measure their costs against their requirements and calculate where technology can be used to cut costs without compromising quality or company values. Customer service is a common area in which technology can be used to cut costs whilst also improving the way in which customers communicate with the organisation. Webchats, video calls and automated services are just a few of the developments that have allowed customer service to thrive whilst also maintaining a reasonable cost.
Automation is another way in which a wide variety of organisations work to control costs when providing a number of services; the increased functionality of technology has allowed automated machines to exceed humans in both productivity and precision. Automation has been used frequently in a number of industries, and the constant improvement of new technologies has allowed the process to become more efficient since its original inception.
Explanation of the likely impact of an emerging technology on
organisations
Whether we like it or not, technology and computing have reached a point at which developments and emerging technologies are appearing at a rapid rate. Devices ranging from mobile phones to televisions are being released every day, each one incorporating features that were not included in the last iteration, and more often than not they are now all connected.
The Internet of Things (IoT) refers to the ever-growing network of physical devices that are connected through networks, with each of them possessing its own IP address to make the connections possible. Fridges, heating systems and washing machines are all devices or systems that are often found in the common household, and traditionally they have not required the internet or any form of network connectivity to function; however, that is beginning to change. Smart features within traditional household items are becoming more and more popular among consumers as the features improve, and many devices are now at the point at which people can control various functions of one device from a mobile device or other central location.
IoT can provide a number of opportunities from the perspective of an organisation, and although it is a fairly new development, many have already looked into small ways in which it can be used to improve productivity. In its most basic form, IoT can improve office activities by allowing staff to alter the temperature from their desk or boil the kettle without having to leave their seat; simple activities that, although fairly quick, distract people from the tasks that relate to their job. On a larger scale, a farmer could use such technology to monitor weather activity and trigger watering systems when the crops require it, as sketched below. Not only does this sort of method increase productivity by freeing up manpower for other jobs, the precision of technology allows tasks to be performed more reliably.
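As a rough illustration of the farming scenario above, the sketch below shows how a simple controller loop might poll a soil-moisture sensor and open a watering valve when the reading drops too low. The sensor and valve functions are simulated placeholders rather than any specific product's API, and the threshold values are invented for the demonstration.

```python
import random
import time

# Hypothetical thresholds for this sketch; real values depend on the crop and sensor.
MOISTURE_THRESHOLD = 30.0   # percent soil moisture below which watering starts
CHECK_INTERVAL = 5          # seconds between readings (shortened for the demo)


def read_soil_moisture(sensor_id: str) -> float:
    """Simulated sensor reading; a real device would report over MQTT, HTTP, etc."""
    return random.uniform(10.0, 60.0)


def set_valve(valve_id: str, open_valve: bool) -> None:
    """Simulated actuator call; a real system would send a command to the valve."""
    print(f"{valve_id}: {'OPEN' if open_valve else 'CLOSED'}")


def irrigation_cycle() -> None:
    # Open the valve only while the soil is drier than the threshold.
    moisture = read_soil_moisture("field-1")
    print(f"field-1 moisture: {moisture:.1f}%")
    set_valve("valve-1", open_valve=moisture < MOISTURE_THRESHOLD)


if __name__ == "__main__":
    for _ in range(3):          # a few cycles for demonstration
        irrigation_cycle()
        time.sleep(CHECK_INTERVAL)
```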
As with most technological advancements, there is the concern that weak security among such devices may invite unwanted outside interference, especially when they communicate with critical systems. The risk of someone gaining access to an internet-enabled fridge is very low, but systems that provide more important services may become a target. In addition to access to the system itself, the data collected by these devices is an attractive target for those looking to take advantage of such technology. A large amount of data can be gathered simply from monitoring the activity of someone in their own home, data which is extremely valuable to organisations that provide home-based services, such as electricity providers.
Generally speaking, IoT devices are in their infancy in terms of development and applications, and the security concerns surrounding the data they collect mean that they are not yet fully accepted by many consumers and organisations. In order to progress, these security concerns will need to be addressed; once they are, IoT devices have the ability to open up a world of opportunities for integration between devices. Many smart home devices, such as the Google Home or Amazon Alexa, have already made significant advancements in such areas. These forms of digital assistant no longer help only with digital tasks such as updating a calendar or sending a message; by integrating them with other networked devices, they are able to control heating or other electronic systems.
From the perspective of an organisation, another area in which significant advancements are being made is automation, which at present is one of the fastest-moving areas of computing. The ever-expanding range of opportunities that comes with the concept of automation is something that can be very appealing to a number of companies. The industrial revolution provided the foundations of the idea that machines would be able to perform tasks that once required humans to complete them, and in modern times that is still the case. Machines are now able to make use of a number of technologies, including robotics and artificial intelligence, to carry out tasks more efficiently than a human. Not only this, but the precision of the machinery used in modern factories means that repetitive tasks, such as the mass production of goods, are less likely to produce faulty products. Whilst the use of industrial robots and other related technologies is very promising in terms of the capability on offer, they are still relatively expensive to get up and running, and they also require highly trained staff to develop, implement and maintain the systems. Whilst using such systems removes the human element in one way, it also provides more opportunity when it comes to the introduction of higher-skilled job roles.
In addition to robotics that work independently from humans, there is also the concept of using them in combination with humans. Exoskeletons are an example of an idea that has received significant attention for the possibilities it provides when attempting to enhance human abilities through the use of robotics. Originally conceptualised for military applications, exoskeletons come in a number of shapes and sizes and can respond to a number of different input methods, such as speech or movement, to aid in circumstances where human strength or accuracy is insufficient. The interest in such technologies has also meant that new applications for exoskeletons have become more promising, with the health sector, for example, looking into using them to help people who suffer from mobility issues or paralysis.
Analyse
the risks related to implementing a new computer system in an organisation.
In the age that we are living in, data is everything; organisations thrive on gathering information about consumers to better understand how to provide services and products that will appeal to their audience. Big Data is a term used by many organisations to refer to data sets that hold huge amounts of data, of such a size that they are too large for normal processing applications to handle. This data can consist of practically anything relating to anyone or anything and can be gathered in a variety of ways; information on locations, ages or addresses is just the tip of the iceberg when it comes to the amount of data that could be held on one person.
In order to store such high volumes of data, specialist software is required to house it; technologies used to do this are referred to as data warehouses. Different to databases, which are most commonly used to store data from a single location, data warehouses are used to store huge amounts of data, parts of which could have been gathered from different places such as online activity, surveys or other market research. As there is so much information held in these data warehouses, specific techniques are required to retrieve it when needed; this is referred to as data mining, which is defined as the practice of examining large pre-existing data sets in order to generate new information. Data mining tools allow enterprises to predict future trends by analysing the existing data.
The use of current technologies has allowed data to be captured in new and sometimes concerning ways. Financial transactions, social media posts and search engine queries are just a few of the ways in which data can be gathered on consumers, some of which are seen as intrusive by a number of people. The data gathered, however, is used to allow companies to formulate new products and services to meet consumer needs, and forecasting new trends is one of the primary uses for big data sets. Another use is to analyse the success of, and consumer opinion on, services or products that have previously been available and assess whether or not it is worthwhile to continue them.
The primary features of big data include the three Vs (volume, velocity and variety) as well as storage and processing. Volume simply refers to the quantity of data that is being generated and stored; the fact that data can now come from so many places means that there is more of it to store, which leads into the next V, variety. As stated, the data being collected is coming from an increasing number of sources, and therefore new data is being captured that differs from many of the traditional sources that have been in use. The third and final V, velocity, refers to the speed at which data is generated. Due to the variety of forms through which data can be collected, data is being generated and collected faster than ever, and as it is generated faster it is also received faster. The internet has allowed data to be collected, sorted and stored almost in real time, whereas this would have taken much longer in previous years.
Due to the huge amount of data that can flood in at such high speed, it is not hard to see where there may be issues when it comes to storing such large quantities of data. As the amount of data being stored is too much for conventional applications such as Microsoft Excel or Access, there are specialist technologies developed specifically for holding and sorting through big data sets. An example of such technology is Apache Spark, a framework that features built-in modules for streaming, machine learning, graph processing and SQL support, all of which make it one of the more prominent tools for big data processing. With support for the majority of languages used for big data, including Python, Scala and R, it has been described as a fast and general engine for big data processing.
Apache Spark also features technologies that allow it to be deployed either in an onsite data centre or in the cloud as an alternative. Use of the cloud gives organisations the ability to use the software without the need to acquire and set up the necessary hardware, which can be not only expensive but also time-consuming. As with most cloud-based software there are drawbacks, such as relying on a solid internet connection and the associated security risks, but in many cases the benefits far outweigh the negatives. Not only does it allow costs and time constraints to be reduced, it also allows for better access to data from different locations and quicker access to new features and functionality that arrive in the form of updates. Once it is up and running, an application such as Apache Spark can be used to process all of the data that has been gathered and sort through it as a means of understanding trends. Once connections and links have been made between different groups of data, they can be used to evaluate and predict the likelihood of future outcomes.
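As a minimal sketch of what such processing might look like, the PySpark snippet below loads a hypothetical CSV of sales records and aggregates purchases per product and region to surface trends. The file path and column names are illustrative assumptions, not part of any particular organisation's setup.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start a local Spark session; in production this would point at a cluster or cloud deployment.
spark = SparkSession.builder.appName("SalesTrends").getOrCreate()

# Hypothetical input: a CSV of sales records with 'product', 'region' and 'amount' columns.
sales = spark.read.csv("sales_records.csv", header=True, inferSchema=True)

# Aggregate spending per product and region to surface the best-selling combinations.
trends = (
    sales.groupBy("product", "region")
         .agg(F.count("*").alias("purchases"),
              F.sum("amount").alias("total_spent"))
         .orderBy(F.desc("total_spent"))
)

trends.show(10)   # inspect the top ten product/region combinations
spark.stop()
```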
Similar to the data itself, data warehouses have a number of key features of their own to ensure that data sets can actually be used, as opposed to being large amounts of useless information. The first feature we will look at is subject orientation, which seeks to apply some form of logic to the data gathered within data sets. When it is initially collected, more often than not the data will be retrieved in a manner that has no discernible pattern. Subject orientation allows the data to be stored and ordered by a defined topic or theme so that, when the time comes, organisations are able to analyse the information much more easily.
Another feature that is put in place both to ease the task of analysing data and to improve performance is denormalisation, which is the process of grouping data together or adding redundant data to boost performance when a query is run. The grouping of data within a table can assist in speeding up analysis and improving performance, because when a search is performed it is carried out over specific, defined parts of the data set as opposed to searching through all of the data. The grouping can be done in a number of ways and can link various pieces of data; for example, people who purchased the same product or service could be grouped together so that a search could be run on data pertaining to these specific people.
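To make the idea concrete, the short pandas sketch below pre-joins a hypothetical customers table and orders table into one denormalised table, so that a later question such as "who bought this product?" can be answered without repeating the join. The table and column names are invented purely for illustration.

```python
import pandas as pd

# Hypothetical normalised tables: customer details and their orders held separately.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "name": ["Asha", "Ben", "Cara"],
    "city": ["Bristol", "Leeds", "Cardiff"],
})
orders = pd.DataFrame({
    "order_id": [10, 11, 12, 13],
    "customer_id": [1, 1, 2, 3],
    "product": ["kettle", "toaster", "kettle", "lamp"],
})

# Denormalise: join once and store the redundant customer columns alongside each order,
# so later queries do not need to perform the join again.
denormalised = orders.merge(customers, on="customer_id")

# A query over the pre-joined data: everyone who bought a kettle.
kettle_buyers = denormalised[denormalised["product"] == "kettle"]["name"].unique()
print(kettle_buyers)
```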
The next feature of data warehouses is non-volatility, which is in place so that organisations can be sure that their data will not be lost should something along the lines of a power outage take place. Non-volatile storage refers to a storage medium that prevents the loss of data in the event that the flow of power is switched off or interrupted; it is the opposite of volatile storage, which loses all data when switched off, RAM being a prime example. Ensuring that there is a storage medium that is not volatile helps to ease the worries of organisations and allows them to be confident that their data is stored correctly and safely.
On the subject of data storage, historical data is increasingly being retained by organisations so that it is readily available should it be needed at any point in the future. By law, organisations are required to retain some data for certain periods of time, which has historically required a large amount of space. We are now at a point at which technical advancements have allowed historical data to be kept without taking up as much room as it once did. Analysis of historical data can also be useful to organisations in understanding previous trends and looking into the results of past actions that proved successful.
The use of queries is a very common way for organisations to sort and analyse the data that they have collected. There are two kinds of query that can be run. A planned query involves applying a series of predefined steps to the data for the purpose of locating the best data for a task. The second type is an ad-hoc query, which is generated as and when the need presents itself; ad-hoc queries use a set of parameters given by the user and then return the best result based on the given task. Both forms of query are commonly used by organisations when analysing data, with the difference between the two being primarily that one is planned in advance and the other is built when required, as the example below illustrates.
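The small sketch below contrasts the two kinds of query using Python's built-in sqlite3 module and an invented in-memory sales table: a fixed, predefined report stands in for a planned query, while a parameterised function built around a user-supplied value stands in for an ad-hoc query. The table and values are illustrative only.

```python
import sqlite3

# In-memory database with a small invented sales table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (customer TEXT, product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("Asha", "kettle", 25.0), ("Ben", "lamp", 40.0), ("Cara", "kettle", 27.5)],
)

# Planned query: a fixed, predefined report run the same way every time.
PLANNED_REPORT = "SELECT product, SUM(amount) FROM sales GROUP BY product"
print(conn.execute(PLANNED_REPORT).fetchall())

# Ad-hoc query: built around a parameter the user supplies as the need arises.
def ad_hoc_spend(min_amount: float):
    return conn.execute(
        "SELECT customer, amount FROM sales WHERE amount >= ?", (min_amount,)
    ).fetchall()

print(ad_hoc_spend(26.0))   # e.g. find purchases of at least 26
```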
The last primary feature of a data warehouse is the ability to control the data load. When approaching the analysis of a data set, organisations need to be careful about what data is returned when requesting it from a data set. Due to the sheer amount of data that can be held within a single data set, it is not unlikely for it to contain information that is similar or relates to the same person, and for this reason it is important that the data returned relates to the query that was made. Controlling the data that is retrieved is one of the easiest ways to reduce processing load and allow for performance improvements.
In order to make any of these features useful, data analysts are required to look through these data sets in order to make sense of them. As previously mentioned, this is referred to as data mining, a process that can help organisations to successfully use captured and processed data as a means of predicting future trends among consumers. There are a number of different techniques that can be used to comb through all of this data, usually involving mathematical methods such as cluster analysis. A cluster analysis involves dividing data into groups, or clusters, based on information that relates them; this can be anything from a shared interest in a product to a person's date of birth. Alternatively, anomaly detection works to identify data that falls outside of the ordinary type of data found in the data set. This type of data mining is commonly found in areas such as fraud detection, so that transactions falling outside the normal range of a person's spending can be flagged and investigated. Data visualisation is a technique used to display the information gathered in a form that is readable and digestible for data analysts, most commonly graphs or charts that make it easier to spot trends or patterns in the data.
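The sketch below shows, in rough terms, how the two techniques described above could be applied with scikit-learn: k-means clustering to group customers with similar spending behaviour, and an isolation forest to flag spending that falls outside the usual pattern. The customer data is randomly generated for the example and does not represent any real data set.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Invented data: weekly spend and number of transactions for 200 customers,
# plus a handful of unusually large spenders to act as outliers.
normal = rng.normal(loc=[60, 12], scale=[15, 3], size=(200, 2))
outliers = rng.normal(loc=[400, 3], scale=[50, 1], size=(5, 2))
spending = np.vstack([normal, outliers])

# Cluster analysis: group customers with similar spending behaviour.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(spending)

# Anomaly detection: records labelled -1 fall outside the usual pattern.
flags = IsolationForest(contamination=0.03, random_state=0).fit_predict(spending)

print("cluster sizes:", np.bincount(clusters))
print("flagged as anomalous:", np.where(flags == -1)[0])
```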
For the most part, data mining is used as a method for retailers and service organisations to identify consumer preferences and use this information to make informed decisions regarding various factors in their business model. Despite this, data mining also has a number of other applications that have become just as useful in a range of industries; as previously mentioned, the banking industry has a strong use for big data in discovering trends within transactions or deciding whether or not to lend money to customers based on their previous data. Scientific researchers also make use of large data sets to analyse experimental results.
Negative aspects of internet use
Since its inception, the internet has quickly grown into the worldwide hub of information and content that we know today; however, it would be naïve to believe that the internet does not have its negative aspects. Along with all the good that has been done through the use of the internet, in order to get the full picture we must also look at some of the bad things it is used for.
A lot of the danger and negative effects of the internet come from the fact that access is not restricted: given suitable equipment and access to a suitable network, almost anyone is able to get online. The introduction of Web 2.0 has also meant that people are now able to interact with each other over the internet, which has led to a number of unwanted results. Cyberbullying and trolling are two forms of harassment that take advantage of the anonymity that comes with the internet; through the use of social media and other social platforms, people are able to post and publish content that can be very harmful to others. Whilst the definitions of cyberbullying and trolling differ, it often comes down to a matter of opinion as to which is being applied in certain cases, and the general infancy of the internet, as well as the presence of social media, has meant that new rules and regulations are continually being created to combat such behaviour online.
One form of cyberbullying that has seen an upsurge in recent times is revenge porn, the act of posting revealing or sexually explicit images or videos of a person on the internet, typically by a former partner, without the consent of the subject and in order to cause them distress or embarrassment. It is because of such practices that laws are beginning to be put in place to combat these acts; however, this can be difficult when the internet is not controlled by a single organisation or country. Another issue that the internet has faced for many years is the availability and trading of illegal material over both private and public networks. From copyrighted content to the sale of weapons and drugs, the internet has become a prime place to find items and services that, for lack of a better word, are illegal; access to such content is often gained through tools that grant access to the dark web. The dark web is the World Wide Web content that exists on darknets: overlay networks that use the internet but need specific software, configurations or authorisation to access. Such technologies are often used by criminals to communicate under the radar without arousing suspicion, and the dark web is a key tool for terrorist groups not only to communicate with one another but also to recruit new people to support their cause.
The
impact that implementing a new computer system can have on an organisation.
As consumers, we would often prefer organisations to implement emerging technologies as they become available, allowing us to have access to the most up-to-date features and functions on our various devices, and there is little thought about what risks the company could incur as a result. When implementing a new computer system, one of the most important factors to consider, and often the one that organisations will evaluate first, is how secure the system in question is. The level of security a system has can be the defining factor in whether an organisation decides to implement a new system or not. As society becomes more reliant on computer systems on a daily basis, cybercrime has become a much more relevant threat that organisations should be aware of.
The security risks surrounding the implementation of a new computer system can vary depending on what system is being put into place and for which organisation; for this article we will consider two different organisations, one being a bank and the other a supermarket. At first glance these organisations appear to be very different and are likely to use a number of different computer systems, but the risks that they face can be very similar. Both organisations hold personal and sensitive customer data such as names, addresses and financial information, to name a few. Data such as this will often be the priority of the organisation, and so it is unlikely that a computer system that would compromise the security of this information would be implemented intentionally. We are currently living in a time when cybercrime is becoming more of an issue as society continues to migrate so much of our information and personal data onto computer systems, both personally and professionally. In 2017 one of the world's largest credit bureaus, Equifax, was breached by cybercriminals who managed to steal the personal data of over 140 million people; this was considered one of the worst data breaches of all time, largely because of the amount of sensitive data that was exposed. This is not to say that organisations should shy away from the implementation of new systems, quite the opposite; it is just an example of the scale to which security breaches can escalate. New systems are often targeted by cybercriminals as they are more likely to have backdoors and vulnerabilities that have not yet been discovered or patched, and this risk is only amplified if the organisation that uses the system holds sensitive data, as a financial institution would.
Now that some of the security risks that could be associated with the implementation of a new system have been established, it is also important to understand and evaluate the effects that the company could experience as a whole, especially on the people who work within it. In order to do so, it is important to first establish some of the key positions that can be held within most organisations. People within organisations will often fall into one of three categories: staff, management or owner, and whilst there are a number of subcategories that job roles can fall under, for the purpose of this article we will look at them in their simplest form. Consumers will often not put any thought into how much new systems can affect certain job roles, but there are knock-on effects that can be both negative and positive for people in different positions. Looking at staff members first: this category is the broadest and will often include the majority of people working for the organisation. In the example of a supermarket, staff will include the majority of people who work in any of their stores, warehouses or offices, and will include staff in management departments as well. Job roles that are closer to the lower end of the organisation's chain of command are more likely to see the immediate difference within an organisation that has recently implemented a significant new system. As these employees will often have a closer relationship with customers, often interacting with them directly, many issues surrounding the implementation of a new system will start with these staff members and work their way up the chain of command where relevant.
An example of a fairly new system that has been introduced and had a large effect both on individual organisations and on the shopping industry itself is the self-checkout service. The option of self-checkout in a number of retail and grocery stores has been available for a while now and shows no signs of slowing down; in 2016 there were an estimated 240,000 terminals worldwide, a number predicted to increase to 468,000 by 2021. From the perspective of people working in stores, there is no monetary gain from the implementation of these services, but it will more than likely alter their actual job role. Take the supermarket Tesco, for example: from personal experience I know that at their 24-hour store in Bristol there are no manned tills after 1am; instead, staff are assigned alternative jobs such as stocking shelves. The self-checkout service, however, remains open for late-night shoppers, with one member of staff on hand to oversee and assist customers. In previous years cashiers were an essential part of practically any physical store, and although it was classed as a low-skilled job, in many cases it was still a job that needed to be filled.
Works Cited
Apple. (2018). iMac Specifications. Retrieved November 22, 2018, from https://www.apple.com/uk/imac/specs/
Bowsher, E. (2018, January 11). Online retail sales continue to soar. Retrieved December 19, 2018, from Financial Times: https://www.ft.com/content/a8f5c780-f46d-11e7-a4c9-bbdefa4f210b
Elmblad, S. (2018). The Difference Between Software Updates and Upgrades. Retrieved December 20, 2018, from https://www.thebalance.com/what-is-a-software-update-vs-software-upgrade-1294256
Everyday Mac. (2018). Apple Macintosh Original (128k) Specs. Retrieved November 22, 2018, from Every Mac: https://everymac.com/systems/apple/mac_classic/specs/mac_128k.html
Larson, S. (2017). The hacks that left us exposed in 2017. Retrieved December 12, 2018, from https://money.cnn.com/2017/12/18/technology/biggest-cyberattacks-of-the-year/index.html
Lufkin, B. (2017). Should Cashiers Be Humans or Machines? Retrieved December 14, 2018, from http://www.bbc.com/future/story/20170512-should-cashiers-be-humans-or-machines
Mortimer, G., & Dootson, P. (2018). The economics of self-service checkouts. Retrieved December 14, 2018, from https://theconversation.com/the-economics-of-self-service-checkouts-78593
Unit 9: Impact of Computing. (2018).