Sunday, March 26, 2017

CST 373 Week 4

Scrapbook 4 - Ethical Dilemmas Surrounding Self-Driving Car Development

Uber suspends self-driving car program after Arizona crash by Gina Cherelus of Reuters (PDF archive)

Summary

Uber has been testing self-driving cars in Arizona through a pilot program it launched on February 21st, 2017. The program allowed users to hail autonomous cars through the Uber ride-sharing platform, with two operators in the front seats who would step in whenever the car was unable to handle a situation. The program saw its first accident on March 24th, 2017, when a human-operated vehicle making a turn failed to yield to the Uber self-driving car. The accident and the investigation into it caused Uber to suspend the pilot program until further notice.

Reason Chosen

As interest in self-driving cars has grown and companies race to produce their own autonomous vehicles, real-world tests have become concerning and, at times, fatal. This is an interesting subject because we are observing how technical innovations may change our regular routine of driving our own cars. These developments will also have a large impact on the workforce: if and when self-driving cars begin to populate the roads, fewer employees will be needed to drive delivery trucks and more will be needed in technology.

The arguments in favor of self-driving car development mirror the arguments against it: both positions center on safety on the road. Those in favor believe autonomous cars will mean fewer accidents, while those opposed believe accidents will increase. For these reasons, watching the development of these cars and the ethical battles around them is particularly interesting and sometimes concerning.

Ethical Implications and Personal and Social Values at Stake

The ethical implications of Uber running this pilot program primarily hinge on whether they're doing it in the most responsible way possible and ensuring the safety of their customers. In late 2016, Uber decided not to obtain the California permits that would have designated their cars as test vehicles, which ultimately led to the cars' California registrations being revoked. Following this, Uber began its pilot program in Arizona. Many stories have been released recently regarding ethical issues within the company. Uber has developed a reputation for implementing services without asking permission and for intentionally deceiving officials and the public in an effort to expand its business.

By forcing themselves into certain businesses and practices, with what appears to be little concern for ethical responsibility or the safety of others, Uber is damaging its brand. Halting the service in Arizona is a wise decision for now, but when will it resume? Will Uber make improvements and add precautions to ensure this doesn't happen again? There were employees in the car who were supposed to take control when the car encountered a difficult situation. Is it difficult to switch to human operation, and if so, are those employees prepared to take over at any given moment?

Even if improvements are made, it's unlikely Uber can guarantee something similar won't happen again; more problems tend to arise as development and innovation progress. When a car faces two obstacles, the software has to decide which option is best. What happens when there are many more factors? And if the cars aren't ready for real-world situations and need more development time, would Uber admit that? It all raises the question of whether money or safety is more important to large companies like Uber.

Source Credibility

Reuters is a well-known news source, first established in 1851 and headquartered in London, England. Its publication is global, with content in 12 supported languages. Reuters also maintains its own handbook, designed to help its journalists produce fair and reliable content.

Gina Cherelus is a reporter based in New York who has worked for Reuters for the past year as a U.S. General News Reporter. She holds a degree in Journalism and Graphic Communication from Florida A&M University.

Sunday, March 19, 2017

CST 373 Week 3

Scrapbook 3 - Should Google Be Held Responsible for Protecting User Privacy?

Judge OKs warrant to reveal who searched a crime victim’s name on Google by David Kravets of Ars Technica (PDF archive)

Summary

A victim of identity theft in Edina, Minnesota had nearly $30,000 stolen from his bank account by someone using his identity. The thief forged the victim's passport using a photo that appeared in Google search results for the victim's name. The Edina Police Department initially sent Google a subpoena to obtain user information about the searches performed on the victim's name. Google rejected the subpoena, so the Edina Police Department requested a warrant from the courts to access the user information, and the courts granted it. Google has indicated that it is fighting the warrant.

Reason Chosen

This topic is particularly relevant to the discussions we're having in class. The warrant approved by the courts would provide the government with Google's proprietary user data, including information about the users who performed the searches. This week in class we're discussing anonymity online and whether governments should respect that privacy or require users to be tied to their government identities.

Ethical Implications and Personal and Social Values at Stake

The ethical implication of this case is that the government (or the Edina Police) could breach the privacy of those using the Google search engine. It also asks us to consider whether Google, a large U.S. company, should be responsible for protecting the privacy of its users. To protect against harm like this, a user could use a public, unprotected computer, such as one at a library, but should that be necessary? If this were to become normal, would we see a trend requiring users to input their government credentials to access search engines and other websites?

Many people use these search engines with the expectation that their search information will not be released to outside parties like a government. If releases like this became routine, we would have to assume that any search we perform could be held against us. Research for a paper may lead a student to search for something completely morbid with entirely innocent intentions, and that could result in unthinkable consequences. Cases where completely innocent people become murder suspects due to their search queries, or similar (if less extreme) situations, could become more frequent.

Source Credibility

Ars Technica is a publication geared toward those interested in technology. It was started in the late 1990s and has become a trusted source for technology and related policy news. Ars Technica was acquired by Advance, the parent company of Condé Nast, in 2008 and has since expanded to the UK.

David Kravets is a Senior Editor at Ars Technica, with previous experience as a Senior Staff Writer for Wired magazine, a Press Secretary for the California Department of Justice, and a Legal Affairs Writer for the Associated Press.

Tuesday, March 14, 2017

CST 373 Week 2

Scrapbook 2 - Cloudflare Bug Exposes Unintended Information

Summary

Many well-known websites, like Fitbit, Uber, and OkCupid, were relying on Cloudflare for their website security, including SSL. Cloudflare had a major vulnerability that caused requested endpoints to return additional data belonging to other websites in the response. Cloudflare acts as a middleman: when a request is made to a website behind Cloudflare, both the request and the response pass through Cloudflare's servers. The bug was triggered by requests that returned HTML, and the issue was in Cloudflare's parser. If a response contained mismatched HTML tags, the parser would read past the end of the response and return adjacent data from Cloudflare's memory, which could contain data from any other request passing through the service. While the leaked results varied, some of them were cached by search engines like Google and Bing. Cloudflare worked quickly to resolve the bug, but the leaked data remained cached for some time in search engines and other services that scrape website content. This was a very serious issue that may have impacted a large number of users.
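The failure mode can be illustrated with a toy parser. This is a hypothetical simplification, not Cloudflare's actual code (the function and variable names here are all illustrative): several requests' data sit next to each other in one shared buffer, and a scan that is bounded by the whole buffer instead of the request's own boundary leaks a neighbor's bytes whenever a tag is never closed.

```python
def respond(memory: bytes, start: int, end: int) -> bytes:
    """Copy one request's HTML out of a shared buffer.

    BUG: if the HTML ends inside an unclosed tag, the copy continues
    past `end` until a '>' appears (or memory runs out), leaking
    whatever the neighboring request left in the buffer.
    """
    out = bytearray()
    in_tag = False
    i = start
    while i < end:                      # copy the request's own bytes
        c = memory[i]
        out.append(c)
        if c == ord("<"):
            in_tag = True
        elif c == ord(">"):
            in_tag = False
        i += 1
    while in_tag and i < len(memory):   # BUG: overreads past `end`
        c = memory[i]
        out.append(c)
        if c == ord(">"):
            in_tag = False
        i += 1
    return bytes(out)

# Request A's HTML (with an unterminated <b> tag) sits right before
# request B's data in the shared buffer.
html_a = b"<p>hello</p><b"
secret_b = b"session=SECRET-TOKEN"
memory = html_a + secret_b

leaked = respond(memory, 0, len(html_a))
# `leaked` now contains request B's secret alongside A's HTML.
```

With well-formed HTML (every tag closed), the second loop never runs and the response stays within its bounds; a single missing boundary check is all it takes to turn a parsing quirk into a cross-request data leak.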

Reason Chosen

The Cloudflare "Cloudbleed" vulnerability made big news recently and highlighted the risks of relying on a third-party service for a website's security. The scope of the impact was also large, and it was unclear exactly who would be affected.

At my work in particular, we had clients using this service, and it sent some of my coworkers into a bit of a frenzy. Not all of our clients use Cloudflare, so it didn't impact many of us, but it was extremely relevant and widely discussed. It was also a good reminder to really consider what third-party services are being used for and whether using them is truly in the best interest of a website's users.

Ethical Implications and Personal and Social Values at Stake

This situation highlights the kinds of problems that can occur outside the scope of a single code base when relying on third-party providers to handle security for your website. For a company that needs to manage SSL certificates, passing that responsibility to another party becomes an ethical issue when user data must be protected. Users trust the websites they use to do this, and to do it well. It's troubling that so many websites were relying on this service and that such a small issue could cause such a large problem for individual people.

This GitHub Gist has a list of websites that were using Cloudflare and it was recommended that users change their passwords for all of them.

Source Credibility

Wired is a well-known, technology-focused magazine based in San Francisco, California, that has been active since 1993. It provides detailed articles on relevant issues in technology.

Lily Hay Newman is a Security Staff Writer for Wired and has previously worked at other notable magazines and news organizations.

Tuesday, March 7, 2017

CST 373 Week 1

Scrapbook 1 - Are They Listening?

Summary

A suspected homicide took place in a home equipped with smart devices, among them an Amazon Echo. Authorities seized the device from the home and served Amazon with a warrant to obtain any recordings from it, citing their expectation that Amazon hosts recordings from the device that may assist in the case. Amazon claims that it only keeps recordings of the commands the device hears, and that these recordings begin with the specified trigger word, "Alexa". Amazon also claims that the user can delete the recordings through the smartphone application and that, while the device is always listening, it doesn't record any additional information.
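Amazon's described behavior amounts to a simple gate: audio is examined locally, and only what follows the trigger word is captured. The sketch below is a hypothetical simplification of that claim, not Amazon's actual implementation (the string chunks, the `TRIGGER` constant, and the `process_stream` function are all illustrative).

```python
TRIGGER = "alexa"  # illustrative trigger word

def process_stream(chunks):
    """Return only the chunks from the trigger word onward.

    Chunks heard before the trigger are examined locally and then
    discarded on the spot, never recorded or transmitted.
    """
    recording = False
    captured = []
    for chunk in chunks:
        if not recording:
            if TRIGGER in chunk.lower():
                recording = True        # trigger heard: start capturing
                captured.append(chunk)
            # else: the chunk is heard, but immediately dropped
        else:
            captured.append(chunk)      # part of the command; kept
    return captured

command = process_stream(["private chatter", "Alexa,", "dim the lights"])
```

A real device would also need to decide when the command ends and stop capturing; the point of the sketch is simply that, under this claim, everything before the trigger word never leaves the device, which is exactly the part we cannot verify from the outside.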

Reason Chosen

This situation hits close to home, as I have Amazon Echo products in my own house. I keep one in my living room and another in the bedroom; they're used to adjust the lights (on/off/dim) in each room. This article came out shortly after I had purchased the first product, and it was a little worrisome. I was not worried because I was planning a murder, but because of the other implications it could have. Some of us don't see our daily activities as something to guard as private, and others do. I live with my boyfriend, so owning one wasn't just a decision to make for myself, but for him too. It's worth spending an additional moment to consider who else could be impacted by these purchases and whether they would be okay with it.

Ethical Implications and Personal and Social Values at Stake

While this article was primarily focused on the police attempting to use a warrant to obtain recordings from the Amazon Echo device, I'd like to place more attention on Amazon itself and whether its claims about how it stores the recordings are true.

Amazon claims that it only stores the commands initiated with the trigger word, "Alexa", and that the owner of the product can delete recordings through the Amazon Echo application. There are a few moving pieces in this claim. First, the software for the product is closed-source, and we don't know how Amazon is actually handling the data (sound is processed in the cloud); we are relying completely on Amazon's claims. Because we can't verify how our data is being handled, we can't guarantee that the recordings are actually deleted when the user requests it, or that Amazon is not storing additional information.

Many people purchasing these products are unfamiliar with how "the cloud" works, or that it is used with these products at all. They might feel differently if they knew that everything they said was being transmitted to a server outside their home for processing, that is, for understanding what was said and generating a response. Once their data leaves their network, they no longer have control over it and can't guarantee its safety. They're trusting Amazon to handle their data respectfully and not do anything malicious with it.

What's interesting to consider is whether and how this should be handled. Is it okay to have so many products in our own homes, listening? Could or should this be regulated? How do we know whom we can trust? Perhaps there could be some sort of required warning? We don't have the answers, but this is definitely pushing us in a new direction.

Source Credibility

The Washington Post is a well-known news source primarily circulated in Washington, DC. The author is a legitimate full-time journalist with The Washington Post. It is notable, and noted within the article itself, that the owner of The Washington Post is also the chief executive of Amazon. However, this connection does not diminish the value of my analysis.