Monday, November 30, 2015

CST 205 Week 5

Week 5

This week has been pretty relaxed (maybe because it was Thanksgiving?). There wasn't nearly as much work as there has been in previous weeks. The readings were geared toward debugging. All of the assignments were team projects and were basically combined into one primary game. The game is a text-based, choose-your-own-adventure style game, like Zork. The idea is pretty simple, but we went through a few different ways to implement the project to make it as easy as possible to develop. As usual, meeting with the team has been pretty difficult... but we figured it out in the end. I still think we could use some guidance on pair programming. A team of 4 all typing at once in codeshare.io is a bit crazy, and I think everyone would benefit from learning how to pair program properly when remote (one "driver" and more dialogue). We have the code for our game on our team's GitHub Adventure Game Repository.

Assigned Readings

Silicon Valley's Race to Hack Happiness by Emma Seppälä, Ph.D (Huffington Post)

This article discussed how people and companies are beginning to create applications, or hold competitions to create applications, that help with mental disorders and make people happier. The author explains that depression has increased and more people are looking for ways to make themselves happier, like going to yoga, reading self-help books, and now looking for applications to boost their mood. These applications seem fine. Using technology for things that could benefit people seems like a good thing. However, there wasn't any data to back up whether these applications are helping or not. Because of this, I'm hesitant to say that this is a good route out of depression at this time.

Monday, November 23, 2015

CST 205 Week 4

Week 4

This week the majority of our assignments have been focused on sounds. Becoming familiar with sounds and how they're digitized has been pretty interesting. I think I still need to practice understanding the correlation between bit depth and the range of sample values it allows. Aside from the sound assignments, the rest of the assignments were focused on string manipulation and access in Python. String manipulation has been pretty straightforward since it's similar to string manipulation functions in other languages. However, slicing and splitting strings is a bit different in Python (with the string[:3], string[3:], and string[start:end] syntax; there's a quick example below). The pair programming labs were a bit better this time with the encouragement that they be done together, as a pair. Also, we had to wrap up our midterm project this week, which was creating filters on photos with certain themes. We had to do one that was CSUMB themed and one that was up to our own discretion. I chose a Buffy the Vampire Slayer theme. I'm pretty happy with that decision.
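For my own reference, here's a quick sketch of that slicing syntax (the example strings are just made up):
name = "Monterey"
print(name[:3])              # "Mon" -- everything before index 3
print(name[3:])              # "terey" -- everything from index 3 to the end
print(name[3:6])             # "ter" -- indices 3 up to (but not including) 6
print("CST 205".split(" "))  # ['CST', '205'] -- splitting on a space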

Assigned Readings

The Employer's Creed from Op-ed columnist, David Brooks (The New York Times)

Many companies hire people based on their (almost perfect) resume. This article asks that employers look for different things in potential employees, like obstacles they may have overcome. It suggests that companies should prefer candidates who are different from the rest and challenges them to break from the social norms of hiring practices. This sounds great, but realistically that's just not how most companies are hiring (aside from focusing on the cover letter). I do think that you have to have a characteristic or special thing you can bring to a team in order to stand out from the crowd and get a job.

Sunday, November 15, 2015

CST 205 Week 3

Week 3

This week we learned about git and worked in our teams to create cards for Thanksgiving. I use git and GitHub regularly at work, so that hasn't been challenging for me to pick up. The most difficult thing about this week was working in the group with little direction on how to work together as a group. Also, the assignment doesn't feel geared toward collaborative pairing. We were supposed to create a card for each member of our group, leading us more or less down the path of creating our own cards. If we had all been required to create one card together, I think it would have been a different story. It's also nearly impossible for all four of us to be at the computer at the same time. We also have a midterm coming up, but it seems like we will still be applying the same kinds of methods we've used up until this point, so it should be pretty straightforward.

Assigned Readings

Pair Programming (Wikipedia)

The Wikipedia page for pair programming currently describes pair programming as two people working at one computer. The two people switch off being the "driver" and the "navigator". The driver writes the code while the navigator suggests alternative ways to do operations and acts as an extra set of eyes for bugs. The page also details some of the benefits of pair programming and studies that have been done on it. I have pair programmed quite a bit in the past and love the idea of it. Unfortunately, this course hasn't yet defined any way to go about setting up meeting times and student environments (tools) to facilitate remote pair programming. Instead, we're being directed toward a collaborative git workflow. It's extremely frustrating (from what I'm seeing) for the students because the reading we're directed to provides guidance on pair programming in a situation where the students are in the same room.

Facial Recognition Systems Turn Your Face Into Your Credit Card, PIN, Password (The Huffington Post) by Betsy Isaacson

This article briefly describes how it's possible for computers to recognize and distinguish human faces. A startup company called Uniqul released an ad full of hypothetical situations showing consumers paying with their faces. There was no real transaction, though, just a button the user clicks to accept the charge; the company is hoping to develop the facial recognition technology to actually do this. I'm not really sure how this relates to this course, but I guess it is showing us what we could potentially build down the line.

Autism And Google Glass: Teen's Software Could Help Users Recognize Emotional Cues (The Huffington Post)

Sension is another facial recognition product. It uses faces to track and test the level of engagement an end user is experiencing and to provide a new way to play video games. The product ended up having an unintended application: recognizing the different emotions people on the other end of the camera are conveying (which turned out to be useful for people with autism). I'm also not sure how this relates to what we've been studying, but we will see.

How to Get a Job at Google (The New York Times) by Thomas L. Friedman

Google has nontraditional hiring practices. As a programmer, you will go further if you can learn quickly and handle challenges well. I know this is accurate because I don't have a degree and work as a programmer. There aren't too many people in the field without degrees, but I can attest that the ones who are work very hard. In relation to this course, I believe this article was better suited for the ProSeminar course and doesn't really apply to a multimedia design course.

This is the Internal Grading System Google Uses for its Employees -- And you Should Use it too (Business Insider) by Jay Yarow

This is another article that seems to be a better fit for the ProSeminar course, but it basically explains how Google employees grade themselves on their own objectives. At the start of a set period of time, they establish several (fewer than six) quantifiable goals. Each is graded on a scale of 0 to 1, where 0.6 - 0.7 is the ideal result. A full 1 would mean the goal was too easy, and a score closer to 0 would mean the goal was too difficult. Perhaps this was a subtle suggestion that we should try out Google's Objectives and Key Results (OKR) strategy for ourselves.
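Just to make the grading concrete, here's a tiny sketch with made-up numbers (not from the article):
def gradeObjective(keyResultScores):
  # Average the 0-to-1 scores for the key results under one objective.
  return sum(keyResultScores) / float(len(keyResultScores))

# Five hypothetical key results; the average of 0.62 lands in the ideal 0.6 - 0.7 band.
print(gradeObjective([0.7, 0.4, 0.9, 0.6, 0.5]))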



CST 205 Week 3 - Image Portfolio

Image Portfolio

Below are some examples of the functions I've written for my Multimedia Design & Programming course at CSUMB (in Python). 

Week 1 - Lab #3

Rose-colored glasses

The rose-colored glasses filter looks at each pixel in an image and increases the red by 25%, decreases the blue by 25%, and decreases the green by 25%. This gives the picture a nice, pink tint. Making the pictures pink and not red was a bit challenging.
def roseColoredGlasses(image):
  pixels = getPixels(image)
  for pixel in pixels:
    setRed(pixel, getRed(pixel) * 1.25)
    setBlue(pixel, getBlue(pixel) * 0.75)
    setGreen(pixel, getGreen(pixel) * 0.75)
  return image
original:
 result:

Negative

The negative filter looks at each pixel in the image and changes each RGB value to be the difference between the maximum possible value (always 255) and the actual value. This results in the exact opposite of the original image. This one felt pretty simple and straightforward to me.
def makeNegative(originalImage):
  pixels = getPixels(originalImage)
  for pixel in pixels:
    setRed(pixel, 255 - getRed(pixel))
    setBlue(pixel, 255 - getBlue(pixel))
    setGreen(pixel, 255 - getGreen(pixel))
  return originalImage
original:
result:

Better black and white

The better black and white filter looks at each of the RGB values in a pixel within an image. It weights red at 29.9%, blue at 11.4%, and green at 58.7%, sums the weighted values, and sets the result for each color channel in the pixel. I don't remember thinking this was difficult at the time, but looking at it again made me second-guess it (the weights do match the standard luminance formula, though).
def betterBnW(image):
  pixels = getPixels(image)
  for pixel in pixels:
    colorVal = getRed(pixel) * 0.299
    colorVal = colorVal + getBlue(pixel) * 0.114
    colorVal = colorVal + getGreen(pixel) * 0.587
    setRed(pixel, colorVal)
    setBlue(pixel, colorVal)
    setGreen(pixel, colorVal)
  return image
original: 
 result:

Week 2 - Lab #4

Bottom-to-top mirror

This image manipulation iterates through all of the pixels in the bottom half of the image and updates the mirrored pixel (in the top half) to match the current pixel, resulting in the bottom-to-top mirror image. The mirror manipulations all just required some consideration before implementing them.
def bottomToTopMirror(image):
  totalX = getWidth(image) - 1
  totalY = getHeight(image) - 1
  for x in range(0, totalX):
    for y in range(totalY/2, totalY):
      currentPixel = getPixel(image, x, y)
      topPixel = getPixel(image, x, totalY - y)
      color = getColor(currentPixel)
      setColor(topPixel, color)
  return image
original: 
 result:

Shrink

The shrink image manipulation creates a new canvas that is half the width and half the height of the original image. It then picks up every other pixel and copies it into the new canvas. The new canvas is returned, which makes this method appear to shrink the original image. The difficult step in this one was understanding what we're picking up and what we're copying to get the desired result.
def shrink(image):
  width = getWidth(image)
  height = getHeight(image)
  pic = makeEmptyPicture(width/2, height/2)
  for x in range (0, width-1, 2):
    for y in range (0, height-1, 2):
      color = getColor(getPixel(image, x, y))
      setColor(getPixel(pic, x/2, y/2), color)
  return pic
original:

 result:

Collage

The collage portion of the assignment took me a lot longer than all of our other assignments, mostly because I had to figure out the dimensions of the images after shrinking and/or rotating them, and then spend time planning where to place them, and in which order, since exceeding the page size results in an error. I ended up using a tool that works with Google Drive called draw.io to assist with this. Also, I'd like to note that I was importing all of the images in a particular order, but now I see that I should have just created them with the file path instead of selecting them every time. Some of the functions I used to make this collage method are listed above. For those that aren't, they're listed below the makeCollage method.
def makeCollage():
  collage =  makeEmptyPicture(1260, 900) # 5x7
  pic1 = makePicture(pickAFile())
  pic2 = makePicture(pickAFile())
  pic3 = makePicture(pickAFile())
  pic4 = makePicture(pickAFile())
  pic5 = makePicture(pickAFile())
  pic6 = makePicture(pickAFile())
  pic7 = makePicture(pickAFile())
  pic8 = makePicture(pickAFile())
  pic9 = makePicture(pickAFile())
  pic3Rotated = rotatePic(pic3)
  pic6Rotated = rotatePic(pic6)
  pic7Resized = shrink(shrink(pic7))
  pic8Resized = shrink(shrink(shrink(pic8)))
  pyCopy(pic6Rotated, collage, 672, 507)
  pyCopy(roseColoredGlasses(pic5), collage, 886, 0)
  pyCopy(makeNegative(pic9), collage, 0, 0)
  pyCopy(pic2, collage, 50, 204)
  pyCopy(moreRed(pic7Resized, 50), collage, 0, 578)
  pyCopy(quadrupleMirror(pic1), collage, 418, 480)
  pyCopy(noBlue(pic3Rotated), collage, 255, 0)
  pyCopy(betterBnW(pic8Resized), collage, 693, 90)
  pyCopy(leftToRightMirror(pic4), collage, 325, 290)
  return collage

def moreRed(image, amount):
  incPercent = 1 + amount * 0.01
  pixels = getPixels(image)
  for pixel in pixels:
    newRed = getRed(pixel) * incPercent
    # Cap the new red amount at the max value if it exceeds the highest value possible
    if newRed > 255:
      newRed = 255
    setRed(pixel, newRed)
  return image

def quadrupleMirror(image):
  totalX = getWidth(image) - 1
  totalY = getHeight(image) - 1
  for x in range(0, (totalX/2)):
    for y in range(0, (totalY/2)):
      currentPixel = getPixel(image, x, y)
      rightPixel = getPixel(image, totalX - x, y)
      bottomPixel = getPixel(image, x, totalY - y)
      diagonalPixel = getPixel(image, totalX - x, totalY - y)
      color = getColor(currentPixel)
      setColor(rightPixel, color)
      setColor(bottomPixel, color)
      setColor(diagonalPixel, color)
  return image

def noBlue(image):
  pixels = getPixels(image)
  for pixel in pixels:
    setBlue(pixel, 0)
  return image

def leftToRightMirror(image):
  totalX = getWidth(image) - 1
  totalY = getHeight(image) - 1
  for x in range(0, (totalX/2)):
    for y in range(0, totalY):
      currentPixel = getPixel(image, x, y)
      rightPixel = getPixel(image, totalX - x, y)
      color = getColor(currentPixel)
      setColor(rightPixel, color)
  return image
result:

Red-eye Reduction

This function looks at every pixel within a certain coordinate range (where the eyes are in the photo) and determines if it's close enough to pure red. If it is, the color of the pixel is changed to the desired color indicated by the color parameter (in this case, black).
def eyeCorrection(color):
  image = makePicture('/Users/brittanymazza/Desktop/redeye.jpg')
  for x in range (0, getWidth(image)-1):
    for y in range (0, getHeight(image)-1):
      currentPixel = getPixel(image, x, y)
      currentColor = getColor(currentPixel)
      if (x > 155 and x < 295) and (y > 160 and y < 215):
        if distance(currentColor, red) < 150:
          setColor(currentPixel, color)
  return image
original:
result:

Color Art-i-fy

I find this method embarrassing to put up, but it does what it should. Ideally the if/else statements would be extracted out into another method because they all do the same thing (see the sketch after the result image). The logic behind this was easy, but the outcome of my method is not visually appealing. What happens is that the red, blue, and green values for each pixel are snapped to a particular value based on the range they fall in: values from 0-63 are set to 31, 64-127 to 95, 128-191 to 159, and 192-255 to 223.
def artify(image):
  for x in range (0, getWidth(image)-1):
    for y in range (0, getHeight(image)-1):
      currentPixel = getPixel(image, x, y)
      redColor = getRed(currentPixel)
      greenColor = getGreen(currentPixel)
      blueColor = getBlue(currentPixel)
      if (redColor < 64):
        redColor = 31
      elif (redColor > 63 and redColor < 128):
        redColor = 95
      elif (redColor > 127 and redColor < 192):
        redColor = 159
      elif (redColor > 191 and redColor < 256):
        redColor = 223
      if (greenColor < 64):
        greenColor = 31
      elif (greenColor > 63 and greenColor < 128):
        greenColor = 95
      elif (greenColor > 127 and greenColor < 192):
        greenColor = 159
      elif (greenColor > 191 and greenColor < 256):
        greenColor = 223
      if (blueColor < 64):
        blueColor = 31
      elif (blueColor > 63 and blueColor < 128):
        blueColor = 95
      elif (blueColor > 127 and blueColor < 192):
        blueColor = 159
      elif (blueColor > 191 and blueColor < 256):
        blueColor = 223
      setRed(currentPixel, redColor)
      setGreen(currentPixel, greenColor)
      setBlue(currentPixel, blueColor)
  return image
original:
result:
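If I were to clean it up, I'd pull the repeated if/else blocks into one helper, something like this (an untested sketch; artifyValue and artify2 are just names I made up):
# Map a 0-255 color value to the midpoint of its 64-wide band.
def artifyValue(colorValue):
  if colorValue < 64:
    return 31
  elif colorValue < 128:
    return 95
  elif colorValue < 192:
    return 159
  else:
    return 223

def artify2(image):
  for pixel in getPixels(image):
    setRed(pixel, artifyValue(getRed(pixel)))
    setGreen(pixel, artifyValue(getGreen(pixel)))
    setBlue(pixel, artifyValue(getBlue(pixel)))
  return image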

Green screen

The green screen method I made, called chromaKey, looks at each pixel in the green screen image, looking for pixels with a particular amount of green. If there's enough green, it replaces the color of the pixel with the matching pixel's color from the background image. This method does not work when the background is smaller than the green screen image (there's a sketch of a possible fix after the result image).
def chromaKey(image, background):
  for x in range (0, getWidth(image)-1):
    for y in range (0, getHeight(image)-1):
      currentPixel = getPixel(image, x, y)
      currentColor = getColor(currentPixel)
      if distance(currentColor, green) < 150.0:
        bgColor = getColor(getPixel(background, x, y))
        setColor(currentPixel, bgColor)
  repaint(image) # This is not necessary
  return image
originals: 
 result:
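One way I could guard against that (a rough sketch I haven't tested, and the chromaKeySafe name is just made up) would be to only loop over the area both pictures share:
def chromaKeySafe(image, background):
  # Only iterate over the region both pictures cover, so getPixel never
  # goes out of bounds when the background is smaller than the image.
  maxX = min(getWidth(image), getWidth(background))
  maxY = min(getHeight(image), getHeight(background))
  for x in range(0, maxX):
    for y in range(0, maxY):
      currentPixel = getPixel(image, x, y)
      if distance(getColor(currentPixel), green) < 150.0:
        setColor(currentPixel, getColor(getPixel(background, x, y)))
  return image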

Week 3 - Lab #7

Home made Thanksgiving

Several functions were added for this card to do specific things, like adding the sunshine layer, the grass layer, and the turkey. The images were cut out based on color (white or black), similar to how we did the green screen in earlier labs. They're all called from the main generateCard4 method. The mediaPath variable was set to the directory where the images live (using the setMediaPath method), which is why the full path is not shown.
def generateCard4():
  card = getBlankCard()
  addSunshine(card)
  addGrass(card)
  applyTurkey1(card)
  addQuestionableHappyThanksgivingText(card)
  return card

def getBlankCard():
  # Create 5x7 card
  return makeEmptyPicture(945, 675)

# Add "Happy Thanksgiving?" text to card.
def addQuestionableHappyThanksgivingText(card):
  text = "Happy Thanksgiving?"
  textStyle = makeStyle(serif, bold, 13)
  startX = (getWidth(card)/20)*11
  startY = (getHeight(card)/20)*7
  textColor = makeColor(153, 0, 0)
  addTextWithStyle(card, startX, startY, text, textStyle, textColor)

# Apply the turkey holding the sign to a card.
def applyTurkey1(card):
  turkeyPic = makePicture("turkey1.jpg")
  # Apply to center of card.
  startX = (getWidth(card) - getWidth(turkeyPic))/2
  startY = (getHeight(card) - getHeight(turkeyPic))/2
  for x in range(0, getWidth(turkeyPic)-1):
    for y in range(0, getHeight(turkeyPic)-1):
      turkeyPixel = getPixel(turkeyPic, x, y)
      turkeyPixelColor = getColor(turkeyPixel)
      # Don't copy over white pixels to treat turkey background as if it
      # were transparent.
      if distance(turkeyPixelColor, white) > 0.75:
        cardPixel = getPixel(card, startX + x, startY + y)
        setColor(cardPixel, turkeyPixelColor)

# Add sunshine to card.
def addSunshine(card):
  sunshinePic = makePicture("sunshine.jpg")
  # Apply to top of card.
  for x in range(0, getWidth(sunshinePic)):
    for y in range(0, getHeight(sunshinePic)):
      pixel = getPixel(card, x, y)
      sunshineColor = getColor(getPixel(sunshinePic, x, y))
      setColor(pixel, sunshineColor)

# Add grass to card.
def addGrass(card):
  grassPic = makePicture("grass.png")
  # Apply to base of card.
  startY = getHeight(card) - getHeight(grassPic)
  for x in range(0, getWidth(grassPic)):
    for y in range(0, getHeight(grassPic)):
      grassColor = getColor(getPixel(grassPic, x, y))
      # Only color if grass image doesn't look black, which is how 
      # getColor interprets the transparency.
      if distance(grassColor, black) > 0.25:
        setColor(getPixel(card, x, startY + y), grassColor)

originals:
result:

Week 3 - Image Portfolio Assignment

Line drawing

The line drawing method looks at each pixel in an image, compares it to the pixel to its right as well as the pixel below it, and takes into consideration the luminance difference between the pixels. Depending on the result and the contrast variance variable passed in, it either sets the pixel to black or white. This creates a "line drawing" appearance. In the examples below I've used a contrast variance of 3. When we're at a pixel on the very right edge, we only consider the pixel below it. Likewise, when we're at a pixel on the very bottom edge, we only consider the pixel to its right. However, we can't compare the pixel to anything once we get to the very bottom-right corner, so I decided to just look at whether it's closer to black or white. I've split the method into 5 separate methods for better readability and reusability.
def lineDrawing(image, contrast):
  image = BnW(image)
  maxWidth = getWidth(image)-1
  maxHeight = getHeight(image)-1
  isBlack = False
  for x in range (0, maxWidth + 1):
    for y in range (0, maxHeight + 1):
      pixel = getPixel(image, x, y)
      isBlack = shouldBeBlack(image, pixel, x, y, maxWidth, maxHeight, contrast)
      setBlackOrWhite(pixel, isBlack)
  return image

# Return true/false indicating if the pixel should be black
def shouldBeBlack(image, pixel, x, y, maxWidth, maxHeight, contrast):
  if x == maxWidth and y == maxHeight:
    # Nothing to compare against at the bottom-right corner, so just check
    # whether the pixel is closer to black or white.
    return getLuminance(pixel) < (255 / 2)
  elif x == maxWidth:
    return isSignificantDiff(pixel, getPixel(image, x, y+1), contrast)
  elif y == maxHeight:
    return isSignificantDiff(pixel, getPixel(image, x+1, y), contrast)
  else:
    isSigRight = isSignificantDiff(pixel, getPixel(image, x+1, y), contrast)
    isSigDown = isSignificantDiff(pixel, getPixel(image, x, y+1), contrast)
    return isSigRight and isSigDown
  
# Return true/false indicating if there's a significant difference
def isSignificantDiff(pixel, comparePixel, contrast):
  diff = abs(getLuminance(pixel) - getLuminance(comparePixel))
  return diff > contrast

# Return the luminance value (0-255) of a pixel
def getLuminance(pixel):
  return (getRed(pixel) + getBlue(pixel) + getGreen(pixel)) / 3
  
# Set the pixel to black or white depending on the isBlack value passed in
def setBlackOrWhite(pixel, isBlack):
  if isBlack:
    setColor(pixel, black)
  else:
    setColor(pixel, white)
original: 
 result:


Monday, November 9, 2015

CST 205 Week 2

Week 2

This week we learned about a lot of things, including if statements, conditional operators, and adding items to images using methods written in JES. This week has felt a bit more confusing because the language in the assignments doesn't appear to be geared toward the online class. The descriptions say to pair up and call the teacher over for help, but it doesn't work too well like that online. I think it's caused a bit of confusion. Also, the version of JES I'm using is very outdated (it bundles legacy versions of Java and Python). I get the feeling that there are students struggling with the environment itself or trying to use features that don't exist in those language versions (like trying to call append on strings in Python). Hopefully next week I'll have time to try out the latest version. Aside from the technical issues, we have learned a lot this week. I'm surprised that we're already doing nested for loops and if/else statements in the second week of an introductory programming class, but I'm glad we're moving quickly so we can get to the interesting stuff sooner. This week we took an image that had a green screen background and replaced it with another background. Replacing a green screen background in an image was fun, though a bit frustrating because you had to make sure your background was larger than the image with the green screen.

Assigned Reading

Angela Lee Duckworth: The Key to Success? Grit (TED Talk)

Angela explains that the largest key to perseverance and success (in this case regarding education) is having grit, which she defines as being strong, having a good work ethic, and having long-term goals. Students without grit tend to drop out more or stop short of their goals. She briefly mentions the idea of having a growth mindset and how that has been the most promising factor she's found for helping students build grit. It caught my attention because Dev Bootcamp covered a lot of the ideas behind having a growth mindset. It would have been nice if she had explained it more because I believe it would have been beneficial for some people to hear about.

Twilight of the Lecture by Craig Lambert (Harvard Magazine)

I whole-heartedly agree with this article. Personally, I learn so much more when I'm able to "play" with an idea. Having someone to talk to and engage in a discussion with significantly increases the amount I'll take in and helps me get my ideas settled. I'm currently enrolled in another course (Data Structures) that has a three-hour lecture, and the professor lectures the entire time. Unfortunately, I don't think any students are learning much from it. I can see how beneficial it would be to flip the learning style of that course. However, I can also see how students would be upset with the change. Students are so used to the standard process of sitting silently, listening, and taking notes that they have a difficult time seeing or understanding how a different style (where you listen to the lecture before class and discuss with peers in class) would be beneficial.

Learning to Think Outside the Box by Laura Pappano (New York Times)

This article is all about the importance of being creative. Being creative is beneficial in computer science because we're always trying to come up with the best solution to a problem. When you can look at several different ways (some much better than others) to solve a problem, you can usually find a more ideal one in the bunch. Creativity is very helpful when it comes to problem solving in this way, too.