Head in the Cloud, Feet on the Ground

The Cloud.  I can’t decide if the next generation will know infinitely more about how it works than the previous, or if the functionality will be so abstracted away from the user that many live their lives thinking of it as a literal cloud.

As Quentin Hardy notes, use of the cloud has become ubiquitous, and many corporations and individuals depend on cloud computing for much of their daily operations.  It is such a massive portion of the economy that it is dreadfully important to ensure it remains an accessible option.

Personally, I’m currently designing a home security system that connects all of its devices with an internet of things (IoT) protocol called MQTT, with the MQTT broker hosted in the cloud.  The user of the home security system can access all of the cameras in their home, unlock any doors that are connected, and eventually would be able to control anything else in their home that they wanted to connect to their phone.  This example illustrates what I think are the two most important considerations when discussing the cloud: convenience and security.
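Much of the plumbing in a system like this comes down to MQTT’s hierarchical topics: each device publishes to a path such as home/front-door/lock, and subscribers use wildcard filters to receive the messages they care about.  Here is a minimal sketch of the matching rules (the topic names are hypothetical examples of mine; the + and # semantics follow the MQTT 3.1.1 specification):

```python
def topic_matches(topic_filter: str, topic: str) -> bool:
    """Return True if an MQTT topic filter matches a concrete topic.

    '+' matches exactly one topic level; '#' (allowed only as the last
    level) matches the remainder of the topic, including the parent
    level itself.
    """
    f_levels = topic_filter.split("/")
    t_levels = topic.split("/")
    for i, level in enumerate(f_levels):
        if level == "#":                     # multi-level wildcard
            return True
        if i >= len(t_levels):               # filter longer than topic
            return False
        if level != "+" and level != t_levels[i]:
            return False
    return len(f_levels) == len(t_levels)    # no leftover topic levels

# A phone app subscribed to home/+/camera sees every camera feed,
# but not the door lock's messages:
assert topic_matches("home/+/camera", "home/kitchen/camera")
assert not topic_matches("home/+/camera", "home/front-door/lock")
```

In the real system, the broker performs this matching when deciding which subscribers receive each published message; the client code just picks good topic names.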

Convenience.  Whether or not you think that excess convenience will eventually lead to us all ending up like the people in WALL-E, it is undeniable that most of us really enjoy the benefits that cloud computing has given us.  Remember when getting a new phone used to mean posting on Facebook asking everyone to please send you a text so you could get their number?  You probably forgot that used to happen, because your contacts are likely stored in the cloud.  If you have ever streamed a live event, you were probably also enjoying the benefits granted by the cloud.  In my personal example, the user can control anything in their house they could reasonably want to control from their phone, no matter where in the world they are (so long as they have access to the internet).  I have saved all of my most important documents to Dropbox, so I can access them from any device, anywhere, at any time.  This convenience is phenomenal, but it does come with a price.

Security.  As this article on the ACM website explains, data confidentiality is extremely important and requires a lot of trust from the user.  There are multiple layers of security that must be ensured for successful cloud computing, but at the end of the day the user has to trust, on faith alone, that their data is safe from any threats internal to the cloud service provider.  The authors of the article explain that every layer is quite capable of being sufficiently secured, but this may or may not convince the end user.  Perhaps the convenience that accompanies use of the product, coupled with a small enough measure of doubt, is what prompts the user to adopt it.  However, with the “fail fast” model in Silicon Valley and the rise of the “minimum viable product,” it is easy for many companies to treat airtight security as one of the last add-ons, especially if they are a younger company with little expertise.

To quote Uncle Ben: “With great power comes great responsibility.”  Cloud service providers are a perfect example: they have great power to attract users because of the wonderful benefits they provide, but they must exercise extreme responsibility to ensure that use of the cloud remains ubiquitous and is not endangered by poor security measures.

 

Government Backdoors and Data Privacy

Data privacy and security are becoming overwhelmingly important problems in this digital age.  Target, a major retailer, had credit card data stolen from 40 million accounts in 2013.  Now, in 2016, the government is asking Apple to open a backdoor to its phones so the FBI can access a terrorist’s cell phone.  So without a doubt, it’s an important issue.

Is it just to ask Apple to make their platform intrinsically less secure?  Would that open up Apple users to a credit card hack on the order of Target’s crisis?

Some argue that the firmware the FBI is asking for would be used only by the US government, which is a fair point.  However, in my opinion the mere existence of such a tool would make the software platform less secure.

James Comey, the Director of the FBI, had a lot to say on the topic.  He agrees that encryption is important, stating: “The development and robust adoption of strong encryption is a key tool to secure commerce and trade, safeguard private information, promote free expression and association, and strengthen cyber security.”  He also says that he thinks it’s important to safeguard privacy for the American people, regardless of the communication medium.  However, he ultimately thinks that there should be a third party judge who should determine when those rights to privacy have been given up by the party in question.  Of course, the shooting in San Bernardino qualifies.

I have no issue with the FBI asking for access to the shooter’s phone.  What I do have an issue with is forcing Apple to develop software that invalidates the security of its users and its products.

Three writers from The Guardian also covered the story.  They quote Julian Sanchez, a surveillance law expert at the libertarian-leaning Cato Institute in Washington, who stated: “The law operates on precedent, so the fundamental question here isn’t whether the FBI gets access to this particular phone, it’s whether a catch-all law from 1789 can be used to effectively conscript technology companies into producing hacking tools and spyware for the government.”

I think it is important to clarify that this is not a fight over just one event.  This is a fight over setting a precedent, as Sanchez noted, and a fight over creating tools with extreme power for misuse.

One of the most troubling circumstances surrounding this event is that little legislation was written with these kinds of technologies in mind.  We are racing daily to create better encryption methods, and we are racing against very intelligent people working to break the latest release of that encryption.  The legal system can’t keep up, and the people with the power to legislate are hardly educated on the technologies to the extent that they could make a properly informed decision.

So if we can’t answer very specific questions regarding encryption methodologies and requested backdoors, can we turn to previous laws for more general insight, so that we might infer the spirit of the law?  Perhaps we should turn to the Supreme Court.

As the late Justice Antonin Scalia once said, “There is nothing new in the realization that the Constitution sometimes insulates the criminality of a few in order to protect the privacy of us all.”

Diversity in Tech Companies

Diversity is an interesting and important topic, and I’ll be writing about it as it presents itself in tech companies in the United States.  We live in a world that hails from a time when cisgender, heterosexual, educated, wealthy, white males were explicitly and implicitly the dominant segment of the population.  Countless structures were put in place that assumed these individuals to be the default citizen.  These structures still exist today, both in our legislation and in our minds (not-so-fun fact that I discovered while writing this: my web browser doesn’t even acknowledge that cisgender is a word).

In Silicon Valley, diversity is professed to be at the forefront of recruiting desires, and yet the reality is that there is very little in terms of diversity in tech.

Tech companies are spending hundreds of millions of dollars towards diversity efforts, but it’s simply not enough.  Google released diversity data two years ago, and the results are shocking: in tech globally, 17% of Google employees are women, and in tech in the U.S., 1% of Google employees are Black.  I applaud Google for the movement towards transparency; this is not easy data to report, especially when the numbers are so grim, but it is important for people to see the problem so that we can all be focused on solving it.

So once we’ve admitted that there’s a problem, we need to identify the causes.  To my mind, there are two primary causes, and they are both intangible.

First, and this perhaps pertains more to ethnic diversity, there is a culture in many tech companies that caters to white men.  Vauhini Vara reports the experiences of many people from diverse backgrounds entering tech companies in Silicon Valley, and they all seemed to be saying the same thing: there is an overwhelming culture of privilege (e.g. “I’ve been coding since I was seven”) that makes tech companies difficult to get into, and taxing to stay in.  Erica Joy writes about feeling an overwhelming pressure to conform to the whitewashed masses at a tech company, and says she didn’t even realize how much happier she could be in a more diverse environment until she switched companies.  There have also been many stories where tech employees from diverse backgrounds have been presumed to be custodial or administrative workers, simply because they didn’t fit the stereotype.  So there are considerable barriers to entering and remaining in a tech company if you do not fit the stereotypical description of who has worked in these companies in the past (read: white males).

Second, and this perhaps pertains more to gender diversity, there is a branding issue that is introduced well before candidates apply for an interview.  Eileen Pollack posits that what really keeps women out of tech is that we describe and think about science, computer science, and programming in images and terms that are classically associated with males.  For example, we tend to think that computer nerds like Star Wars.  There is some support for this, as women were joining the computer science field at the same rate as they were joining medicine, law, and physics… until the personal computer came into play – which is presumably when we started to fabricate these stereotypes of what it meant to be a computer geek.

So in order to fix these problems, we need to un-whitewash and de-gender technology: both as an intellectual pursuit and as a corporate endeavor.  Harvey Mudd found success in increasing the percentage of women among its computer science students by re-branding the degree to be more female-friendly.  Code2040 is providing tech fellowships exclusively for Hispanic and Black applicants.  These are both viable, sustainable solutions to a problem that has been plaguing the technology sector for some time.

The structures that provide access to the American Dream were built by white males, for white males.  I’m glad that we are starting to tackle this tremendously difficult problem as a nation, but we’ve still got a long way to go.

 

Startups and Work-Life Balance

Startups have an exceptionally strong appeal, a unique ethos.  However, joining a startup is not for everyone.

Some see starting or joining a startup as the best way to attain fabulous wealth, have access to really important problems, and learn a ton at a really quick pace.  However, some warn that startups are risky, naming lack of job security and lack of work-life balance as undesirable qualities intrinsic to the startup employee’s life.

Perks are often part of the appeal of startup culture.  However, the more skeptical have implied that perks are a means of keeping employees at work.  This makes sense, as making work a great place to be increases the odds that someone will stay there longer.  However, I feel that these perks do not cause people to spend a lot of time at work; rather, they allow people to be happier while they are spending a lot of time at work.  A lot of people working in startups will be extremely passionate about the mission of the company, and will likely spend a lot of time at work as a result.  These perks may have the consequence of people having little life outside of work, but I think they support those who would be committing their lives to the company regardless.

Choosing between the perks of a startup and the work-life balance of many larger companies is one of the personal decisions that everyone must make for themselves.  As Ross Williamson so cleverly put it: “only you can know what’s best for you.”  I think this is important to remember when there is so much hype around startups, when many people would be much happier joining an MNC and climbing the corporate ladder.

As for me, I’ve chosen to join a company with a definite startup culture.  Palantir is a privately owned tech company, replete with all the perks that some would claim are a means of begging me to never leave work.  So I know that maintaining work-life balance is going to be a challenge, and avoiding burnout is important.

Andrew Dumont nicely summarizes a few commonly touted ways of avoiding burnout.  I have already started trying to avoid burnout during my last semester here at Notre Dame, and I’ve gotta say that it’s been tough.  I’ve been attempting to follow all the best practices for avoiding burnout – working out, pleasure reading, taking time for myself / leisure / play, and eating healthy – and I still struggle to motivate myself to continue my academic focus.

Marissa Mayer asserts that people “can work arbitrarily hard for an arbitrary amount of time, but they will become resentful if work makes them miss things that are really important to them.”  I think this perfectly describes my attitude at the moment.  I kind of feel like I’m in purgatory, waiting to pay my dues until I can move on to greener pastures.  I somewhat resent being held in class when I could be doing great work at Palantir (and getting paid for it).  I’m trying to soak up my last semester with my friends, but spending time in class isn’t stimulating like it once was.

So when I go to work next year, I’ll continue working out, reading, and eating healthily.  I just have to trust that I’ll find my work more stimulating than my current academic pursuits.

Why Millennials Change Jobs and How Tech Companies are Responding

Managers from the Baby Boomer generation (and those from Gen X who have similar professional preferences) are likely to view millennials as disloyal, pampered, and self-important.  They’re not wrong.

At least, they’re not wrong in their understanding of employment and millennials’ preferences in the workplace.  This comparison of generational expectations in the workplace asserts that baby boomers expect hard work, sacrifice, and teamwork.  Conversely, I feel that millennials (especially those in tech) expect to be given difficult problems to cut their teeth on, resources to overcome obstacles in solving the problem, and enough autonomy to solve it their way.  To be fair, it is somewhat presumptuous to expect that you can add as much value as someone twenty years your senior, which explains why boomers might view millennials as self-important.  But that’s what tech promises: meritocracy.

So even if companies promise these opportunities to millennials, why do they change companies so often?  The short answer is that they are incentivized to do so.  In fact, this Forbes article explains how employees who stay in companies longer than two years earn 50% less than their peers who jump ship.  This is because the typical salary raise for an employee who stays at a company is 3%, versus 10% for those who are changing jobs.  Over a career, this 7% difference adds up, and someone who is on their 5th job would be earning significantly more than someone who is not.  I played around with this a bit, and came up with the following table detailing the difference:

[Table: Salary Comparison for Changing Jobs]

The years marked with an asterisk are those years where there was a 7% difference (10% versus 3%) in pay raise.  So over only 10 years, the occasional company change resulted in a 40% difference in salary (to say nothing of the aggregated lost wage difference).
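The arithmetic behind the table can be reproduced with a quick sketch.  I’m assuming here a job change every other year; the 3% and 10% raise figures come from the Forbes article above:

```python
def salary_after(years, change_every=None,
                 stay_raise=0.03, jump_raise=0.10):
    """Relative salary after `years`, starting from a base of 1.0."""
    salary = 1.0
    for year in range(1, years + 1):
        if change_every and year % change_every == 0:
            salary *= 1 + jump_raise   # job-change (asterisk) year: 10%
        else:
            salary *= 1 + stay_raise   # stayed put: 3%
    return salary

loyal = salary_after(10)                    # ~1.34x starting salary
jumper = salary_after(10, change_every=2)   # ~1.87x starting salary
# jumper finishes roughly 39% ahead of loyal, matching the ~40% above
```

Compounding does the work here: five 10% bumps in place of five 3% bumps is a modest difference in any single year, but the gap widens every year thereafter.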

When faced with this kind of data, it’s pretty hard to blame millennials for being disloyal.

Which brings us to how tech companies are responding, and subsequently why millennials can seem pampered (hint: it’s because we are).

I will use myself as a case study: this summer I will be joining Palantir Technologies in their London office. In my contract and Palantir’s website, the perks that are becoming increasingly commonplace abound: no set hours, media and gaming rooms, free food, etc. These perks make the office a great place to work, but that’s not all companies are doing to retain talent.

Stock options are a great way to align an individual’s incentives with company performance.  The more skeptical have argued that the drawback of this alignment is that stock options make it hard to leave a company.  As Julie Evans describes in her article detailing things you should know about stock options, there is often a vesting period, meaning the employee does not actually receive their stock options in full until several years after beginning employment (see the article for more details).  While I understand that this is a potential drawback, I have a serious bias in favor of anything that could be considered an alignment of incentives, so I support the granting of stock options.  In my mind, if you care enough about the problems a company is trying to solve, you will want to spend at least a few years there anyway.
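To make the vesting mechanics concrete, here is a sketch using a common schedule: a four-year vest with a one-year cliff (these particular numbers are my assumption for illustration, not necessarily those in Evans’s article or in any given offer):

```python
def vested_fraction(months_employed: int,
                    cliff_months: int = 12,
                    total_months: int = 48) -> float:
    """Fraction of a stock grant vested after `months_employed` months.

    Nothing vests before the cliff; after it, vesting accrues linearly
    by month until the grant is fully vested.
    """
    if months_employed < cliff_months:
        return 0.0    # leave before the cliff, walk away with nothing
    return min(months_employed, total_months) / total_months

# Leaving at month 11 forfeits the whole grant; at month 12, 25% vests:
assert vested_fraction(11) == 0.0
assert vested_fraction(12) == 0.25
```

The cliff is exactly the “hard to leave” dynamic the skeptics point to: the cost of quitting is highest right before a vesting milestone.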

So at the end of the day, every employee has to choose for themselves: chase a higher salary by changing jobs every couple years, or be satisfied with being well fed while you tackle challenging work.  This is a decision that employees must make every year: to stay or not to stay.

And the subtext underlying that presumption is why boomers think that millennials are disloyal, pampered, and self-important: we are, we know it, and we love it.

 

What is a hacker?

We’ve all seen the cliché image of a hacker so prevalent in the non-technical community: some pale teenager with glasses bent over a computer screen in the dark.  While this isn’t inaccurate in many cases, it is far from all-encompassing.

A hacker isn’t someone whose life is spent at a computer in the deepest reaches of the internet; a hacker is someone who builds great things with a computer.

Before reading how the computer science community portrays the typical hacker, I was fairly confident that I would not identify very strongly with their portrayal. I was wrong. In fact, I identify strongly with this portrait of J. Random Hacker.

More specifically, hackers are makers.  In his essay Hackers and Painters, Paul Graham writes that hackers and painters are more similar than you might think.  At its core, the goal of both professions is to make good things.  This is related to the argument of whether computer science is mostly mathematics, engineering, science, or art.  Graham pushes back against the idea of consolidating all of computer science into one field for this very reason: there are too many fields represented within it.  For him, hackers occupy the portion of computer science most similar to art, and a hacker is a maker more than anything else.

I like this perspective for characterizing a hacker because it is inclusive, instructive, and freeing. Focusing on building beautiful things frees a hacker from worrying about things unrelated to the quality of their product. Anyone can be a hacker; they just have to go out and build something they care about.

Others have written about how best to use this maker ability – implying that hackers have a moral imperative to use their talents. Sean Parker gives some tips for how to use hacking ability for philanthropic causes in his Philanthropy for Hackers. While one of his tips is to focus on ‘hackable’ problems (i.e. problems where developing software can genuinely resolve the issue), he seems very optimistic about the good that will result from hacker philanthropy, noting the fervor with which hackers are throwing themselves at tough and important problems.  For me, this too is what it means to be a hacker: to make the world a better place by leveraging relevant skills. This is imperative for any profession, and I think that hackers should aspire to be professional, even if they will never dress themselves in ‘business professional’ attire.

 

Of course, there are hackers whose primary philanthropy is the donation of wealth accumulated via successful technical ventures.  Most notably, Bill Gates and Mark Zuckerberg have pledged to donate most of their wealth in one form or another.  Michael Massing weighs the good and the bad of this form of philanthropy, but I ultimately support it.  While throwing money at problems is not nearly as effective as attacking them head-on, donating to organizations attempting to solve problems that are not ‘hackable’ is the best way these tech titans can address them.  This does not make these individuals any less hackers.  They built beautiful companies, and that turned out to be a great investment of their time and talents.  Not every hacker has to build software that directly solves a philanthropic problem, but I think every hacker has to discern for themselves how best to employ their talents.

So what is a hacker? A hacker is someone who wants to change the world for the better, and is more than capable of doing it by building beautiful things.

Oh, and a computer should probably be involved.

Why study Ethics in the context of Computer Science and Engineering?

Ethics is important, especially if you work in technology.

The study of ethics is sometimes regarded as common sense.  While some argue it is common sense that studying ethics is worthwhile, others might argue that what is ethically permissible is common sense.  I tend to agree with the former.

There are plenty of examples where two ethical paradigms disagree (Kant’s duty ethics and Mill’s utilitarian ethics famously tend to disagree), and so it is important for one to explore the field of ethics at a deeply personal level.

So if ethics is important, why is it especially important in the context of computer science?  One of the most compelling reasons is that, according to Marc Andreessen, software is eating the world.  In his words, ‘we are in the middle of a dramatic and broad technological and economic shift in which software companies are poised to take over large swathes of the economy’.  So software companies will increasingly control larger percentages of the global economy, and as Uncle Ben famously says, ‘with great power comes great responsibility’.

Those who work in software companies are implicitly or explicitly granted great power, and everyone in the field must grapple with some of the most difficult ethical problems today as a result. I want specifically to discuss quality, data privacy, and automation.

Being ethical in ensuring quality is uniquely important in computer science.  It is impossible to engineer perfection, and there are always trade-offs, so to what extent must one ensure quality in software, and how does one define quality?  For contrast, in consumer packaged goods, QA is often in charge of ensuring that the product ships with the marketed specifications.  When I worked at Procter & Gamble, our QA folks had to make sure that we had the right sheet count on our paper towel rolls, because we claimed a certain number of sheets on our packaging.  In computer science, the stakes are often higher, less well defined, and less predictable.  I say the stakes are often higher because if a software program fails, any number of catastrophic events can take place: nuclear meltdown, data security breach, etc.  Not only are the failures more severe, but the client or user is often unaware of how to specify the quality of the product.  While someone knows that their paper towels need to be absorbent, they might not know that they need the bank information they submit online to be encrypted.  Moreover, due to the ubiquitous nature of software, it is often difficult to determine every potential use for the software you develop – leading to poor-quality use in unintended applications.  So how thorough must a computer scientist be in ensuring the quality of her work in order to be ethical?

Big data capability is expanding, allowing us to do more than ever before: we can track and predict everything from terrorist activity to disease outbreaks with new-found success.  In his State of the Union address, President Obama put Joe Biden in charge of taking on cancer by leveraging big data.  However, all of this requires the use of personal information.  Data mining, big data analysis, and related fields all have to grapple with the ethical issues surrounding consent, notification, and security for personal information.  Facebook and Amazon both notoriously use personal information for targeted marketing: is that wrong?  Both are providing a great service to the user, but at what cost?  The cost of relinquishing personal privacy?  “So what,” some say, “the user consented to Facebook’s data privacy policy.”  But I’m not so sure that any twenty-year-old in the western world can elect not to have a Facebook account without suffering some social setbacks.  Again, having all of this power over personal information comes with great responsibility, and every Facebook programmer has to make the call for herself as to whether she believes this is ethical.

Programmers call it ‘automation’, but laborers call it ‘unemployment’. From one perspective, a 5 percent cut in costs and a 10 percent increase in output sounds like a no-brainer.  From the other perspective, 3 months without work sounds pretty grim. You could argue that the cost savings can be passed on to the customer. You could also argue that without the automation, the company might lose business to competitors who did automate, costing everyone in the company their job. That might not make it any easier to tell your employee that they should probably update their resume. While my own views on automation tend to be positive, it’s another example where the ethical decision isn’t always black and white.

So software is eating the world, and software companies are at the helm. The world is becoming increasingly digital, and there are a number of unique ethical issues facing computer science. With the power placed in the hands of software companies, it is exceptionally important that technologists concern themselves with ethics.

 

Zach Imholte: An Introduction (CSE40175)

I am a senior electrical engineering major at the University of Notre Dame.  I was born in Cincinnati, Ohio and have lived in the Midwest of the United States for my entire life.  That is, apart from the last year, which I spent studying abroad at the University of Oxford in England.  After graduation, I plan to return to the UK to work for Palantir Technologies as a Deployment Strategist in London.

My interests and hobbies are many and varied.  I tend to cycle obsessively through hobbies, investing just enough time to develop some level of mastery before moving on.  Some good examples are my waning interest in chess and my waxing interest in hobbyist electronics.

I always thought I was pretty good at chess growing up (as do many who perceive themselves as mildly intelligent), but it wasn’t until I lost to the president of my high school’s chess club that I realized how wrong I was.  So I started studying.  Every night, I would finish my homework around 10pm, then study chess for the next 3ish hours before bed.  I did this for a couple of months, until I was eventually able to consistently beat that same chess club president.  Since graduating high school, I’ve barely kept up with it.  This is a pretty common theme in my life as far as my hobbies go: I decide I want to be better at something, then I doggedly pursue self-betterment in that area, and eventually I move on to something else.

I decided to study electrical engineering because I wanted to learn about a field that was very hard to understand on my own.  I honestly lucked into how much I enjoyed electrical engineering, but I ultimately chose not to accept an electrical engineering job.  After multiple electrical engineering internships, I decided it wasn’t a strong fit.

I am hoping to leave this computer science ethics course with a deeper understanding of the most important arguments to be made on both sides of relevant professional issues today – I am somewhat of an armchair philosopher at heart, and I think this class will be professionally relevant for my work at Palantir.