Categories
Uncategorized

Why Increased Voter Turnout in 2020 Could Mean Woe

Daniel Freer, October 15, 2020

In the leadup to the 2020 US election, I have started to become interested in voter turnout, one aspect of the election that hasn’t been overly sensationalized by the media. Ironically, the more I dug, the more I began to realize that perhaps it should be, as high voter turnout can have many underlying implications, such as severe ideological division and fear about the future of the country.

Let’s take a historical perspective. There are a few things to note in the chart below from the United States Elections Project (where Wikipedia gets its voter turnout data), which plots voter turnout in the US throughout its entire history. The most relevant point for predicting turnout in the 2020 election is the sharp uptick at the end of the graph, in the midterm election of 2018. Voter turnout in 2018 rose to 50.0%, compared to 36.7% in 2014, a difference of 13.3 percentage points. While there have been substantial changes in midterm turnout at various points throughout history, this is the first swing of more than 10 percentage points since the drop between 1938 and 1942 (from 46.6% to 33.9%), in the throes of World War 2. More strikingly, it was the first 10-point increase in midterm turnout since… wait for it… 1796.

Voter Turnout Rates 1789-2018

You could claim this uptick is a fluke and assume that things will go back to normal. But recent reports have suggested that the 2020 election will also see increased turnout, with voter enthusiasm off the charts. Even before Covid, polling from Fox News showed that 82% of respondents were at least “very” interested in the election, a figure that rose to 89% in October. Of course, it is highly unlikely that 89% of the voting-eligible population will actually come out to vote, but this poll, and others like it, still indicate that voter enthusiasm in 2020 is on the order of 20 or 30 percentage points higher than it was in… say… the year 2000. Based on this, 2020 could be the first year since 1900 (before women had the right to vote, I might add) in which voter turnout rises above 70% in the United States. And because previous voters are more likely to vote again, this voting bump is unlikely to fade quickly, which could lead to a long sustained period of high voter turnout.

And this makes sense. Tensions in the United States are incredibly high right now, with people on the right claiming a Biden administration will turn the US into a communist country, and people on the left saying Trump wants to preside over a fascist white nationalist regime like Hitler. People are dying from Covid, and protesting in the streets. It’s only natural that when people feel so strongly about something, they want to act.

From the same chart above, you can also see some small bumps in voter turnout that roughly correspond to the Great Depression (1929-1940) and the Civil Rights movement (1954-1968), two periods of great struggle for the United States. But the “bump” in voter turnout during these times was only about 10 percentage points, and overall turnout never reached the levels achieved in the 2018 midterms or expected in 2020.

Therefore, if voting in 2020 reaches above 70%, the best historical comparison may become the years between 1840 and 1900. These are the years in which voter turnout was highest in United States history, a long sustained period of voter turnout higher than 70%, and at times reaching above 80%.

So what was happening between 1840 and 1900? The first thing to come to mind is, of course, the Civil War, which started in 1861 after the election of Abraham Lincoln. But even in 1840, the abolitionist movement was in full swing, and the country was greatly divided by ideology. From 1840 to 1860, no single president held office for longer than four years, as swing voters oscillated between the Democrats and the Whigs until the Whigs’ demise after the 1852 election. With the Whigs gone, several new political parties began to gain prominence. Eventually, the Republicans emerged from the smoke with Lincoln’s win in 1860 and held onto presidential power until 1884. Shortly thereafter, voter enthusiasm began to decline once again.

Then what would this mean for 2020, if history were to repeat itself? The country would go through a long period of uncertainty, with shifts in power every election cycle. Eventually, one political party would be destroyed, and a new party would emerge to define the country for years to come. There might even be a civil war, in which those who reject the ideology of the new political party believe the only way to stand up for themselves is to spill blood. This may seem farfetched, but based on what I’ve been hearing and seeing, I’m beginning to think that such things might just happen.

Now of course, all of this is just speculation. Turnout could end up being down, especially with the pandemic looming. The jump seen in voter turnout in 2018 was unprecedented in modern times, and we can’t truly compare the 1840 US to the 2020 US. There were only 26 states in 1840, and only white men could vote. But no matter what the result of the 2020 election is, the ideological fighting isn’t over. In fact, it’s only the beginning.

As a closing thought, I would like to reach out to my fellow Americans. A period of great and unprecedented change has been thrust upon us, whether we like it or not. If you care about the future of the United States, then vote. I hope that everybody votes, and I’m glad that so many people now believe that their vote matters. It does. But an increase in voter turnout doesn’t necessarily indicate that certainty and peace are coming. In fact, history has shown us that it might indicate the opposite. Be prepared.


Racial Progress vs. Law and Order: The United States’ Internal Struggle

Daniel Freer, July 17, 2020

By 1960, the Democratic Party had been devastated in two consecutive elections, losing to a rather uninspiring President Eisenhower. After Franklin Roosevelt’s dominant leadership for the 13 years between 1932 and his death in April of 1945, the Democrats tried to ride out his vision for as long as possible, with the reelection of Truman in 1948. But Truman hadn’t been nearly as inspiring as his predecessor, and so his appeal faded as he made several unpopular mistakes related to the global spread of Communism. Luckily, their Republican opponents didn’t have a clear vision for the future at the time either.

John F. Kennedy was by no means the most influential Civil Rights leader of the time (see Martin Luther King Jr., or Malcolm X), but he began to support their message even before he was elected, winning 70 percent of the black vote in his electoral win in 1960. That being said, John F. Kennedy’s election in 1960 was very narrow, and his win was certainly not because of his Civil Rights vision. And yet, JFK ultimately became the harbinger of a new Democratic Party: one which focused on the progression of Civil Rights.

The support of black Americans by the president, and the Democratic party in turn, created anger among many who saw the United States as a white country that should stay that way. This led to, among other things, a realignment of voters, particularly in the south, who saw racial integration as an affront to their way of life. Those opposed to Civil Rights began to vote against Democrats, no matter who they were. Even in the wake of JFK’s assassination in 1963, voters in southern states switched their support to the Republican party, largely as a protest vote against Civil Rights.

However, as the Civil Rights movement heated up, it began to create concern among average white Americans across the country. In 1968, the high-profile assassinations of Martin Luther King Jr. and Robert Kennedy (who had been a Democratic candidate for president) exacerbated these concerns. Richard Nixon chose to listen to these concerned white Americans, and ran his campaign on a promise to restore “Law and Order” to the country. Though southern states initially rejected Nixon in favor of George Wallace, the governor of Alabama, they came around to his message in 1972, and “Law and Order” appeared to have won as a prevailing message of the times.

“Law and Order” was truly a brilliant message. It gave the government reason to quash protests, and to deny racial progress, in the name of peace and prosperity. It ensured the support of both southerners who opposed racial equality and moderate whites who would rather stop the boat from rocking than fight for the rights of somebody else. The message was so compelling, in fact, that Democrats began to adopt it themselves after embarrassing electoral defeats throughout the 1970s and 1980s, excluding Jimmy Carter’s win in 1976 (which was largely a reaction to the Watergate scandal). Bill Clinton’s now-infamous 1994 crime bill, for example, was an attempt to show that Democrats could maintain “Law and Order” just as well as Republicans.

With Barack Obama’s election in 2008, everything changed once again. The Democratic Party began to return to the focus it laid out in the 1960s: equality. Obama had inherited the legacy of JFK and Martin Luther King Jr., men who had become influential when he was still learning to walk. Since then, a new Civil Rights movement has begun to brew, with renewed protests, this time over the overpolicing of communities of color, accompanied by the toppling of monuments perceived as racist.

Clearly, the fight between progress and law and order is still not over. Perhaps it never will be. Any push for change causes unexpected results, and creates a backlash against it. Even when it isn’t about racial equality, it will be about something else. And as history has shown, the fight is not one between political parties, but can also be within them, and within each one of us as individuals. There can be no progress without turmoil, and there can be no order with constant change.

However, one thing has become clear in the last 4 years: Donald Trump and the Republican Party that supports him do not believe in the “Law and Order” that was a staple of past Republican administrations. Trump has constantly skirted the law, if not outright broken it, and has been supported every step of the way by his colleagues, with few exceptions. The country has never been in less order, mostly due to the failure of Trump’s administration to adequately prepare for any unexpected event, as seen in its disastrous responses to the Covid-19 pandemic and the #BlackLivesMatter protests, among other things.

So if you vote for Donald Trump this year, you aren’t voting for “Law and Order”. And you certainly aren’t voting for progress. A vote for Donald Trump is no more than a vote against JFK’s legacy, and a rejection of the racial equality that should have been achieved years ago.


Basic Image Processing: Direct Pixel Modification

While considering and changing individual pixels isn’t the most efficient way of modifying an image, it can certainly help a person understand what is truly happening at the pixel level. In this code, we will modify the pixels to make the image look more “cartoonish”, with solid colors replacing all colors within a given range. In this implementation I chose numbers that were suitable for my particular picture but may not be best for yours, so I encourage you to change the numbers as you see fit and observe how this changes the result.

In the above code, we define a new function called color_pic, which takes as input the image that you would like to cartoonify, evaluates each pixel on its own, modifies it, and shows the final image at the end. The first thing we do is copy the image into a separate variable, so that the original values of the image can still be accessed by other parts of the code later. In many cases, this step isn’t necessary, but if you are not constrained for memory, it is worth being safe rather than ending up with unexpected values. After this, the for loops begin.

In this code, we make use of two “for loops” to consider each pixel in the vertical and horizontal directions, respectively, which are also the first and second dimensions of the image. The third dimension, again, tells you which of the three colors (blue, green, or red) you are currently considering. I should note that for loops are generally pretty inefficient, but here they make it easy to see exactly what is happening to each individual pixel. The for loops are constructed using the range function, which is native to Python and generates the numbers within a range you define. If you type print(list(range(5))), it will return a list: [0, 1, 2, 3, 4]. If you type print(list(range(2, 5))), it will return a list: [2, 3, 4]. (In Python 3, range itself returns a range object rather than a list, which is why we wrap it in list() to see the numbers.) There are additional options for range, but they aren’t important for the purposes of this exercise. Using the “for loop” essentially just means that the program will consider all of the listed cases for a variable. For example, if you type for i in range(5): print(i), the program will print lines listing 0, 1, 2, 3, and 4, as these are the different values that i will take before the for loop is complete. In our case, i will represent all of the numbers between 0 and the height of your input image (which is the first dimension of the image, and can be found by typing my_im.shape[0], or im.shape[0] if you do it within the function). The second dimension of the image (the horizontal direction) can similarly be found by typing my_im.shape[1] or im.shape[1].

So, within the for loops, calling im[i, j] indicates that you are considering a single pixel which is i pixels down from the top of the image and j pixels right from the left of the image. Typing print(im[i, j]) will return 3 values which correspond to the blue, green, and red values that are present in the pixel you are currently considering. Each of these values can additionally be individually called using im[i, j, 0], im[i, j, 1], and im[i, j, 2], respectively. The rest of the code uses these constructions to determine how to modify the image.

Line 38, for example, determines whether the currently considered pixel contains B, G, and R values that are all above 200. As the maximum value of each is only 255, this means that these pixels are almost completely white (if you’re confused about this, please consult this website again). Line 39 then sets any pixel meeting this requirement to be exactly white, with maximum values of B, G, and R for that pixel. Similarly, line 40 determines whether the B value is greater than 165. If so, it changes this pixel to a pre-defined color with B, G, and R values of 230, 150, and 20, respectively. Each pixel goes through many similar checks in lines 38-54, each considering whether it falls within a pre-specified range of B, G, or R values. If the pixel doesn’t meet any of these conditions, it is finally assigned a sort of dark grey color with B, G, and R values each equal to 50.
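Since the original code itself isn’t reproduced in this post, here is a minimal sketch of what a color_pic function along these lines might look like. The threshold values (200, 165, and so on) are the ones discussed above, but the structure of the remaining checks is an assumption on my part, as only a few of lines 38-54 are described:

```python
import numpy as np

def color_pic(im):
    # Copy the image first so the original pixel values remain untouched
    out = im.copy()
    for i in range(im.shape[0]):        # rows: 0 .. height-1
        for j in range(im.shape[1]):    # columns: 0 .. width-1
            b, g, r = int(im[i, j, 0]), int(im[i, j, 1]), int(im[i, j, 2])
            if b > 200 and g > 200 and r > 200:
                out[i, j] = [255, 255, 255]   # near-white -> pure white
            elif b > 165:
                out[i, j] = [230, 150, 20]    # strong blue -> solid blue
            # ... further B/G/R range checks (lines 40-53) would go here ...
            else:
                out[i, j] = [50, 50, 50]      # fallback: dark grey
    return out
```

To run it on your own picture you would call cv2.imshow('Cartoon', color_pic(my_im)) followed by cv2.waitKey(0), just as in the earlier examples.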

The result of this code (you will notice it is slow compared to the others, due to the inefficiency of the for loops) is shown below. You can imagine how a similar technique (with some tweaking and polishing) could achieve something similar to the previously very popular Obama “HOPE” photo, also shown below:


Basic Image Processing: Edge Detection

The next technique that we will discuss is edge detection, which is one of the simplest ways of detecting “features” in an image. When talking about image features, what we really mean is something in the image that can be mathematically defined and that distinguishes one part of an image from the rest. The pixel values themselves can be considered features, but it is often helpful to extract other types of features that allow us to generalize about different regions of an image. This method of edge detection, similar to image blurring, moves a kernel throughout the entire image, and highlights the pixels whose values change dramatically compared to their neighbors. For this technique, we will define our own filter (albeit a common one called the Sobel operator) and apply it to the entire image.

Looking at the functions defined below, you should notice that they are almost exactly the same as the blur_image function we previously defined. The only difference comes in the first line (lines 15-17 and lines 25-27, respectively), where we define the kernel. I should note that while this definition takes up 3 lines visually, the program treats it as only one statement, because the brackets opened on lines 15 and 16 are never closed on those same lines. You can see that the function we use (np.array) has a “([[” right after the function name, while the end of the line contains only “],”. These first three lines could equally have been typed on one line: “hor_kernel = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]])”, but I chose to separate them into 3 lines in order to emphasize that we are, in fact, defining a 2-dimensional 3×3 array to serve as the filter. Notice, as well, that in defining the array, we have called the NumPy library (shortened to np).

A Sobel filter is always oriented with negatives on one side, positives on the other, and zeros in the middle. Applying any filter to a group of pixels is actually done via a process called convolution, which is demonstrated on the right below. Here, the Sobel filter is being applied to two pixels, the orange and the blue. In convolution, you place the center pixel of the filter over the pixel you are currently evaluating, multiply each pair of overlapping elements, and add everything together. This results in a value of -23 for the orange square and -310 for the blue square. In practice, however, only the magnitude of these values is used as the edge strength, while the sign reflects the edge direction. When additionally using a vertical edge filter (as defined in our function vert_edge below), the overall edge strength can be calculated, as well as a specific angle, based on the ratio of these two edge values for a particular pixel. It should be noted that detecting the edges of a color image requires running through the 2-dimensional image 3 times, once each for the blue, green, and red values of a given pixel. This results in edge strengths for each color, which can then be combined to form the final edge-detected image.
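Since the hor_edge and vert_edge functions aren’t reproduced in this post, the sketch below shows one plausible version of the horizontal filter applied by explicit convolution, for a single-channel image. The kernel is the Sobel operator defined above; the loop bounds and the exact function body are my own assumptions:

```python
import numpy as np

hor_kernel = np.array([[1, 0, -1],
                       [2, 0, -2],
                       [1, 0, -1]])

def hor_edge(im):
    """Slide the 3x3 Sobel kernel over a single-channel image and keep
    the magnitude of the response at each pixel."""
    h, w = im.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):          # skip the 1-pixel border
        for j in range(1, w - 1):
            patch = im[i - 1:i + 2, j - 1:j + 2].astype(int)
            # multiply overlapping elements, sum, then take the magnitude
            out[i, j] = abs((patch * hor_kernel).sum())
    return out
```

For a full color image you would run this once per channel (and likewise with the vertical kernel), then combine the results as described above.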

After typing the above functions into your Image_Process.py file, you can test them in the same way as demonstrated by the blur_image function above. You should see images that look similar to the below pictures. Notice the highlighted areas for the horizontal and vertical filters, respectively. The horizontal kernel results in mostly vertical lines, while the vertical kernel results in horizontal lines, because they are horizontally and vertically comparing pixels, respectively:

We can also write a function to combine these two edge detectors together through addition. This may not be the most effective way of combining them, but it can achieve some interesting results. Note that the only reason that direct addition works is because of the datatype of the numbers. These numbers are represented as uint8, meaning an unsigned integer with 8 bits. As each bit is binary, such a number can only hold 2^8 = 256 possible values, matching the values between 0 and 255 which are used for the B, G, and R values of each pixel. You can think of this by saying that any result of 256 or higher has 256 subtracted from it (repeatedly, if necessary) until it fits within these constraints. In the actual computer, it just means that any bits past the 8th bit aren’t considered, as they haven’t been allocated in memory. The code and results of this can be seen below:
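As a small illustration of that wrap-around behaviour (the function name combine_edges is hypothetical, since the original combining code isn’t shown here), direct uint8 addition in NumPy behaves like this:

```python
import numpy as np

def combine_edges(hor, vert):
    # uint8 addition wraps modulo 256, exactly as described above
    return hor + vert

hor = np.array([200, 10], dtype=np.uint8)
vert = np.array([100, 20], dtype=np.uint8)
print(combine_edges(hor, vert))   # [44 30] -- 300 wraps to 300 - 256 = 44
```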


#BlackLivesMatter

Daniel Freer, June 3, 2020

Over the past few days, with the #BlackLivesMatter protests happening in cities around the nation (and the globe), I have started to reconsider some of my previous ideas about freedom. I do care about having the freedom to do and say what I want. But especially in these times, I realize that not all Americans have these freedoms, and it’s time to stop pretending otherwise.

George Floyd no longer has the freedom to do and say what he wants. Neither does Trayvon Martin, or Philando Castile, or the numerous other black and brown people who have been killed prematurely, without due process, and without even being able to speak or stand up for themselves.

As a person that values freedom, this is a tragedy.

And it isn’t even just the dead people that are denied their basic freedoms. Living, breathing black people began to protest Floyd’s death and were met with armed policemen, teargas, and rubber bullet wounds.

Compare this to the Covid-19 lockdown protests, where gun-wielding people screamed out absurd conspiracy theories about 5G causing a virus, or claiming that the reported deaths from the virus are fake, or unimportant. And they stood there, essentially, for their right to stand there. They stood there for their right to endanger other people in the midst of a pandemic. For these protests, the police were subdued. You could argue that this is because the protesters had guns (so police were not likely to get violent), but if #BlackLivesMatter activists brought guns to their protest, you can be assured the violence would only increase.

Trump, for his part, has vigorously supported one of these groups of protesters, and has sent the military in on the other. And he has certainly fought for his own freedom to Tweet out lies and hate. If you value freedom, Mr. Trump, value it for all people. Not just white people who support you.

But I’m still not done. People may say: what about the vandalism and looting? Aren’t the protesters just committing crimes now?

The simple answer: No.

The more complex answer: Yes, some people are committing “crimes” during the protests. Some of these “crimes” may be committed by real protesters, but in most of these cases, I do not view their actions negatively. For example, in my hometown of Asheville, North Carolina, one of our most notable downtown destinations is the Vance Monument. Protesters vandalized it, spray-painting “Black Lives Matter” on it. At first, I was upset about this, but as I read further I learned that Zebulon Vance, the man whom the monument is named after, was extremely racist. He fought in the Confederacy in order to keep slavery in the south, hoped to prevent black people from participating in government, and signed bills preventing interracial marriage. So is vandalizing this monument illegal? Yes. Is it bad? No. We need to do better, and sometimes you have to break something down in order to build it up again stronger.

However, there are some people who are vandalizing things and looting random stores, not to further the cause, but to enrich themselves. These are crimes. And people that are doing this without putting any thought into the larger goal cannot be considered true protesters. And as a result, the protesters cannot really be blamed for this.

I have a question for any policemen out there. If you are injuring the people in your city, forcing them not to speak and limiting their ability to protest, who are you fighting for? I always thought the job of a policeman is to protect all people in your community. But right now it seems like their job is just to protect the select few that buy their uniforms or pay their salary. Even if you are protecting businesses and their assets, why are you valuing these things above the actual health of the people in your community? You are an individual, and a human, and I think the best way to end these protests is to show your humanity and to connect with the disaffected community. I know being a policeman is not an easy job, and I generally respect the people that have chosen to undertake it as a profession, but I don’t know how you can justify hurting innocent people (even accidentally, much less on purpose) when your job is to do literally the opposite of that.

But rather than preaching empathy, Trump has encouraged more violence on the side of police, and more militarization of the crisis. He has linked all looting and vandalism to the protesters, even though most of this has nothing to do with the movement. And he has not yet come up with a single idea to improve the situation, other than the police showing more strength. For people afraid of overbearing governments and militaristic crack-downs on dissent, look no further than the United States. The main goal of the police and the government right now appears to be the quelling of black voices, or in fact any voice that wants to demilitarize the police. And militarized responses from the police only increase the divide in the community, proving to the police that their weapons give them the strength to control crowds, and proving to the protesters that the police are assholes that don’t care about their plight.

So yes, I support the #BlackLivesMatter protesters. And yes, I believe that the general responses from the police and the president have only hurt the nation further. And in the coming years, if we as a country continue to insist that we are the land of freedom, then I must insist that these freedoms are extended to everyone.


Basic Image Processing: Image Blur

For this section, if you haven’t yet set up your Python environment, please follow the previous tutorial. To see the full list of tutorials, see the main AI Tutorial Page.

At the top of your script (Image_Process.py), type the following pieces of code:

The first piece of code on the left (lines 1-3) imports the packages that you need. “cv2” refers to OpenCV, while “numpy” refers to NumPy. Typing “import … as …” doesn’t really change anything, but it allows you to use a shorthand notation for a package when referring to it in code. In this case, we will use the common shorthand for the NumPy library, “np”. Any time you would like to call something from a library or package in Python, you simply type the name that you have imported, then a period, then the function, class, or element you wish to access from that library. Such notation can be seen in all of lines 7-10, where you call the OpenCV library by typing “cv2.getGaussianKernel(20, 5)” or “cv2.waitKey(0)”.

The next block of code on the left defines a function. This is achieved using the notation “def name_of_function(parameters):”. Defining a function is useful when it performs an action that you may want to perform many times; even otherwise, it helps make your code look nicer and more readable. In this case, we are defining a function to blur an image. Therefore, I’ve named the function blur_image, with a single input (im), representing the image that we wish to blur.

Line 7 defines a kernel (which can also just be considered an image filter) that is built into OpenCV. This type of filter is known as a Gaussian, which is commonly used in many fields; in image processing it is most often used to blur images. The two parameters entered into this function (20, 5) indicate the size of the kernel and the standard deviation (intensity) of the blur, both of which affect the final image output (examples later). Line 8 applies this filter to the entire image, using the image and the kernel as inputs. Line 9 shows the modified image in a window, with the window’s name as the 1st parameter and the image as the 2nd, and Line 10 ensures that the program doesn’t continue until you press a key on your keyboard. Notice that all of these lines call functions that are built into OpenCV. This is one of the most powerful things about Python: there are already many existing functions that do exactly what you need them to. The trick is in knowing how and when to use them.

Below this function, type the code on the right. It first declares the filename of the picture you are hoping to use. In this case, I have used a picture that my wife took of me in Scotland, at the Eilean Donan castle, and so I have named it ‘EileanDonan.jpg’. The quotation marks indicate that this variable is a string, which means that it can be read by a computer as letters (or anything that you can type on a keyboard), but does not hold any numerical value. Notice that these are also present in line 9 on the left side. For example, if you were to type: sum = 2+2; print(sum), then the computer would, correctly, print 4. On the other hand, if you typed: sum = ‘2+2’; print(sum), then the computer would print 2+2, as it reads each of those individual components as nothing more than a letter. Likewise, if I were to type filename = EileanDonan.jpg, then the computer would throw an error, not understanding that I wanted it to use this combination of letters, and instead would be searching for a different type of variable which it wouldn’t be able to find. In your case, you should find a photo that you want to edit, move it to the same folder as Image_Process.py, and type in the name of the file in quotation marks, as demonstrated above.

The next line on the right block (my_im = cv2.imread) reads the information from the image file and converts it into a format that can be handled by the program. In this case, the format is essentially a 3-dimensional array with the shape [height x width x 3]. The height and width components depend on the particular image that you load, and represent the number of pixels in the image in the vertical and horizontal directions, respectively. You can find these numbers by typing print(my_im.shape) after loading the image with this line of code. The 3 refers to the 3 components of color (blue, green, and red) that, when put together, make up the complex colors that we are able to see in the image, and the values in the array represent how much of each color is present for a given pixel. To explore this concept a little more, you can use this tool (not created by me). The next two lines, similar to lines 9 and 10 on the left side, show the image and wait for the user to press a key. Now, run the code by pressing the play button in one of the three places marked with the red boxes (note that the menu bar appears after right-clicking your filename at the top of the screen):

Your code should show you your original image, then a blurred image. Below are my results for the initial image, a blurred image using (5, 5) as input parameters to the Gaussian function (top right), then a blurred image using (20, 5) as input (bottom left), then one using (20, 20) as input (bottom right). You can see that the bottom left picture is much more blurred than the top right because it used a larger kernel to do the blurring, even though the intensity of the blur was the same. Meanwhile, the bottom right picture uses the same size kernel as the bottom left, but a much higher intensity, which results in some perceived double vision, as can be seen most noticeably with the stripes and the glasses.


AI Tutorial: Python (PyCharm) Setup

I’ve primarily written this section for people who are completely new to Python and programming. If you are comfortable with setting up a Python environment, you can skip to Part 2.

For everyone else, first download and install the newest versions of Python and PyCharm. This should be straightforward, but if you have any issues, please consult their websites. I should note that PyCharm is just one of the many IDEs that you could use to facilitate your Python code, but it is the one that I first became accustomed to when I was learning Python, so I’ve chosen it for this tutorial as well.

Once they are both installed, open PyCharm and start a new project. Name it whatever you would like, as long as you can remember it and find it later. I will name mine “Tutorials”. Click File > New, then select Python File. In the “New Python file” box, type the name of the file (in this case, Image_Process). This will create a new file in your project folder. In this case, it is C:\Users\Daniel\PycharmProjects\Tutorials\Image_Process.py.

Now, we need to set up our Python environment. Go to File > Settings and you should see the following window, though yours may have fewer packages listed. First, ensure that your Project Interpreter (shown with a blue box) is correct. This interpreter indicates the version of Python you are using, as well as the packages that you have installed for a particular environment. I have just chosen the Python 3.8 system interpreter, which includes all of the packages installed locally on the system. Another popular option is to create a separate virtual environment for each project, which ensures that projects requiring different versions of a package, or different versions of Python, remain separate and do not interfere with one another. To do this, click the “Settings” wheel on the right side of the blue box, select “Add…”, then the “Virtualenv Environment” option. However, that is not the topic of this tutorial.

Next, we need to actually install the necessary packages within the Project Interpreter. Any time you get a ModuleNotFoundError when running your code, it indicates that you have not installed a package that you are attempting to import into your program, so this is where you will need to come to install those packages into your current environment. Even if you have installed a package elsewhere, if it isn’t present in the environment you are currently using, it will not be accepted in the program. To install a package, click the “+” button indicated by the red box, and search for the package. For the ImageProcess tutorial, you’ll need to install “opencv-python” and “numpy” (which may be installed automatically during installation of opencv-python). OpenCV is one of the most popular libraries used for image processing. Originally written in C/C++, it has been adapted to be usable in Python. NumPy, on the other hand, is a package for handling numbers and numerical structures such as arrays and matrices.
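If you prefer the command line to PyCharm’s package window, the same installation can be done with pip, run from the terminal of whichever interpreter or environment you selected above (equivalent to clicking the “+” button):

```shell
# Install the two packages used in this tutorial into the active environment.
pip install opencv-python numpy
```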

Once you have the necessary packages installed, we are ready to start coding.

Categories
Uncategorized

A Global Look Back: Phase 1 of Covid-19

Daniel Freer, May 18, 2020

In what appears to be a lull for Covid-19, we are now able to look back on this crisis (or at least the first part of it) with 20/20 vision for the first time to see what went wrong with various healthcare systems, and how these systems can be improved in the future. In this essay, I’m going to assume that the numbers reported by most major media outlets and international organizations are approximately correct. I have also been closely following the numbers on Worldometers.info, though I have recently heard (from CNN, so take it with a grain of salt) that the source behind this site is mysterious and unknown. But that is beside the point.

I’m going to attempt to use two metrics to describe a country’s response to the pandemic: 1) infections per capita; and 2) death rate. Infections per capita (i.e. 58 Covid-19 cases for every 1 million people) can generally reflect how well the country did at stopping the virus from spreading. Meanwhile, death rate (as deaths/the number of infected) generally reflects how well equipped hospitals are to handle a lot of patients. Death from Covid-19 (at least from my understanding) has largely occurred due to overwhelmed and under-equipped hospitals that couldn’t keep up with an increased and intensifying workload.

Let’s look at some numbers (according to Worldometers.info on May 18, 2020):

| Country       | Total cases (active cases) | Infections per 1M | Death Rate |
|---------------|----------------------------|-------------------|------------|
| United States | 1,527,664 (1,090,297)      | 4,619             | 5.9%       |
| China         | 82,954 (82)                | 58                | 5.2%       |
| South Korea   | 11,065 (898)               | 216               | 2.3%       |
| Japan         | 16,285 (4,388)             | 129               | 4.7%       |
| UK            | 243,695 (N/A)              | 3,592             | 14.2%      |
| Spain         | 277,719 (54,124)           | 5,940             | 10.0%      |
| Italy         | 225,435 (68,315)           | 3,728             | 14.2%      |
| Germany       | 176,651 (14,002)           | 2,109             | 4.6%       |
| France        | 179,569 (90,248)           | 2,752             | 15.7%      |
Cases, infections per capita, and death rate of some key countries during the Covid-19 pandemic

So, from these numbers we can see that the United States has a similar death rate to China, while South Korea’s appears to be the best of all countries with a significant amount of cases. Germany is the lowest in Europe, though in general Western European death rates are comparatively high. In terms of how far infection spread, however, the United States was on par with Europe, while Asian countries were clearly much more effective in stopping the spread of the virus within their own communities.

Why is this? Because these Asian governments were able to quickly discover who had the virus and efficiently disseminate a clear message about health and safety to their people (wear masks, wash your hands, avoid contact with others). And the people listened to this message and responsibly followed instructions.

Any increase in infections per capita or in death rate can therefore reflect a country’s failure to do these things. The first (1), discovering who has the virus, should be the job of the healthcare system. The second (2), the dissemination of clear information to people about how to protect themselves, is the job of the government and the media. The third (3) is about the people and the culture: whether they listen to authority figures, and whether they act responsibly.

1. Discovering who has the virus

A strange and unpredictable disease was first officially reported in Wuhan, China at the end of December. It is known that the virus was circulating before this, however, both in China and in other parts of the world. But China was the first to identify it as a threat and report their findings. This may have been because the virus did start in Wuhan, but based on the existing evidence, we cannot be 100% sure that this is true, as patient zero has not been found, and likely never will be.

However, imagine for a moment that the United States had no contact with China whatsoever in December and January, and we were completely unaware that there was a disease there. Imagine that a disease started spreading in New York, with the first cases being reported in the middle of March. Imagine that we couldn’t find the direct source of the virus, but attempted to figure it out as several thousand new cases were reported per day. This was the situation in Wuhan, China, which has a similar population to New York City. However, even on this level playing field, if you consider the differences in case numbers between the two states/provinces, the Chinese response was much better, with about 68,000 total cases and 4,500 deaths in Hubei province as compared to 360,000 cases and 28,000 deaths in New York. But in reality, it wasn’t a level playing field. New York had a huge advantage in that they were told about the virus several months before its spike in reported cases, and still failed to stop its spread.

So what is the cause of this failure in the US and Europe? My theory: it is because not enough people went to the doctor.

In China, the first people to notice that a unique and dangerous virus was spreading were doctors. They reported it to their superiors, who told them not to say anything publicly until they had more information. Then, when they had more information, it was disclosed publicly. However, doctors can only notice such a trend if a high percentage of the population actually receives medical attention when they need it. This was not the case in either Europe or the US.

In Europe, I attribute the high amount of infections per capita to their overwhelmed healthcare systems, which can be seen by their high death rates (except in Germany). Because their hospitals didn’t have enough space for the extreme influx of patients, many people who were sick were forced to stay home, and many who needed to see a doctor were turned away. This caused more interaction between, for example, roommates or family, which created additional avenues for the virus to spread. While hospitals in Wuhan were similarly overwhelmed, they quickly built new hospitals to deal with the overflow, and doctors from other parts of the country relocated to help out. Such a response in Europe did not occur, and would likely be impossible.

In the US, however, the healthcare system was not as overwhelmed as Western European ones, as can be surmised from the relatively low death rate. But this means that the United States’ high infectivity rate requires a different explanation. My theory relates to the fact that the US largely has a reactive rather than a proactive healthcare system. Because healthcare in the United States is so expensive, people (especially those who are uninsured) are discouraged from seeing a doctor regularly, or even when a problem does arise. They simply aren’t able to afford it, and their body usually gets better anyway. This leads to increases in spread of the virus for the same reason as in Europe: more sick people are around other people rather than in a hospital. In this case, it would also take doctors longer to notice the appearance of a new virus, as the majority of people with the virus may not even contact a doctor to get it checked.

2. Dissemination of information

The next important task to handle an epidemic is to get relevant and helpful information out to the people, allowing them to spread the message further and protect themselves. While I know China’s government has received a lot of criticism on this front, the fact remains that they stopped the spread of the virus much more efficiently than both European countries and the United States. Other Asian countries were also very successful in warning the public about the dangers of this disease, which largely stopped its spread before it became a massive problem.

The United States’ messaging on coronavirus has been utterly terrible. First, the initial claim by the CDC (and many other American and European media sources) that masks are not effective, and then a complete reversal of this opinion several months later. The constant fear-mongering of the media and the blaming of the “other side” rather than attempting to actually solve problems. And of course, the President of the United States, who first said that the virus was nothing to worry about, who actively tried to prevent “the numbers” from increasing to avoid a stock market fall, who spouts random accusations at enemies in order to distract, rather than attempting to actually explain what’s happening to the American people. In fact, he has actively spread misinformation on numerous occasions, though we can’t be certain if this is intentional, or just idiocy.

As I haven’t been in Europe at all for this pandemic, I can’t comment too much on their dissemination of information. However, the UK’s initial decision to try to achieve herd immunity and then their quick reversal gave whiplash to the public and likely led to more spread of the disease. And there were also reports in the early stages of the pandemic that some European governments or institutions were actively discouraging the use of masks, then again changed course once they realized that masks could, in fact, prevent further spread of the disease. Both of these were crucial messaging mistakes which worsened the situation for their own people.

3. Culture

East Asians tend to be more cautious about their health, at least in some ways. I am not Asian, but my wife is Chinese, so I have gotten a bit of a glimpse into this thought process during this crisis. For all of the Americans complaining about having to wear a mask: When many Chinese students flew home from the UK (or elsewhere) to China, they wore hazmat suits. They didn’t eat from the time their first flight took off to the time their last flight landed, which was often more than 24 hours, sometimes even more than 40. These measures, while they might be a bit overkill, were almost certainly effective in some way in preventing spread of the virus.

Americans, on the other hand, don’t like being told what to do and have an incredible amount of self-confidence, even when they are wrong. This has led to significant healthcare issues in the United States even before Covid-19 was ever seen, from diabetes to tiger bites (shout out to Saff). Because of this part of American culture, we have seen hundreds of protests around the country, demanding that government lockdowns be lifted. These gatherings themselves have reportedly spread the virus further than otherwise, and have also been fairly effective in getting businesses to open earlier than they probably should. We will see if our indulgent culture will be able to add an element of responsibility during this reopening, or whether the disease will have a second wave even before the first wave has finished.

I don’t want to change American culture. I am American, so I understand it and might even feed into it a bit more than I should. But at the same time, I want Americans, and all people, to be as safe and healthy as possible, as I believe that this leads to more long-term happiness, which leads to prosperity. And sometimes you have to sacrifice your immediate happiness in order to make it easier on yourself later on.

—————————————————————————————–

So in summary, these are the main differences I see between initial responses around the world, and what they could focus on to improve their response to a similar future crisis:

Asian countries have successfully revamped their healthcare systems as a response to SARS in 2003 in order to effectively handle epidemics of this nature. While they are not beyond criticism, we should look at their response to this crisis in an overall positive light, and should look to more closely emulate it should a future epidemic or pandemic occur.  

Western European healthcare systems do not have enough space or resources at their hospitals for the number of people that live there if a crisis occurs. As a result, they should push to build more hospitals, or increase the capacity of their existing hospitals, and additionally should try to incentivize students to pursue healthcare-related professions.

The American healthcare system is too expensive, discouraging frequent doctors’ visits, but we do generally have enough medical resources for our people. Our government and our media have not provided a consistent or even coherent message at times, leading to confusion and misinformation. Lastly, our self-confident and leader-wary culture has only worsened and prolonged this crisis, and feeds into the divided media. So here, the way forward is to focus on building trust between our people and our healthcare and government institutions. One important aspect of this is reducing healthcare costs so that Americans won’t feel like their doctors are robbing them blind, and might be more inclined to go to the doctor if they become ill. The other important aspect is to make our politics less divisive and more inclusive of different people and ideologies, though this also does not seem particularly likely.

I want to emphasize that what I’ve written here are mostly my theories and opinions, and are also only considering 2 metrics and my own knowledge of healthcare systems in order to summarize the initial response of an entire country or region. The true picture is certainly more complex, and the answers aren’t easy. However, I hope that this will give some global perspective about the crisis, and will help us all move forward into a new age for humanity once Covid-19 is no longer a headline.

Categories
Uncategorized

AI Tutorial: Static Noise Removal

Hello all! Today, I’m going to share and explain some code which I have been using to clean up the audio in videos using Python. This technique is very simple, especially with the tools that are available, but I’ll take some time to explain what is really happening in the code, and how this type of processing and other more complicated methods can be applied to artificial intelligence agents like Alexa. Hopefully it will give you an appreciation of what goes into audio editing, even though this is just a very simple example. Follow this link to access the code directly.

First, install and import all of the necessary packages to your code, which are listed below. This includes some standard packages such as scipy, matplotlib, and numpy, but also includes two packages which I had never used before called moviepy and LibROSA. I encourage you to read up on these packages, as they provide good functionality for manipulating video files (moviepy) and audio files (LibROSA).

The first piece of code we will write uses a function VideoFileClip from moviepy.editor to read in an mp4 file and separate it into video and audio clips. This function is mostly just making use of FFMPEG (a different library) to perform this separation, but doing this action through MoviePy is much easier from my perspective, with no major downside. After separating the audio from the video file, we write a new audio (.wav) file, which we will load and use in LibROSA later.

The next step is to load the audio file and convert it into a format that can be mathematically manipulated. To do this, use LibROSA’s load command to read in the file that we just wrote using moviepy. Plot the data to make sure you are getting real and sensible values before continuing. It should look something like the blue graph below:
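A sketch of this step follows. To keep it runnable without a video file, it synthesizes a tone in place of the loaded audio; in practice you would use the librosa.load call shown in the comment:

```python
import numpy as np
import matplotlib.pyplot as plt

# In practice, load the file written in the previous step, keeping its
# native sample rate:  data, sr = librosa.load("my_audio.wav", sr=None)
# For a self-contained sketch, synthesize one second of a 440 Hz tone instead.
sr = 22050
t = np.arange(sr) / sr
data = 0.5 * np.sin(2 * np.pi * 440 * t)

# Plot the raw samples as a quick sanity check on the loaded values.
plt.plot(data)
plt.xlabel("sample")
plt.ylabel("amplitude")
plt.show()
```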

Here is where the data processing begins. Create a new function like the one below:

The first step is to compute the Fourier transform of the signal. This is achieved by using the LibROSA stft (Short Time Fourier Transform) command, which splits our audio data into half-overlapping windows, converts each window to the frequency domain, and returns the values to us. These values are complex (meaning they contain both real and imaginary numbers), and so must be converted into their magnitude and angle, which are achieved by using the numpy abs and angle functions.

The frequency (Fourier) domain may be a bit of a complicated topic for those who haven’t studied it, but I will try to summarize it briefly here. Essentially, this domain is based on the theory that if you can record and plot the magnitude of any signal over time or space (for example, plotting temperature over time, or in this case recorded sound over time), then this signal can also be represented in an alternate way: as a mapping of the magnitude of different frequencies. A slow or gradual change corresponds to a low frequency, while a quick or sharp change corresponds to a high frequency. When you convert a signal to the frequency domain, you evaluate how much of the signal corresponds to each single frequency within a wide range (for example, from 0 to 1024 hertz, where hertz (Hz) indicates the number of changes per second). In order to represent a complex signal, you need to combine several, maybe hundreds or thousands, of these individual frequency magnitudes. The signals combine together through simple addition, complementing each other in some instances and canceling in others depending on each frequency, magnitude, and phase (angle). The phase is used to shift each frequency to the left or right, ensuring that the components line up in exactly the right way in time when the signal is reconstructed. As this is completed in the frequency domain, the shift occurs through multiplication with an exponential function (e^(jΘ)), where Θ is the angle computed above.
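To make this concrete, here is a small NumPy-only sketch (with made-up frequencies) showing that a signal built by adding a 50 Hz and a 200 Hz sine wave really does appear as two peaks at those frequencies after a Fourier transform:

```python
import numpy as np

sr = 1024                      # samples per second
t = np.arange(sr) / sr         # one second of time
# A signal built by simple addition of two frequency components
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)

spectrum = np.fft.rfft(signal)  # convert to the frequency domain
mags = np.abs(spectrum)         # magnitude of each frequency bin

# With a 1-second window, bin k corresponds to k Hz, so the two largest
# magnitudes sit exactly at the component frequencies.
top_two = sorted(int(i) for i in np.argsort(mags)[-2:])
print(top_two)  # [50, 200]
```

The 50 Hz component also has twice the amplitude of the 200 Hz one, and its magnitude in the spectrum is correspondingly twice as large.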

The Fourier domain, while it has been used for image processing and in many other mathematical and physical problems, is particularly suited to audio processing, as the main distinction between different sounds is the frequency at which they occur. High pitches have mostly high frequencies, while low pitches have mostly low frequencies. Frequencies that consistently match up after a given number of cycles sound harmonious, while frequencies that are fighting against each other are dissonant. Sounds that don’t have any discernible pitch sound that way because they are the combination of hundreds or thousands of pitches, so no single dominant frequency can be heard. And different combinations are also easily discernible by humans. For example, if you close your eyes and consider the two sounds “p” and “t”, you will likely easily be able to tell them apart. But how could you mathematically define the difference between them? Something like this is only achievable in the frequency domain, and is the basis for all audio processing, including language, music, and many other applications.

So from our code, we have now determined the magnitude (ss) and phase (angle) for each time window of our recorded audio signal. The question now is what we would like to remove. In this case, we are considering a relatively constant static sound that occurs throughout the video. For my videos, if I don’t have a microphone close to the source, there is often static noise that occurs and is distracting from the most important audio in the video. If we assume that this static noise is constant, then we can also assume that this noise is present in the first fraction of a second of the video, while other more useful noises are not. Therefore, if we can characterize this static noise in the frequency domain from analyzing just the first second or less of the video, we can remove these characteristics from the rest of the video.

To do this, we again use the LibROSA stft function, but rather than taking the Fourier transform of the entire video, we only consider the first several thousand datapoints. In this code, I have considered 8192 datapoints, but this number could be changed depending on when your desired sound begins or ends in your video. For me, the audio sampling rate was 44100 Hz, so 8192 datapoints corresponds to about the first 1/5th of a second of the video. This was enough for my video, but if it doesn’t work for yours, you could consider including more or fewer datapoints. After computing the STFT of this segment, the average magnitude of each frequency is computed. This frequency profile should be approximately the frequency profile of the static noise. Therefore, we simply subtract the magnitude of frequencies in the first 0.2 seconds from the magnitude of the frequencies of each window computed previously (sa = ss – mns.reshape((mns.shape[0], 1))). Finally, the modified windows from the original audio are shifted back to their proper phase using sa0 = sa*b, where b is the exponential function defined above. Lastly, the inverse Fourier transform is computed, and the new audio file is written to a new filename.

The final part of the code is optional, as I discovered that rewriting the video in this way significantly reduced the video quality. As a result, I just combined the original video together with this new audio by using Windows’ built-in video editing tool, and this may be the best option for many people who are reading this. However, to utilize the tools that we have learned today and come out with a complete video, you can replicate the following:

You can load in the new audio file using moviepy’s AudioFileClip, redefine the audio in the main clip (which you can load in the first function we wrote today) through simple assignment, and then rewrite the combined video file.

Now, if you compare the original audio file to the cleaned one, you should be able to notice a significantly smaller amount of noise in the latter. You can similarly compare the original video file to the one with cleaned audio, and should find the same thing.

This concludes my tutorial on static noise removal from video files. I hope that you have learned something that you can apply to your own work, and I hope that you enjoyed my code and explanations.

To hear the cleaned audio, you can follow this link to the YouTube video, and can compare it to this song, whose audio has not been cleaned.

Categories
Uncategorized

The Case for a US National Media Organization (USNMO)

Daniel Freer, April 24, 2020

One thing that has become clear in the years since Donald Trump has become president is the overwhelming bias of national media organizations. Fox News, often a scapegoat for all that is wrong with our country by those on the left, has continued to do what it does best: Unapologetically bash the democratic party while praising everything said by the other side. This behavior is partly driven by their viewers, who have tended to be more conservative for years. However, the other part of this is an attempt to drive their viewers toward the Republican party out of fear that the Democrats will create a godless country with no freedom. During the Obama years, this resulted in smear campaigns, such as the “tan suit” controversy, nods to birtherism theories, the end of Christmas, and many more attempts to label Democratic control as a gateway drug to communism. In the end, all of these turned out to be entirely false, or at the very least non-issues.

However, none of this is new.

What has changed since Trump’s election, or more accurately since Trump began to run for president, is the behavior of left-leaning media organizations such as CNN, MSNBC, and so on. They have also always had bias, but the left’s Trump-based fury has removed the veil of impartiality, to the point where TV personalities are now screaming and crying on air about how much they hate Trump, akin to Glenn Beck complaining about Obama. I don’t mind at all that journalists are expressing their opinions on air, or even that they are getting emotional about political events. People, real people, also get emotional about these things, and seeing somebody on air doing the same can provide a sense of comfort.

However, the issue comes when we don’t have another option to fall back upon. Now, if I go to Fox News’ website, the top story is: “Showdown in Lansing: Lawmakers plan meeting to strip some powers from Whitmer, as protestors gather outside her home”.  On CNN, the top story is: “Trump peddles dangerous cures for coronavirus”. The Fox News headline is referring to protests in Michigan about the lockdown for Covid-19, against Democratic governor Gretchen Whitmer. The CNN headline is discussing Trump’s attempt to calm people about the virus by claiming that some less-than-proven remedies for coronavirus will help to save us, including injecting disinfectant. In this case, the CNN title, and article, is actually more biased, despite actually being factually true. But the important thing to notice is that neither network is discussing good things about their own side: they are both talking about their “enemy”.

So, what is the solution to the horrible era of news media that we are currently experiencing? One possible solution: create a government-run national media organization. Now, I know that a lot of people have valid reservations about state-run media, and there would certainly be a lot of complications, but I’m really just hoping to create a conversation about how to fix our current media environment. So here is my list of things to consider about the potential for a state-run national media in the USA, which could be called the US National Media Organization (USNMO):

  1. State-run media is not beholden to ratings: As I alluded to above, one of the main reasons that our main national media outlets have become so polarized is because of one thing: money. The more viewers they get, the more advertisers they will get. The more advertisers they get, the more money they get, and the more they can expand and “improve”. However, the best way to increase viewership is to publish stories that draw the attention of a lot of people, which means flashy headlines about people with high name recognition doing ridiculous things. It means purposely jumping to conclusions that allow you to make false, or at least unproven, claims about things which people may already be thinking about, even if they haven’t said it out loud. It means making people angry at the opposition, or even angry at our side, if it will help the story to spread across the internet. These tactics have been around for a while, but have become more prevalent due to increased competition from media sources that are primarily online. Clickbait and trolls have proven that things don’t need to be true in order to make money, and people tend to search online for things that they already agree with, leading to more and more desire for extreme content that caters to their worst impulses or biggest fears. The truth and accuracy are often bland, but that doesn’t mean they are unimportant. In contrast, the government-funded USNMO would have reduced need to constantly produce head-turning, and therefore money-making, content. This would hopefully allow more time for “unbiased” facts from a single source that all people could trust, at least as much as they trust their government.
  2. Other news sources will still exist: The presence of the USNMO doesn’t take anything away from the media that currently exists, except perhaps some viewers. One fear that people may have about the USNMO is the possibility for censorship and bias toward the current administration. But censorship can be something completely separate from this. The USNMO could, of course, choose what content to show and what content to ignore, but all media organizations are already doing this, based on their own bias. The USNMO would likely have more rules about what could be said on air, but it wouldn’t have control over what other media organizations could say. If you don’t trust the government and don’t like what they are saying, you don’t have to listen, and still have plenty of other sources to get your information from. The USNMO will most likely be biased toward the current administration, but at least it will make the official government position clear on a larger host of topics. People may worry that having the USNMO will allow government officials to avoid talking to other independent media sources, only making their views known when it is shown in a favorable light. However, Trump’s presidency has also made it clear that this is also possible without state-run media. Even Obama refused interviews on Fox News at times, and now Trump has taken it a step further by barring unfavorable reporters from his press conferences and rallies. The same things may happen after creation of the USNMO, but at least the government wouldn’t have as much incentive to play favorites with private companies.
  3. We could hold the government more accountable: So what kind of content would be shown by the USNMO? In my mind, the main purpose of the USNMO would be to highlight the “good” things that the national government is doing. The President would appoint somebody to be in charge of the entire organization, which would need to be approved by Congress. This person would make the final call about which stories are told and which are not, but there would be several levels of abstraction before it reached that point. Government officials could submit story ideas, pitch bills that they have written and hope to pass in order to get public support or support from their colleagues. There could even be, for example, an hour controlled by Democratic party members, an hour controlled by Republican party members, and another hour for other independent parties to increase their exposure. Or there could be an hour devoted to other departments such as the Department of Education, the Department of Defense, and so on. Of course, these things would be difficult to regulate, for example if one party has control of both Congress and the Presidency. But if this is the case, then most of what the government is doing would probably align with that party’s ideology, so the programming for the USNMO would be heavily biased toward them regardless. Many of the specifics around this would need to be negotiated upon the creation of the USNMO, and could additionally be revisited by the three branches of government as problems arose. All of this is to say that the USNMO could essentially serve as a progress report, delivered from the government to us, the people. And based on what we see there, we could form our own opinion about how the government is doing, and ensure that our tax money is being spent in an efficient way, rather than just going to congressmen who take naps in their chambers.

After reading all of this, I hope that you will understand my point. Nobody that I’ve spoken to in the US has denied that our country is greatly divided along ideological lines. It is easy for Trump to complain about CNN or MSNBC, just as it was easy for Obama to complain about Fox, but they wouldn’t be able to have any complaints about a news organization that they have partial control over, which is intended to highlight the good things happening in their administration. The USNMO would be a great opportunity for our government to deliver a united message to the people, clearly articulating changes in policy, highlighting work they have done, and bringing about a new age of trustworthy American news.