To Liberal America, This is Why We Lost

Despite the odds given by pollsters and their polling data, Donald Trump is officially our president for the next four years, whether you, we, they, or I like it or not. Across an overwhelming majority of surveys and forecasts, Hillary was set to win this week’s election by a landslide, which raises the question: what happened? A statement made earlier in the election cycle resonates with me: “It’s Hillary’s race to lose.” It was our race to lose, and we lost it. We can cry, joke, run away, and blame it on the rest of the country, but there will need to be a moment of self-reflection. If anything, this is a wake-up call to the establishment and to liberal culture.

The Racist America and the Much Bigger Picture

For those naive individuals who have lived a sociopolitically isolated life and have only been exposed to liberal America, this was a wake-up call to the institutional racism that is very much real in American culture and politics. But there is a flip side to this story: the Left did not lose the election because a majority of Americans are racist or sexist. To make such a statement is not just polarizing to an already polarized people; it does a great disservice to understanding the goals, intents, and worldviews of the 59 million individuals who voted for him. Yes, Trump was helped by certain demographics with racial and sexist intents, but that is not the reason the Left lost.

The Left lost because it failed to capture the moderates of America who voted for Obama in both of his presidential runs. The Left lost because it failed to re-energize a voter base that did not come back to vote because they felt insignificant and alienated by the political system. African Americans didn’t show up. 29% of Latinos voted for Trump, higher than Romney’s share back in 2012, even though Romney did not make such blatant racial remarks. 42% of women voted for Trump. Clearly, the polarizing story of people voting along racial or gender lines leaves out something much, much larger. The inability of Liberal culture to understand why people could vote for Trump despite his remarks, why both Latinos and women could still vote for him, and that there are bigger issues than race or gender is in itself a very strong indicator of how isolated and naive it may be.

If race and gender were the ‘make or break’ issues, as many of us in Liberal America had expected, then this election would have been a landslide in the opposite direction; Latinos, women, and African Americans would have voted in a landslide against Trump, as they did when Obama ran against Romney or McCain. That this is not the case is not necessarily a story of how people don’t care about race or gender, but a story of how there are bigger issues and lines that voters voted along. The story has generally always been the same: people voted for economic security in an economic system that is rapidly becoming more inequitable, and people voted for national security in a global political climate that has been growing more tumultuous. These are all issues that Trump, albeit in a xenophobic and isolationist manner, strongly addressed as real concerns. In fact his platform has been exactly that: repealing NAFTA, barring Syrian refugees, building a wall, and so on.

Meanwhile I cannot stop feeling that the main agenda of the Left has not been to address these issues but to point fingers at our opponent’s controversial remarks. In the words of Bernie Sanders, we didn’t talk about the issues that really matter; instead we talked about the person. It is ironic that so many of us who admired this philosophy turned around after the primaries and did just that. We became reality TV, a medium in which Trump excels, and that was our demise.

Generalization, the Great Hypocrisy, and Alienation

It’s odd that a political culture which so often fights the powers of generalization, in the form of racism, sexism, and prejudice, can then turn around and generalize an entire voter base. Liberal culture stresses the dehumanization that results from generalization: its ability to take away an individual’s ability to represent themselves. To label all Muslims as terrorists is to strip away their individuality.

Yet the political conversation for the entire election cycle has been to label certain voter groups – those who voted for Trump, those who voted third party, those who cast protest votes, and those who didn’t vote at all – as sexist, racist, bigoted, sociopathic, uneducated, homophobic, and privileged; to assume the political motivations, inspirations, and goals of every individual. This is not to say that Trump is none of these things, but the relationship between a candidate and the voter is not necessarily transitive. Just because I voted for Hillary does not mean that I stand behind everything she has said or done; quite the opposite. To strip these voters of their individuality and their unique set of beliefs, views, and motives, and to impose a certain perception or representation on them, has in a way been a form of oppression itself, one that is not inclusive but alienating. We did not try to reach out to them or to understand the individual. When you antagonize an entire group of people, it is not surprising that they come out and vote against you.

We were angry when Trump made sweeping allegations about Muslims and Mexicans. We were angry when conservative media, like Watters’ World, depicted certain communities in ways that promoted and perpetuated the prejudices and stereotypes people already had. But it’s important to understand that there wasn’t necessarily a moral high ground in this election cycle. Liberal culture and media did the exact same thing to swaths of the American population – people who are an integral, working part of our society whether we like them or not. They were used as jokes, made fun of, and belittled. The uneducated may be ignorant and naive about certain ethnic, racial, and gender groups, but in the same way Liberal culture has been ignorant of the uneducated and the non-Liberal.

The Echo Chamber and Safe Space

The fact that a Trump win was inconceivable to liberal media and to the overwhelming majority of us should be an alarm about the lack of feedback and internal deconstruction in Liberal culture. When there is little disagreement and everyone is coercively pushed to think a certain way is when the housing market explodes, when economic bubbles burst, when complacency leads to self-destruction. In a sense, P.C. culture has gotten so big that the balance between ‘safe space’ and reality may have been broken. An environment that gives the oppressed a place to escape to, if it grows large enough, can easily interfere with our ability to keep in touch with the rest of the world. A comfortable environment leads to complacency, much like a comfortable sofa disconnects us from those dying in the rest of the world. Whether we like it or not, there needs to be a re-evaluation of P.C. culture as it transforms from the force of the anti-establishment into an institution of the establishment.

Wherever you stand on the political spectrum, we must acknowledge that both fears are legitimate: the fear for economic security amidst a rapidly changing global economy that is increasingly displacing Americans from the workplace, and the fear for national security amidst a violent global political climate that has culminated in forces such as the Paris shootings or ISIS. While the proposed solutions, banning all Syrian refugees or building a “great wall”, may not be healthy for our country, we cannot simply ostracize the very discussion of them. Is it reasonable to believe that the immigration of low-skill workers benefits mainly employers and businesses and can be detrimental to Americans whose skills have increasingly become obsolete in the changing global economy? Probably. Is it reasonable to believe that events like the Paris shootings can happen in the States? Probably. Is it reasonable to believe that individuals with these concerns are racist? Probably not. These are legitimate opinions, void of the racial hatred often attributed to them; they are honest concerns that these people face day in and day out.

To disregard people’s fears and to attribute them to some defect in their humanity or lack of education says a lot about the lack of maturity in political discourse on our part. Furthermore, presuming to know the intent or reasoning behind someone’s vote is an absurd proposition in itself. Too often, honest discourse has either been shut down or, more often, prevented from taking place: because people fear being labeled by a very label-centric culture, because the argument is too often diverted away from the actual issue and toward the person, or because the discussion is simply turned into a joke. There is no safe space for people to express their potentially racist ideas in a constructive manner without having the discussion turned against them or memed, and there really needs to be one. This frustration is what has culminated in the sentiment “we’re tired of P.C. culture” expressed so often by voters this year; the uneducated feel oppressed by the sociopolitical atmosphere that P.C. culture has institutionalized.

This is not to say that people can’t be racist, but people can make racist remarks or hold racist ideas out of honest intents and fears that are not rooted in racial hatred. Nor is it to say that people can’t harbor genuinely hateful racial intents, but lacking a way to differentiate between the two, and by using the same label for both, we have lumped moderates who voted for Obama but are deeply fearful of the current state of the world together with actual members of the KKK. We have handed the uneducated to the racists. It is then no surprise that if people didn’t feel comfortable being open about their support for Trump, we would have a harder time knowing about it, the pollsters would have a harder time getting accurate data, and we would have a harder time having the honest discussion about the issues that mattered to prevent this outcome. Furthermore, when you don’t take an entire political movement seriously, you end up the butt of the actual joke. It is then no surprise that we lost in such disbelief, because of complacency.

Liberals as the New Establishment

Trump, whether you believe he is one or not, did a better job of portraying himself as the outsider – the anti-establishment force that would knock down Washington, unbeholden to Wall Street and corporate money. And this was probably not very hard to do, since Clinton is in every way an insider – part of the Clinton dynasty, a current member of Washington and the Obama administration, with a long history in politics and a long history in the DNC. But it wasn’t just Clinton who felt like an insider.

P.C. culture and liberal culture have spent most of their lifetime fighting against the establishment. Being anti-establishment comes with perks: it allows you to align yourself with the people and the populist vote, and it helps you juxtapose yourself against a system that isn’t working. And as many people noted, this was the election cycle in which being an insider was critically detrimental. In a world where the rich are becoming richer and the rest of us are becoming poorer, it isn’t much of a surprise that hatred toward the current institutions and establishment is at an all-time high. People want change in a system that seems to benefit only those on the inside. What happened to the Liberal agenda and platform of the Occupy era? What happened to reining in Wall Street? To bringing change to the economic landscape? To changing the lives of the middle class? To going after the 1%? These were all agendas that the Clinton campaign failed to carry over from the Sanders campaign. These were the things I felt the Left had lost on the campaign trail – the agenda for the people, the issues that really mattered.

Instead we became establishment politics, or failed to portray ourselves otherwise. By becoming the establishment, we came, sadly, to represent what we had been fighting against for years: Wall Street, big banks, big corporations, the political establishment. What have we been fighting for? The Left lost what made the Left the Left, and I cannot help but feel that there was no Democratic vote on the ballot this year, but rather a vote for conservative politics or jingoistic politics. For people suffering under the status quo, many simply wanted change, and unsurprisingly many voted simply against “conserving” the broken system.

Instead of the liberator, we have in a sense become the oppressor: the economic force against the people, the sociopolitical force against the uneducated. For the first time, we find ourselves in a position where it seems we have aligned ourselves against our voter base and with our enemies. Whether it’s true or not, and whether we like it or not, there needs to be some fundamental change in both the Democratic party and Liberal culture if we are to capitalize on what America is really capable of. Unfortunately, up to three Supreme Court justices are likely to be appointed within the next four years, and that will probably be irreversible until much later in our lifetimes. But this gives me hope that, much as so many believe Trump will bring some kind of change to Washington, this series of events will bring some positive change to the Democratic party. And maybe from positive change, some healing of the partisanship in this country. This is not to say that I am not disappointed by America’s choice. This is not to say that I don’t feel people may regret their decision. But since this is a democratic process and 59 million people have made their final decision, we have to move on to the next step.


Mentions:

Good. For months and months this subreddit posted article after article calling anyone not supporting Hillary Clinton sexist, racist, homophobic, uneducated, white privilege trash regardless if they were voting for Trump or a third party. It was just as bad when people were supporting Sanders in the primary.

You are the reason why she lost. You insulted people instead of reaching out to them. You downvoted them and mocked them instead of trying to reach out to and connect with them. You belittled and mocked them for having differences in opinion.

This isn’t on just Clinton.

-Tiamdi

For a group that boasts tolerance and acceptance, I can’t help but feeling a lot of hypocrisy.

-Anonymous

 


Social Safety Nets, Evolution, and the Local Maxima

Sure, social safety nets are good Samaritanism, but is there more to them than just morals? From a pragmatic perspective, is there a productive aspect to having a social safety net? Analysis from an evolutionary standpoint can serve as a good reference frame for answering this question.

The evolutionary algorithm can be thought of as a method of problem solving and is often used in the fields of machine learning and AI as an optimization algorithm. The algorithm is much like what you have learned from a basic biology textbook and is composed of several discrete steps that repeat in a loop. Here is the general idea of these steps, via Wikipedia:

  1. Generate the initial population of individuals with varying characteristics (first generation)
  2. Evaluate the fitness of each individual
  3. Repeat on the population:
    1. Select the best-fit individuals for reproduction
    2. Breed new individuals via crossover and mutation operations
    3. Evaluate the fitness of the new individuals
    4. Replace the least-fit individuals with the new individuals
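
To make the loop concrete, here is a minimal sketch of these steps in Python. Everything in it is invented for illustration: the toy fitness function stands in for the p(a) discussed below, and the population size, mutation scale, and crossover rule are arbitrary choices, not part of any canonical implementation.

```python
import math
import random

def fitness(a):
    # Toy stand-in for p(a): two humps, the one near a = 7 being the higher (global) one.
    return 5 * math.exp(-(a - 2) ** 2) + 8 * math.exp(-((a - 7) ** 2) / 2)

def evolve(pop_size=50, generations=100, mutation_scale=0.3):
    # 1. Generate the initial population (ages of release between 0 and 10).
    population = [random.uniform(0, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # 2. Evaluate the fitness of each individual and rank them.
        ranked = sorted(population, key=fitness, reverse=True)
        # 3.1 Select the best-fit individuals for reproduction.
        parents = ranked[: pop_size // 2]
        # 3.2 Breed new individuals via crossover and mutation.
        children = []
        while len(children) < pop_size - len(parents):
            mom, dad = random.sample(parents, 2)
            child = (mom + dad) / 2                   # crossover: blend the two parents
            child += random.gauss(0, mutation_scale)  # mutation: a small random nudge
            children.append(child)
        # 3.3 / 3.4 The least fit are replaced by the new generation.
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(f"best age of release ~ {best:.2f}, successful offspring ~ {fitness(best):.2f}")
```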

But how is simulating the evolutionary process a form of optimization? Let’s give some context by looking at one aspect of the parenting process: at what age should a mother stop taking care of her cub? Too early, and the cub does not have the means to protect and take care of itself. Too late, and it may drain resources from the mother while giving the cub no incentive to learn independence. There is an optimal solution from an evolutionary standpoint, the age that allows for the most successful offspring, and this is the solution the evolutionary algorithm tries to arrive at. This can be represented graphically by the figure below,

[figure 1: number of successful offspring p(a) vs. age of release a]

where the y-axis represents the number of successful offspring and the x-axis represents the age of release by the mother. The relationship between the two variables is defined by the function p(a), the number of successful offspring as a function of the age of release. The optimal age, a*, allows for the optimal number of offspring, p*. The optimal solution can therefore be represented by the point p(a*). However, it is hard to suppose that the relationship between the two variables has only one maximum and is completely continuous, for reasons that quickly come to mind: 1) not all points of development are equally important to the cub’s survival; the completion of the development of a muscle, limb, or any other functional part of the body can make a drastic difference compared to one still in development (discontinuity), and 2) developments do not all happen together but can happen in succession; the completion of one functional part may prompt the next stage of development, which could leave the infant in a more vulnerable state, or may open a window in which the cub can learn to get by. The modified model is represented below,

[figure 2: offspring vs. age of release, with discontinuities and multiple humps]

where p(a*) represents the solution to the optimization of the function p(a).

When the genes, the bit representation of the age of release, are modified from generation to generation via crossover or mutation, the individuals in the species move along the function p(a). Although the movement is discrete, it is limited enough that we can think of it as continuous. Complex and drastic changes to the gene pool happen gradually in most cases because of how the crossover process works. Indeed, it is this gradual process that gives evolution its direction and efficiency. If gene pools were completely random from generation to generation, we would only reach the optimal solution by chance. This is conceptually equivalent to a bogosort, where there is no guaranteed time until a solution is reached and the process can run forever. It would defeat the point of having genes in the first place. The gene pool is analogous to memory: it stores data about what kinds of genes were successful in the past and helps narrow the search for success. Thus the gradual nature of evolution is a limiter that gives the algorithm direction toward a maximum.

[figure 3: gradual change of the fittest individuals across generations along p(a)]

The graph above illustrates the gradual change of the fittest individual from generation 1 (s) to generation 2 (e), i.e. p(a_s) -> p(a_e); note that this is different from the average fitness of each generation or the fitness of every individual in the generation.

We’ve still forgotten one crucial variable: the number of successful offspring also depends on the environment and the abundance of available resources. The figure below shows a simplified example.

[figure 4: p(a) with the zero-line (survival cutoff) raised due to scarcity]

Here the zero-line, the cutoff above which an individual must lie in order to survive and produce offspring, has risen considerably (from 0 to 0*), possibly due to a drought or some other cause of scarcity. In simulation terms, we can think of it as raising the fitness criterion in order to save processing power. In reality, an environmental change would also have an impact on the function p(a) itself; new environments may make new characteristics more advantageous. For simplicity, and for the sake of the argument we will be making, we will ignore these effects, as they would be arbitrary anyway. The orange/red tinged area represents the total population after the rise in the zero-line, and the orange/red and purple tinged areas together represent the total population with the original zero-line. Both can respectively be represented by integrals, if you want to think in mathematical terms, as they are simply areas under the curve. As we can clearly see and expect, the lower the zero-line, the greater the population (the total area beneath the curve: the orange area vs. the orange plus purple area). What is also important to note is that the lower the zero-line, the greater the genetic diversity (the domain of the function, i.e. the length of the area under the curve); a larger variety of ages at which the mother releases her cubs exists at the original zero-line than at the raised zero-line. The lowest relevant zero-line is the one at which all variations of the gene pool can have offspring; in this example it is the original zero-line, where all ages on the function exist in the population. However, as we can imagine, supporting such a population makes the algorithm more resource-intensive and possibly less efficient, whether that means more CPU to simulate more individuals or more environmental resources to support that many more individuals.

This leads us to the disadvantages of the evolutionary algorithm. Limiting the search for success also limits success itself. Evolution most of the time does not give us the best solution but rather a good-enough solution. Biologically, we observe these results as imperfections and flaws. In conceptual terms, this is the difference between a global maximum and a local maximum; the function p(a) in the figures has two distinct hills, but one is clearly higher than the other. As implied by Wikipedia’s step-by-step description of the genetic algorithm and by what we mentioned earlier, it is the process of the least fit being replaced by the most fit in each generation that makes the algorithm efficient. If we were to keep all individuals of all fitnesses, it would be an exhaustive search for the optimal solution, checking every variation and option rather than making an educated guess. Much like a breadth-first search, it would also make each generation exponentially more resource-intensive, as the population would grow without a limiter. In philosophical terms, what even is fitness at the point at which every individual survives?

[figure 5: a population trapped under the left local maximum, unable to cross the trench between the two maxima]

The figure above represents a limitation of the evolutionary algorithm. The function p(a) has two local maxima (the one on the right being the global maximum). The solution is again represented by p(a*). The purple area represents the current population. As you can see, the zero-line does not allow all genetic variations on the function to exist; in particular, it does not allow the successful reproduction of the genes that characterize the trench between the two maxima. This poses a problem for the algorithm: since genes change only gradually, there is no way for the current gene pool to expand across the dip in the function and reach the true global maximum (a*, p*). This is illustrated by the arrow from p(a1) to p(a2), which is not possible under the current circumstances. Now why would the population be limited to the left side to begin with? It could be a limitation of where the gene pool first began, or it could be the result of some event in the past that reduced genetic diversity.

If the zero-line rises due to economic or resource deficiency, it can also be lowered via resource or economic abundance. Most importantly, a social safety net can also lower the cutoff, since it allows those with inadequate “fitness” to survive and reproduce. Thus the problem above can be solved technically, by allocating more resources, or socially, via a safety net.

[figure 6: lowering the zero-line makes the path from p(a1) to p(a2) possible]

The figure above shows how lowering the zero-line now enables the population to reach the global maximum (in this case the optimal age of release by the mother). Notice how the path from p(a1) to p(a2) is now possible. If we are running an evolutionary algorithm in a simulation and we believe the process is stuck on a particular hump of the function, as above, we can simply add more computational power and allow more diversity in the population. Likewise, a population can achieve similar results by expanding its gene pool through taking care of the less genetically fit, in other words by spending more resources.
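
A rough way to see the same point in code, reusing the toy p(a) from the earlier sketch: since gradual change can only pass through ages whose fitness clears the zero-line, the population can only reach the contiguous above-cutoff region around where it starts. The cutoff values, the starting hump, and the grid resolution below are arbitrary illustrations, a simplified reachability view of the argument rather than a full population simulation.

```python
import math

def fitness(a):
    # Same toy p(a): local maximum near a = 2 (p ~ 5), global maximum near a = 7 (p ~ 8).
    return 5 * math.exp(-(a - 2) ** 2) + 8 * math.exp(-((a - 7) ** 2) / 2)

def best_reachable(cutoff, start=2.0, step=0.05, lo=0.0, hi=10.0):
    # Mark the ages whose fitness clears the zero-line, then walk outward from the
    # starting hump through contiguous surviving ages only.
    ages = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    alive = [fitness(a) >= cutoff for a in ages]
    i = min(range(len(ages)), key=lambda j: abs(ages[j] - start))
    left, right = i, i
    while left > 0 and alive[left - 1]:
        left -= 1
    while right < len(ages) - 1 and alive[right + 1]:
        right += 1
    best = max(ages[left:right + 1], key=fitness)
    return best, fitness(best)

for cutoff in (1.0, 0.0):  # a high zero-line vs. one lowered by a "safety net"
    a, p = best_reachable(cutoff)
    print(f"zero-line at {cutoff}: best reachable age ~ {a:.2f}, offspring ~ {p:.2f}")
```

With the high zero-line the reachable region stops at the trench and the population tops out at the left hump; once the cutoff is lowered, the valley becomes survivable and the global maximum comes within reach.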

Why does this all matter?

[figure 7: the zero-line risen above the left local maximum; only the population near the global maximum survives]

The figure above depicts a negative change in environment. Due to a lack of resources, the zero-line has risen considerably; most importantly, it is now above the local maximum on the left, and if the population were limited to the purple area it would surely go extinct. Out of the whole function, only a population that exists in the red shaded area would be able to survive this event. Such is the importance of genetic diversity. In essence, a social safety net is a population’s investment in a better solution, a better solution that will be its insurance in times of hardship. From a Darwinian perspective, caring for the less fit is not merely a by-product of the development of social behaviors (strength in numbers); it has its own advantages, even if it is costly in terms of the resources available to the population.

Some last remarks: the cost of a social safety net can affect the fitness or success of those that are fit and affect the function p(a) (since they may be spending some of their resources), possibly reducing the heights of the local maxima. However, the point of the whole process is simply to increase genetic diversity; the social safety net can easily be eliminated if needed (especially in times of scarcity), so those effects can be nullified or changed in the short run. Genetic diversity, however, takes much longer to develop. The idea of investing in prosperity and saving in scarcity is not a practice limited to human beings. As a long-term insurance policy, the system still makes sense.

An Alteration of Asimov’s Parable: The Paradox of the Efficient Market Theorem & Quantum Economics

“God does play dice”

Let me begin with a slight alteration to Asimov’s parable from Robots in Time –

A scientist travels two centuries into the future and is shown a civilization where mankind has succumbed to the decadence of robots. When he returns and reports this, one of the people who hears his account is a prototype human-looking robot. The robot then buries a note for the robots of the future to discover, so that they can convince the time traveller that humanity will triumph.

The above illustrates a paradox: the robot’s actions nullify the original prediction; however, if the original prediction is nullified, then the original prediction could never have taken place, and the robot could never have acted on it; this in turn nullifies the robot’s actions, which allows the original prediction to take place, and so on.

This is not only a problem posed by time travel; it also illustrates a problem with any theory in which a deterministic outcome is itself a variable in its own outcome. This is very much true of classical and neoclassical economics and of the efficient market theories – what I like to call “deterministic economics” – where there is one expected outcome arising from the amalgamation of rational decisions. Markets are always optimal in the long run, and the price of a commodity reflects its true economic value; Hayek referred to markets as “telecommunication systems” that disperse information via prices. However, if decisions are truly made rationally, then they have to take into account the prospects of the future; the expectations of the markets are thus a variable in rational economic decision making, especially for investors and stockbrokers. Now, as exemplified in our version of Asimov’s parable, we have a paradox. If the models make a prediction, the rational decision makers will change their behavior to take advantage of the prediction, nullifying the original prediction. And if we predict that the original prediction will be nullified, then that nullifies the original motive for the rational decision makers to change their behavior, and so on.
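
A toy feedback loop makes the paradox concrete. Everything below is invented for illustration: suppose a model forecasts that a stock will jump by $10 when some news lands on day 10, and suppose traders who believe the forecast buy ahead of it, each day closing some fraction of the gap between the current price and the forecasted post-news price.

```python
# All numbers are made up for illustration.  A model forecasts a +10 jump on day 10;
# traders acting on the forecast buy it away in advance, so the forecasted jump never
# actually shows up on day 10 itself: the prediction nullifies itself by being believed.

price = 100.0
predicted_jump = 10.0
target = price + predicted_jump   # the post-news price the model forecasts
absorption = 0.4                  # fraction of the remaining gap bought away each day

for day in range(1, 11):
    price += absorption * (target - price)   # front-running pushes today's price up
    print(f"day {day:2d}: price {price:.2f}")

print(f"jump left to happen on day 10: {target - price:.2f} "
      f"(the model forecast {predicted_jump:.2f})")
```

By the time the news arrives, almost nothing of the forecasted jump is left to happen; the act of trading on the prediction is what falsified it.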

Economics thus cannot be deterministic, at least not while remaining consistent with itself. Deterministic economics isn’t that far-fetched, however. If we believe in rational decision making, then we expect certain outcomes at both the micro and macro level: whatever benefits the individual the most. This is exemplified by indifference curves at the micro level and by market equilibria at the macro level. And there is plenty of evidence to believe that both society and nature work in a rather rational manner.

However, countering this train of thought, and attempting to explain why, is the idea that the rational decisions of individuals may inevitably manifest as unpredictable and irrational outcomes. Decisions are interdependent; at least in well-developed markets, it is no longer only the utility of a product that matters (its usefulness to the buyer) but also its perceived utility for others (its usefulness as perceived by other buyers). This is especially true in financial markets, but it also holds for commodities, which are used as financial instruments. A stockbroker will not simply purchase stock that he himself thinks is valuable; the stockbroker will buy equities that he thinks others think are valuable. If all stockbrokers are thinking along these lines, then they will buy the equities that they think others will buy – equities that they think others think are valuable! The competition between rational players undercuts rational decisions until it manifests in seemingly unintuitive results. The markets thus will not act uniformly and predictably, because there are constant outliers attempting to take advantage of the outcomes of the market. If the markets are indeed efficient, as stated by the efficient market theory, then the individuals who try to capitalize on the efficiencies of the market inevitably make it inefficient.

It is important to note, however, that this may not apply, or may not apply to the same degree, to primitive markets, where consumers are concerned only with consuming rather than investing, and where the prospects of the market are not (or not as much of) a variable in decision making. Perhaps those markets are efficient and work as Hayek depicted them; the markets of today, however, are surely much more developed.

How do we reconcile this fact with our economic models, where a number of variables are simply inserted to get one correct answer? Perhaps the markets are not as efficient as we would like them to be. Perhaps Minsky’s theories on inherent market instability are more reflective of the truth. Perhaps our expectation of one answer is misplaced; perhaps there is no determined future to predict. As the old quip from quantum physics goes, “God does play dice”. Maybe, like quantum mechanics, we should be attempting to find the probabilities of outcomes instead – call it “quantum economics”. Although this too may be affected by those outliers who work against such predictions, it would be much closer to reflecting reality – something our models need more of.

“God does play dice.”

Kai Matsuda

Critique of BitCoin and the Search of a Perfect Monetary System

Are algorithmic digital currencies the inevitable future of monetary systems? Despite their many advantages over traditional forms of money, digital currencies like BitCoin and DogeCoin still have a series of limitations to overcome.

The strength of digital currencies is largely attributed to their universal potential: 1) bitcoins can be used in any country, and 2) no government can easily manipulate them. These two points come hand in hand, and the universality of bitcoins relies on their lack of regulation. However, their strength is also their weakness. Monetary policy, monetary regulation, is economic policy. Without the ability to control money supply and flow, governments do not have an effective means of regulating their economies.

Under an ideal monetary policy, there would be no need for government regulation; there would be no need for governments to adjust the “inflow” of money into the economy. To put it simply: 1) countries will only use bitcoins if 2) they do not require regulation, and they will only not require regulation or adjustments to the money supply if 3) their inflow algorithms are ideal. I argue, however, that these algorithms are not ideal, in which case adjustments and regulations to the money supply will have to be made in the future, undermining the whole point of BitCoin.

Monetary policy by central banks is coordinated around targeted inflation rates: money is either taken out of or put into the economy until the target inflation rate is reached. The strength of a currency relies on its reliability in keeping a fixed “worth”. Inflation is the rising of price levels, or in other words the degradation of your dollar bill. If inflation rates are too high or too low, your dollar fails to hold a fixed “worth”; in other words, your currency fails to be useful. The gold standard is a highly ineffective monetary policy because of the inability of gold to hold a fixed value.

The money supply of BitCoin is determined not by a targeted inflation rate but by an algorithm, which is certainly a revolutionary way of thinking about money supply. However, to my understanding, the algorithm is in no way tied to an inflation rate or to anything economic. The principle of BitCoin’s algorithm relies on diminishing marginal returns. Bitcoins are generated by computers that “mine” for them, but the rate at which computers can produce coins diminishes; in other words, it becomes harder and harder to mine bitcoins, and the number of new coins entering the market shrinks over time. An estimation of this idea is represented in fig 1.

[Fig 1: the inflow of new bitcoins shrinking over time]
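
For concreteness, the actual protocol’s schedule is easy to write out: the reward for mining a block started at 50 BTC and halves every 210,000 blocks (roughly every four years), so the inflow of new coins tapers geometrically toward a hard cap of about 21 million. A quick sketch:

```python
# The actual issuance schedule: the block reward starts at 50 BTC and halves every
# 210,000 blocks (roughly every four years), so new supply tapers toward ~21 million.

reward = 50.0             # BTC created per block in the first era
blocks_per_era = 210_000  # blocks between halvings

total = 0.0
for era in range(10):     # first ten halving eras, roughly forty years
    minted = reward * blocks_per_era
    total += minted
    print(f"era {era}: {reward:>8.4f} BTC/block, {minted:>12,.0f} BTC minted, "
          f"{total:>13,.0f} BTC cumulative")
    reward /= 2
```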

As bitcoins become harder and harder to mine, more computation is required. This is why the algorithm has two major deficiencies: 1) it concentrates the production of wealth among those with great computational capacity (people who possess the capital to afford it), which will in itself contribute to a growing wealth gap, and 2) it is very unreliable in its ability to hold a fixed worth or inflation rate.

To expand on the second part: the Money Supply and Wealth Disparity Theorem states that in order for a currency to hold a fixed “worth”, the money supply must increase at the same rate as wealth does. For example, suppose your economy consists of $10 and 20 identical bananas. Each banana will be worth $0.50, and each dollar will be worth 2 bananas. If 20 additional identical bananas are added to the market, each banana will now be worth $0.25 and your dollar will be worth 4 bananas (fig below).

[Figure: $10 and 40 bananas; each banana is now worth $0.25]

Your dollar just gained value, and you have just experienced deflation. The only way to keep the worth of a dollar constant is to add dollars to the economy in the same ratio, in other words to add $10 to the economy along with the 20 bananas (fig below).

[Figure: $20 and 40 bananas; each banana is worth $0.50 again]

The models above are quite a simplification of reality. In the real world, the dollar does not represent the number of bananas in the market; it represents the amount of wealth in the market. The simplification, however, helps illustrate that a disparity between how much currency enters the market and how much wealth enters the market leads to unstable price levels. We already know that the flow of new bitcoins entering the market is becoming slower and slower (fig 1). A constant price level can only be held if the amount of wealth (bananas) entering the market is slowing at the same rate. We represent this idea with the fig below.

[Figure: bitcoin inflow and wealth inflow slowing at the same rate; the price level p* stays constant]

where p* (the difference between the rate of bitcoins and the rate of bananas entering the market) represents the price level. The data, however, suggest otherwise: the rate of wealth production has actually been increasing, not slowing, thanks to gains in technological efficiency and the greater participation of developing countries. In reality our model looks like this:

[Figure: bitcoin inflow slowing while wealth inflow keeps growing; the price level diverges]

As we can tell, price levels change at an increasing rate in this model. It is important to note that a bitcoin can be divided very finely; you can transact 0.00000001 bitcoins, for example. However, precision in accounting does not matter if the prices of goods are changing rapidly; you still don’t know how much 0.00000001 bitcoins is worth, or will be worth tomorrow.
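
The banana logic above can be written out directly. In this toy model (all growth numbers are arbitrary), the price level is simply the ratio of money in circulation to goods in circulation; if new coins enter the market ever more slowly while new goods keep arriving, the ratio, and with it every price tag, keeps drifting.

```python
# Toy model: price level = money in circulation / goods in circulation.
# Start from the $10-and-20-bananas economy, then let coin issuance taper off
# (like fig 1) while goods keep arriving; the price per banana keeps falling.

money, goods = 10.0, 20.0
money_inflow, goods_inflow = 2.0, 4.0

print(f"start: price per banana = ${money / goods:.3f}")
for year in range(1, 6):
    money_inflow *= 0.5     # new coins get harder and harder to mine
    goods_inflow *= 1.1     # production of goods keeps growing
    money += money_inflow
    goods += goods_inflow
    print(f"year {year}: price per banana = ${money / goods:.3f}")
```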

In order to keep price levels constant, we must keep monetary inflow and wealth inflow the same. But how do we determine wealth? The strength of digital currencies comes from their decentralized nature; allowing centralized entities or governments to determine what counts as wealth would be both dangerous and counterproductive to the nature of digital currency.

Our only hope for a perfect money-supply algorithm is to define wealth in an objective and decentralized manner. To do so we must first explore where wealth, or value, is produced. Wealth is first produced through “harvestation”, the extraction of natural resources from the earth; this is what we will call the “natural value” of a product. Wealth is also created when those natural resources are, through labor, constructed into more complex products; this is what we will call the “value of manufacture”. Wealth is then created through transportation, retail, and marketing. All of these make up the final value of the products we buy; in other words, every time a product is passed on, value is added to it (fig below).

[Figure: value added to a product at each stage, from extraction to manufacture to transport, retail, and marketing]

But how do we determine the amount of wealth produced at each step? Again, there are dangers in allowing a centralized force to determine these numbers. To truly decentralize the calculation, we can simply use the prices at which products are bought and sold: the value added at a step is the difference between the price at which its inputs were bought and the price at which its output was sold. If 100 pounds of iron ore are extracted from the earth and bought by a smelting company for $7, then $7 worth of wealth has been added to the market. If the smelting company then smelts and refines the ore into 50 pounds of iron and sells it for $10, then $3 of wealth has been added to the economy ($10 minus $7, the total value minus the natural value). If it were possible to aggregate every transaction in the economy, we would be one step closer to an algorithm for money supply that is independent of centralized entities.
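
The iron-ore chain can be tallied mechanically: the value added at each step is its sale price minus the price of its inputs, and summing those differences gives the total new wealth entering the economy. The final “chair maker” step below is a made-up extension of the example, added only to show a longer chain.

```python
# Each step adds (sale price - purchase price) of new value, as in the iron-ore example.
# (step description, price its inputs were bought for, price its output was sold for)
chain = [
    ("mine: 100 lb of iron ore",  0.0,  7.0),   # extracted from the earth, sold for $7
    ("smelter: 50 lb of iron",    7.0, 10.0),   # buys the ore for $7, sells iron for $10
    ("chair maker: one chair",   10.0, 25.0),   # hypothetical next step in the chain
]

total_wealth_added = 0.0
for step, bought_for, sold_for in chain:
    added = sold_for - bought_for
    total_wealth_added += added
    print(f"{step:<26} value added: ${added:.2f}")

print(f"total new wealth entering the economy: ${total_wealth_added:.2f}")
```

Note that the total ($25) is just the final retail price: the value-added accounting telescopes, which is what would let an aggregate of ordinary transactions stand in for a centralized measure of wealth.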

There is, however, one last problem with our method: wealth has a tendency to degrade. If we have 20 bananas and $10 in an economy (we already know what this model looks like: $1 = 2 bananas), what happens if, instead of adding 20 bananas, 10 bananas are eaten or simply go bad? $1 is now worth 1 banana. In other words, our algorithm will also have to compensate for the amount of wealth that degrades over time. Just as we do not stay the same over time, no product stays the same over time. A new building worth $1 million today can be worth considerably less in a year due to wear and tear. In the same way that all new wealth must be accounted for, the algorithm must also take into account all products that are destroyed or thrown out and all drops in value. Such a monetary system would only be possible with an automated “super-infrastructure” able to keep track of all transactions without the interference of centralized forces. Although science fiction today, such a system may be the future of our monetary systems.

Privatization vs. Communization; In Search of Resolution

It is the philosophy of property ownership that sets economic and political parties apart, and the argument is often between private and public property, between the privatization of goods and the communization of goods. For the longest time I have set my allegiance with the idea of publicly owned goods and services, but the truth is that neither system is capable of producing the perfect society we often read about in utopian literature.

The philosophy of hardcore libertarians (the conservative kind, that is) on privatization is based on the ideals of John Locke, who essentially believed that a man has an unalienable right to the “fruits” of his labor. This is a moral I essentially agree with, meaning that it is a moral principle a society can be built on, and one aligned with qualities of human nature such as jealousy.

However, there are limitations to the implementation of such a notion. Not all things are the result of labor. Land, for example, is not a result of anyone’s work; it exists independent of man. It is thus very difficult to justify someone’s ownership of land via Lockean ideals. The same applies to capital: money from sources such as inheritance is not the result of an individual’s effort. Simply privatizing everything creates contradictions within the morals of a society, which gives the sense that a society that is supposed to be fair is in fact not. Contradictions lead to conflicts, and such is the way of the world. These contradictions could be relieved if the goods void of labor were publicized, communized, or made non-excludable. The converse of the Lockean principle still holds: no man has a right over things devoid of his labor; it is simply not fair for one to claim something that is not “his” under the morals of “private property”.

However, implementation may be difficult. If all land and capital are in reality public, and if all goods are produced from land and capital, then we are essentially saying that nothing is truly 100% “private”. For example, a wooden chair is made from trees that came from a plot of land that was essentially public. The true value of someone’s labor lies in the difference between the value of the trees/land in their natural form and the value of the chair, the finished product. Mathematically this can be represented by the following equation:
F = Vf - Vo, where F represents the “fruit” of someone’s labor (the excludable value), Vf represents the value of the finished product (the wooden chair from the previous example), and Vo represents the value of the raw materials that were gathered and used (the non-excludable value).
F is the true value that a worker added through his efforts; it is the true value of the “fruit” of someone’s labor.

Although nothing is truly “private”, publicizing all goods and services does not solve the issue either, because Lockean principles still exist as the underpinnings of our society. The mixed economies of today possess the potential to alleviate some of the contradictions that result from poor implementation, by using both privatization and public ownership, and I believe this is why they tend to be more successful. However, alleviating all the contradictions I have discussed requires more precision, which may prove very difficult with the economic understanding we have today; still, I believe the resolution lies in solving this very contradiction.

Nuclear power, Nuclear weapons, and a Nuclear Civilization

It is hard to decide which choices are correct, but it is important to understand the basic implications of the choices we make and the choices we do not make. We cannot analyze the situation until we begin to comprehend it. With that being said, I do not intend to give a pro-vs-con argument about whether nuclear power is good for us; I intend to explore the implications of nuclear power for human history.
First of all, it is important to note that there is an important relation between productive power and destructive power; both are artificial manipulations of power. Fire and wood have little potential for productive energy, but they also have little potential for destructive energy. Nuclear power, however, both fission and fusion, is understood to be a Pandora’s box of enormous energy, and it is due to this fact that it poses risks. But it is not only the risks of harnessing electricity from nuclear power that should concern us; it is also important to note the correlation between the use of nuclear power and nuclear weaponry.
At the moment, one of the major setbacks for nuclear energy is that it is not cost-efficient; it requires heavy subsidies from governments worldwide. This, however, is only a temporary problem. It is not hard to assume that advancements in technology will increase the efficiency of such procedures and bring costs down. But this will not only make nuclear energy more affordable and more accessible; it will end up having the same effect on nuclear weaponry.
The democratization of such powerful armaments would have adverse effects on civilization globally. George Orwell’s view was that the accessibility of “the mode of warfare” affects the level of organization in human systems: when more powerful weapons become more accessible, civilization tends to democratize, because power is taken away from dictators and authorities and given to the masses. The discovery of gunpowder and its gradual accessibility, in Orwell’s eyes, led to the establishment of democracies in the West. Orwell thus believed there would be a pendulum of power between centralized powers and the masses, swinging between eras of discovery and eras in which those discoveries become democratized.
I propose, however, that there might be a much larger trend at play. It is the tendency of technological advancement, under the motives set by the current system, to make more and more powerful tools at a more and more affordable rate. At first this allows for the construction of centralized systems and of civilization in general, but past a certain critical point it becomes deconstructive. The democratization of nuclear weaponry would surely make it harder for governments to retain centralized control. This includes even democratic systems ruled by the majority; democratized nuclear weaponry would give the minority enough potential power to offset this balance of power. And here is my one critique of Orwell’s perspective: there is a point at which the power of new weapons is no longer relevant; from the perspective of our civilization at the moment, there is no difference between the effects of a weapon that can destroy a planet and one that can destroy the galaxy. The democratization of either weapon would have the same calibre of effect on the formation of governments. So it does not matter whether a government discovers a more powerful source of energy than nuclear power; it will not necessarily lead to more authoritarian control. This is the theory of “constructive/deconstructive civilization”.

[Fig 1: the construction/deconstruction (CD) curve]

Fig. 1 is a graphical representation of this concept. The x-axis represents technological advancement, from very weak tools (left) to very powerful ones (right). The y-axis represents the organization of human systems, from decentralized (bottom: anarchy) to centralized (top: authoritarian). The “construction/deconstruction curve” (CD curve) correlates directly with technological advancement until a critical point (LT) is reached, after which the relationship inverts. Although such a theory has yet to be proven or observed, I believe there is enough empirical and rational evidence for it to be considered a plausible explanation of the phenomenon.

Thought Experiment on the Existence of Time

Does time exist independent of perception? Or is it a result of perception?
First of all, we need to carefully define time. Time is the distance (duration) between events; it is what makes events unique to themselves.

[Fig 1: lunch and dinner as two events separated by a duration]

Fig 1. represents time as the duration/distance between two events: lunch and dinner. Let’s say that the approximate time between these 2 events is 6 hours.

However, how one experiences time depends on other factors, e.g. biology and Einstein’s theory of relativity. Biology tells us that our internal clocks can change our perception of time (which could explain the saying “time flies as you age”); there have been multiple cases in neurology in which patients reported changes in the rate at which events passed (the rate of time) following physical changes in their anatomy. The theory of relativity gives us time dilation: how you experience time changes depending on your velocity.

Now imagine that there is someone who experiences time twice as fast as the person in Fig 1.

Fig 2 represents person 2, who experiences time twice as fast as person 1. We write the shortened distance as 3 perceived hours.

[Fig 2: the same two events separated by 3 perceived hours]

The faster the subject perceives time, the shorter the distance between events becomes. Fig 3 represents the inverse relationship between t, the duration between events, and v, the rate at which the subject perceives time.

[Fig 3: the inverse relationship between t and v (for the lunch-to-dinner example, t = 6 / v)]

What if we keep shortening this distance? Here is the thought experiment: if the subject were to experience time at an infinite rate, v = infinity, the distance between events would become infinitely small, t = 0. Mathematically we can represent the process as follows:

lim (v -> infinity) t = lim (v -> infinity) 6 / v = 0
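
A quick numerical check of that limit, using the 6-hour lunch-to-dinner gap from fig 1 (the specific rates below are arbitrary):

```python
# Perceived duration between lunch and dinner, t = 6 / v, for ever-faster perception rates v.
base_gap_hours = 6.0
for v in (1, 2, 10, 100, 10_000, 1_000_000):
    print(f"v = {v:>9}: perceived gap = {base_gap_hours / v:.6f} hours")
```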

Thus if you were the one to experience v = infinity, all events would reach a singularity and converge into one event. In that case time does not exist; all events become one supra-event (represented by the fig below).
[Figure: all events converging into a single supra-event]

Although such an experience is unlikely to exist, the thought experiment tells us that the existence of time relies on how we perceive our reality and is not an objective configuration that exists independently of human cognition. If there is anything to be learned from the rationalist school of thought, exemplified by Descartes, it is that entities which rely on human cognition to exist cannot be validated as existing in reality independent of perception.

Action and consequence are one. Only time separates action and consequence; time is artificial; and so, there within the action lies its consequence. -Buddhist Philosophy