Transcendence (2014)

02 Jul 2014

incorrect physics/sciences/engr
thought provoking thinking
emotional resonance
empathy and sympathy

fresh perspective
layered meaning (good for over-thinking/multiple viewing)

immersive atmosphere
beautiful scenes

acceptable suspense (developments predicted minutes before)

intense action
realistic effects

untraceable (natural) acting


I could (and probably am) very well be overthinking (over-estimating) the writers’ intentions, but I most definitely found several important and good messages in the film. However, that is not to say the film is excellent; many mistakes and inconsistencies exist.

First of all, the obvious antagonist (and rightfully so) is the terrorist group/leader (and inferior, non-rational human nature). And I am very glad that the film subtly pointed this out by showing the positive nature of AI-Will. It is very interesting how Will thought Max, Joseph, and the monkey doctor were not as “intelligent” as himself; the statement is arrogant but true. Yet clearly Will could have done things better, though that could be Captain Hindsight working in my brain (probably not, for some issues at least).

Second of all, I paused and predicted the illogical nature of the terrorist group’s activities right before Will said it. Bravo.

Now let’s discuss the things that I got out of this film:

The first message I found is in line with what AI-Will and Will said: you can't not be afraid of what you don't understand; you can't not be afraid of what's more powerful than you. Now, this is not only depicted by the other researchers' misunderstanding of AI-Will; I have experienced it firsthand with the perpetually intimidated mourning doves (intimidated by my petting and feeding attempts; notice how, in the very end, I felt they were so ungrateful that I almost went out of line and forced my hand on the baby dove for petting. How interesting! That is exactly what AI-Will did after the first violent pushback! (Substitute petting with improving Earth, and the baby dove with humans.)) and with my personal fear of randomly murderous psychopaths (because I would not be able to understand, and thus predict, them. Yes! That is where this fear originates! Because I don't understand them and am inferior power-wise, I cannot guarantee the other party's behavior! And does this mean that life/living things are inherently logical/rational deep in the subconscious?!).  
The second is that, as a result of fear, we humans (living things) choose to isolate, factionalize, and cease direct confrontation and communication. Clearly the terrorist leader girl did not openly discuss her concerns with her professor but decided to resort to hateful, unproductive, barbaric violence; yet even Evelyn had been a horrible partner since the inception of AI-Will - she never attempted to reason or discuss with AI-Will despite her discomfort and dread. And given the film's portrayal of this new AI-Will, evil or benign, as an enlightened thinker, a forward discussion on how certain changes were out of line, endangering others, and inciting fear in others could have saved both parties so much loss! Yet I can sit here all I want, but I myself cannot completely walk out of this shadow. Luckily I believe I can achieve Franklin's vision of true collective deliberation when interacting with people I feel close to (in other words, the tragedy of the film would (hopefully) be prevented altogether should my future partner become the AI god of the world). [Proof: remember the music disagreement with Dawson? Maybe I remember my "achievements" too fondly. I mean, I did not reconcile with Rikki on that crying blow-out D:]  
Bottom line: you can only comprehend up to your own imagination (thus it would be your projection; this goes back to the idea that true empathy is impossible without brain-to-brain network communication: we can only build simulations of others in our own minds).  

Finally, let’s get to the problems of the film. First and foremost, the end sequences are clearly physics/logic-defying. Specifically, I don’t see how those solar panels could provide enough energy to form clouds upon clouds of very intricate nanotechnology robots, even if they use surrounding energy to replicate (that would require them to somehow convert energy from thin air; what, would that not suck the life force out of their surroundings?). But even should I suspend that disbelief, AI-Will told me he didn’t have enough energy to both save Evelyn and upload the virus. Cough[Bullshit]. (Don’t get me wrong, I loved the tragic ending.) Secondly, AI-Will’s intelligence fluctuates immensely. He could cure cancer, Alzheimer’s, etc., yet could not find/fix the exploit that Max’s virus used; he foresaw that it would be dangerous for Evelyn to stay at the center, yet could not try less fear-inciting defenses to appease the feds (or Evelyn. My god, talking through a proxy like the Professor is creepy when you are trying to communicate love). Also, Evelyn just used some buzzwords when trying to figure out how to work AI-Will. BOO, give it some effort.