Arrival
As Eric said, the worst case is when your AirAsia flight arrives at the gate furthest from immigration. Unfortunately, when I fly with AirAsia or Jetstar I almost always depart from or arrive at the far gates (C16-19, C24-26, D35-38, D46-49). On top of that, when you arrive in Singapore there is sometimes a random check on hand-carry items (I have encountered it a few times, and after last week's Jakarta bombing there will likely be checks again). Walking takes around 10 minutes, the x-ray check takes at most 5 minutes, and immigration takes around 10-15 minutes. So you will spend around 30 minutes on arrival.
Transfer
This depends on your luck. You will have to take the Skytrain from T1 to T3 (go up to the departure hall first). If you arrive at the C concourse you are lucky, because C is near the Skytrain to T3; but if you arrive at the D concourse (which is close to T2), you have a considerably longer walk. The worst case will take you around 10 minutes to reach Terminal 3.
When you arrive at Terminal 3, the closest check-in row is row 11, while Lion Air is located at... row 1. So you have to walk to the other end of the departure hall. That will take around 3-5 minutes, so in total the terminal change needs around 15 minutes.
Departure
I've flown JT163 (Lion Air's afternoon Singapore - Jakarta flight) several times. Check-in will not take long, as Changi is very efficient, so 5-10 minutes should be enough. The bad part is the gate: the flight is often (if not always) assigned to gates A16-A20, and the walk from departure immigration to the gate can take 15 minutes on its own. Surprisingly, I have never experienced a delay on this flight, so even if you often hear about Lion Air's bad reputation for on-time performance, I don't think you should expect that here. I'd budget 30 minutes for departure.
Of course my estimates are a bit padded, but it is better to spare some time, right? So I recommend taking the earlier flight. Good luck, and I hope your trip goes smoothly :)
Tuesday, January 26, 2016
What is an intuitive explanation of the natural proof, relativization, and algebrization barriers to solving the P vs. NP problem?
"Intuitive" is hard because some of these results are pretty nonintuitive. I'll just tackle the relativization barrier.
The Baker-Gill-Solovay theorem says that there is some oracle A "relative to which" P^A = NP^A, and a different oracle B for which P^B ≠ NP^B. What does this formalism mean? Normally, the complexity class P refers to languages recognized by Turing machines in a polynomially-bounded amount of time. But for P^A, the Turing machine has a subroutine (the oracle) which can recognize a language A, or a class of languages A, in one step.
For example, P^3SAT is the complexity class of languages which a Turing machine can recognize in polynomial time, if it has access to a magic 3SAT solver. This complexity class is at least as large as NP, since 3SAT is NP-complete. P^P is an example of using an entire class as the oracle, rather than an individual language.
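To make the idea concrete, here is a toy sketch in Python (my own illustration, not how oracle machines are formally defined): the "oracle" is a brute-force 3SAT checker, and the surrounding "machine" treats each oracle call as a single step, even though the oracle itself does exponential work internally.

```python
from itertools import product

def oracle_3sat(clauses, n_vars):
    """Brute-force 3SAT 'oracle'. Clauses are lists of (var, positive)
    literals. Exponential here, but an oracle machine gets to count
    each call as one step."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[v] if pos else not assignment[v]
                   for v, pos in clause)
               for clause in clauses):
            return True
    return False

def satisfiable_with_x0_true(clauses, n_vars):
    """A 'polynomial-time machine with oracle access': decide whether
    the formula stays satisfiable after forcing variable 0 to True,
    by adding a unit clause and making one oracle call."""
    forced = clauses + [[(0, True)]]
    return oracle_3sat(forced, n_vars)

# (x0 or x1) and (not x0 or x1)
clauses = [[(0, True), (1, True)], [(0, False), (1, True)]]
print(oracle_3sat(clauses, 2))               # True
print(satisfiable_with_x0_true(clauses, 2))  # True
```

The point of the formalism is precisely this separation: the outer routine's cost is measured only in its own steps plus one unit per oracle query, regardless of how hard the oracle's language is.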
What the theorem showed is that an oracle for TQBF (True Quantified Boolean Formulas) is strong enough to make P^TQBF = NP^TQBF. TQBF instances are logical formulae containing existential and universal quantifiers--- that is, statements like "for all X, there exist Y and Z such that X and Y imply not-Z". TQBF is a PSPACE-complete problem.
The details are technical, but basically the proof looks like NP^TQBF ⊆ NPSPACE^TQBF ⊆ NPSPACE = PSPACE ⊆ P^TQBF. TQBF doesn't add anything to NPSPACE (nondeterministic polynomial space); a previous theorem (Savitch's) shows NPSPACE = PSPACE; and because TQBF is NPSPACE-complete, a Turing machine with access to a TQBF oracle must be at least as strong as PSPACE.
The other half of the theorem is coming up with an oracle B that definitively makes nondeterminism stronger, so that P^B ≠ NP^B. The general approach is to define a very simple language SL(B), which consists of all strings exactly the same length as something in language B! (Which we haven't defined yet.) Obviously a nondeterministic Turing machine can just "guess" some word of the right length and then use the B-oracle to verify that the word is in B. The hard part of the proof is coming up with some B whose members a deterministic Turing machine can't generate in polynomial time--- so the oracle doesn't help. The proof uses diagonalization, but the details are really hairy.
OK, so what does that show?
What the theorem gives us is that any "proof" or proof technique for P = NP or P ≠ NP that isn't sensitive to the presence of an oracle cannot work. That's because we showed that, depending on the oracle, the two could be equal or not equal.
Suppose a colleague slips us a paper purporting to show P ≠ NP. We can search-and-replace every occurrence of P by P^TQBF and NP by NP^TQBF. Obviously the substituted proof must be flawed, because the substituted classes are equal. So (if the proof is correct) somewhere it must make use of a property of P that is not true of P^TQBF.
"Duh!" you might say. Well, there is one very common proof technique which relativizes (i.e it is not sensitive to the presence of an oracle): Diagonalization.
The Time Hierarchy Theorem was proven using diagonalization. It says, roughly, that for every big-O time class, there are problems that take at least that much time on a Turing machine. So there are problems that take O(n^5000) time to solve but can't be solved in O(n^4999). The proof is equally valid if we substitute in a Turing machine with an oracle! Even our monster P^TQBF has problems that can be solved in O(n^2) but not in O(n).
Simple diagonalization is essentially a "counting" argument, which doesn't depend on the structure of the problems involved, only the final result. And thus it can't solve P vs. NP. (More sophisticated versions of the proof technique exist, though.)
An analogy that may be helpful is proving statements about numbers. There are some statements you can prove about integers that are independent of whether you're working in Z or in Z mod p. For example, we can show that 2 + 2 = 2 * 2 in any number system obeying the usual definitions of addition and multiplication. But in some systems 2 + 3 ≠ 0 and in other systems 2 + 3 = 0. Thus, our number theory proofs "relativize" if they are ignorant of the "mod p" but are "nonrelativizing" if they depend on the particular modulus, or lack thereof.
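The analogy can be checked directly. A quick sketch in Python, using p = 5 as an example modulus:

```python
p = 5  # an example modulus

# "Relativizing" fact: holds both in Z and in Z mod p.
assert 2 + 2 == 2 * 2
assert (2 + 2) % p == (2 * 2) % p

# "Nonrelativizing" facts: whether 2 + 3 equals zero
# depends on which number system you are working in.
assert 2 + 3 != 0          # in Z
assert (2 + 3) % p == 0    # in Z mod 5
```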
What are some things that happen in movies that most people think are bullshit, but are actually true?
I don't know if this is something people would "call bullshit" on, but it's a pretty cool moment captured on camera.
In Quentin Tarantino's Django Unchained, Calvin Candie (Leonardo DiCaprio) confronts Django (Jamie Foxx) and Dr. Schultz (Christoph Waltz) in arguably the most climactic scene of the entire movie. In a fit of rage, Candie slams his hand down on the table and accidentally crushes a wine glass, leaving a large gash in his hand. Without breaking character, DiCaprio finishes the scene with copious amounts of his blood dripping down his hand and arm. He even rubs the blood on the face of Broomhilda von Shaft (Kerry Washington) to her horror, quite understandably.
After reviewing the takes, the one mentioned above was selected for the final cut (no pun intended). He later received stitches for the wound and mentioned it during several interviews.
Whenever I watch the movie, I wait for that scene in anticipation and am awed by the dedication of DiCaprio to his character as well as the reactions from the rest of that incredibly talented ensemble.
Why does Germany still not have veto power in the UN, considering they are one of the world's leading economies?
The UN Security Council's permanent members are the major Allied powers of the WWII victory: the United States of America, the Soviet Union (succeeded by the Russian Federation), the Republic of China (replaced by the People's Republic of China), France, and the United Kingdom. As Germany was part of the Axis, it did not get a permanent seat when the UN was formed, and changing the council's structure means amending the United Nations Charter, which requires ratification by all five permanent members.
And while it could make sense to give Germany a permanent seat, it would face much opposition as well:
- Germanophobia is in full swing in Europe even today, as people from the PIGS countries (Portugal, Ireland, Greece, Spain) resent Germany's strong role in preserving the European Union. One wonders how much more widespread it would become if Germany assumed a bigger international presence.
- Germans themselves may not actually want a bigger international presence.
It also opens a pandora's box of which other countries should have a permanent seat.
- India was not yet an independent country at the time of the UN's founding, and should have a seat for all kinds of reasons (economic, demographic, geographic).
- There is no African country with a permanent seat.
- There is no Latin American country with a permanent seat.
- Should France and the UK still have a permanent seat, as there are already 3 European countries with a permanent seat?
- There is no Muslim country with a permanent seat; should there be one? Turkey and Iran are likely the best candidates on paper, but are unlikely to ever be accepted by the world or by Muslims themselves. Indonesia could also be a good candidate on paper, but it might not be accepted either.
I would personally support a German permanent membership of the UN security council, but getting this done is fraught with problems.
How far can artificial intelligence go?
I. J. Good and Vernor Vinge noted that if humans could produce smarter-than-human intelligence, then so could it, only faster. Good called this phenomenon an intelligence explosion. Vinge called it a singularity. Ray Kurzweil extends Moore's Law to project that global computing capacity will exceed the capacity of all human brains (at several petaflops and one petabyte per person) in the mid-2040s. He believes that a singularity will follow shortly afterward. This assumes that global computing capacity (operations per second, memory, and network bandwidth) continues to double every 1.5 years, as it has been doing since the early 20th century.
Current global computing capacity is about 10^19 operations per second (OPS) and 10^22 bits of memory, assuming several billion computers and phones. In 30 years, these should increase by 6 orders of magnitude. Ten billion human-brain-sized neural networks with 10^14 connections each, at a few bytes per connection and running at 100 Hz, would require roughly 10^26 OPS and 10^26 bits.
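These estimates are easy to reproduce. A back-of-the-envelope sketch in Python (the four-bytes-per-connection figure is my stand-in for "a few bytes"):

```python
# Projected growth: doubling every 1.5 years for 30 years.
doublings = 30 / 1.5
growth = 2 ** doublings            # 2^20 ~ 1e6: six orders of magnitude

# Requirements for ten billion brain-sized networks.
brains = 1e10
connections = 1e14                 # synapses per brain
rate_hz = 100                      # update rate
bytes_per_connection = 4           # assumption for "a few bytes"

ops_needed = brains * connections * rate_hz                     # ~1e26 OPS
bits_needed = brains * connections * bytes_per_connection * 8   # ~3e25 bits
```

The memory figure lands a factor of a few below the quoted 10^26 bits, which is within the slack of "a few bytes per connection".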
It is hard to predict what will happen next because our brains are not powerful enough to comprehend a vastly superior intelligence. Various people have predicted a virtual paradise with magic genies, or a robot apocalypse, or advanced civilization spreading across the galaxy, or a gray goo accident of self replicating nanobots. Vinge called the singularity an event horizon on the future. We could no more comprehend a godlike intelligence than the bacteria in our gut can comprehend human civilization.
Nevertheless, physics (as currently understood) places limits on the computing capacity of the universe. Flipping a qubit in time t requires energy h/2t, where h is Planck's constant, 6.626 x 10^-34 joule-seconds. Seth Lloyd, in Computational capacity of the universe, estimates that if all of the mass of the universe (about 10^53 kg) were converted to 10^70 J of energy, it would be enough to perform about 10^120 qubit-flip operations since the big bang 4 x 10^17 seconds ago (13.8 billion years). This value roughly agrees with the Bekenstein bound of the Hubble radius, which sets an upper bound on the entropy of the observable universe of 2.95 x 10^122 bits.
Writing a bit of memory, unlike flipping a qubit, is a statistically irreversible operation, which requires free energy kT ln 2, where T is the temperature and k is Boltzmann's constant, 1.38 x 10^-23 J/K. Taking T to be the cosmic microwave background temperature of 3 K, the most we could store using 10^70 J is about 10^92 bits. This roughly agrees with Lloyd's estimate of 10^90 bits, which he calculated by estimating the number of possible quantum states of all 10^80 atoms in the universe.
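Both limits follow from plugging the constants into the formulas above. A sketch in Python (the op count comes out within an order of magnitude of Lloyd's 10^120; his derivation uses a slightly different prefactor):

```python
import math

h = 6.626e-34     # Planck's constant, J*s
k = 1.38e-23      # Boltzmann's constant, J/K
E = 1e70          # J: mass-energy of the universe
t = 4e17          # s: time since the big bang
T = 3.0           # K: cosmic microwave background temperature

# Each qubit flip over time t costs h/(2t), so the total flip count is:
ops = 2 * E * t / h                  # ~1e121, near the quoted 10^120

# Landauer limit: each irreversible bit write costs k*T*ln 2, so:
bits = E / (k * T * math.log(2))     # ~3.5e92, near the quoted 10^92
```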
If we restrict our AI to the solar system and captured all of the sun's output of 3.8 x 10^26 W using a Dyson sphere with radius 10,000 AU and temperature 4 K, then we could perform 10^48 OPS (bit writes per second). To put this number in perspective, the evolution of human civilization from dirt 3.5 billion years ago required 10^48 DNA base copy operations and 10^50 RNA and amino acid transcription operations on 10^37 DNA bases over the last 10^17 seconds. Thus, our computer could simulate the evolution of humanity at the molecular level in a few minutes, a speedup of 10^15. Anything faster would require interstellar travel or speeding up the sun's energy output, perhaps by dropping a black hole into it. (A naive extrapolation of Moore's Law suggests this will happen in the year 2160, 75 years after we surpass the computing power of the biosphere.)
Note: to estimate the computational power of evolution, I am assuming 5 x 10^30 bacteria with a few million DNA bases each, and a similar amount of DNA in other organisms. I am assuming a replication time of 10^6 seconds per cell, and that DNA replication makes up 1% of cell metabolism. See also An Estimate of the Total DNA in the Biosphere.
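The note's numbers can be cross-checked the same way. A sketch under the stated assumptions (two million bases per cell stands in for "a few million"):

```python
bacteria = 5e30
bases_per_cell = 2e6      # assumption for "a few million"
gen_time = 1e6            # s per cell replication
duration = 1e17           # s of evolution so far

total_bases = bacteria * bases_per_cell          # ~1e37 DNA bases
copy_ops = total_bases * (duration / gen_time)   # ~1e48 base-copy operations

# Biosphere's effective rate vs. the Dyson-sphere computer's 1e48 OPS:
total_ops = copy_ops + 1e50              # add RNA/amino-acid transcription
biosphere_rate = total_ops / duration    # ~1e33 OPS
speedup = 1e48 / biosphere_rate          # ~1e15
sim_time = duration / speedup            # ~100 s: "a few minutes"
```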
Why is it when there is a humanitarian crisis like Ebola, it's America and the West who send aid, we don't hear of China, India or other BRICs countries sending aid yet they have just as much to lose if the virus gets out of hand?
You are wrong. In the Ebola fight, most of the western nations are way behind.
Cuba has been the leading country on the ground for the past several months. For an island nation of 11 million people, it has 471 physicians there. France (through Médecins Sans Frontières) has 250 physicians on the ground. China has over 200.
According to the IMC, the U.S. has fewer than 10 doctors registered to volunteer to fight Ebola. See "Cuba leads fight against Ebola in Africa as west frets about border security".
The US has pledged $400 million in aid, which is great. But right now there are literally hundreds of millions of dollars sitting there, and the WHO can't find people to use them. At the end of the day, you need people to treat the disease, and it's hard to recruit volunteers to deal with a disease with a 50% fatality rate.
If you do some research in this area, you will see that Cuba is by far the leading medical-care contributor in the world. It often sends a couple of thousand doctors to disaster areas to help treat patients and train the local doctors. It's their tradition and something the Cubans really believe in. See The Independent: "Cuban medics in Haiti put the world to shame".
What is a 1up from microsoft paint, but still free & simple?
I second John Colagioia's view -- GIMP is free but not simple. It's very powerful but awkward to learn, and unless something has changed, it starts up incredibly slowly. I also recommend Paint.NET, which is not available at http:// paint dot net, but at getpaint.net. It's a good step up from Microsoft Paint.