
complexityweblog · Computational Complexity Weblog

#### Group Information

• Members: 121
• Category: Algorithms
• Founded: Jan 13, 2003
• Language: English


### Messages

Messages 1173 - 1202 of 1688
#1173 From: GASARCH <gasarch@...>
Date: Wed Oct 1, 2008 3:48 pm
Subject: [Computational Complexity] When does Alice deserve to be a co-author?


When has someone done enough work to deserve co-authorship? Here are a few scenarios I have seen. I will say what did happen, not what should have happened.

• Alice comes up with the problem and makes some obvious observations about it; she shows it to others, who solve it. She writes it all up. (Alice got the co-authorship.)
• Alice comes up with the problem and makes some obvious observations about it; she shows it to others, who solve it. She does not write it up but does some proofreading. (I've seen this twice: once she got the co-authorship, once not.)
• Bob shows Alice a problem, and Alice makes one observation that is the key. She has put only 2 minutes of work into the paper. (Alice got the co-authorship.)
• Bob has already written up the 50-page paper and it covers all of the cases except one. Alice solves the one case left. It was not a hard case at all. (Alice did not get the co-authorship.)
• Bob and Carol solve the problem together. Alice is in the room and does not contribute anything, but her tenure case is coming up soon and she is a bit short on papers. (Alice got the co-authorship. This is a rare scenario.)
• Alice proofreads a paper and finds a serious flaw in it, but fixes that flaw. It was a rather subtle flaw and needed a very clever argument to fix. (Alice got the co-authorship.)
• Alice reads Bob's paper and comes up with a generalization that Bob does not care about, but that wouldn't really be worth a paper on its own. (The paper did not include the generalization and Alice did not get the co-authorship.)
• Alice makes a key observation that makes all of the proofs shorter. The conference version has her as a co-author. However, while writing up the journal version it is noticed that Alice's contribution was not correct. It is removed, and the journal version is pretty much what it would have been had Alice not been involved at all. (Alice asked to be taken off of the journal version. The other authors were surprised by this and did not mind keeping her as an author. They pointed out that there were other people on the paper who were even less involved. But they agreed to take her off at her insistence.)

-- Posted By GASARCH to Computational Complexity at 10/01/2008 10:48:00 AM

#1174 From: Lance <fortnow@...>
Date: Thu Oct 2, 2008 11:54 am
Subject: [Computational Complexity] Go Sox


After beating three different teams in three days to make the playoffs, my White Sox join the Cubs at an historic time in Chicago: the first time both Chicago baseball teams are in the postseason since that great 1906 World Series. The Sox open the playoffs today against the surprising AL East winner Tampa Bay. With the Boston Red Sox, that makes three teams for which I have at some point held season tickets (I lived walking distance from Wrigley for a year, and I organized season tickets for the theory group at MIT as a student there). Add on Milwaukee, a team I have grown to like: they play in a new stadium that I can get to more easily than either of the Chicago ballparks, and you can't beat the sausage race. And four more teams that hopefully will all be eliminated in the first round. This is just the ultimate playoff year to keep me distracted from various fiscal crises and political activities. I might not get much research done in October. The eyes of America look to the Cubs, who last won a World Series in 1908, one hundred years ago. But although I now teach north of the city, my heart remains with the Southsiders. The best outcome: the White Sox win the World Series in five games. Why five and not the full seven? Because it is so much more fun to see the Cubs choke at home. -- Posted By Lance to Computational Complexity at 10/02/2008 06:53:00 AM

#1175 From: GASARCH <gasarch@...>
Date: Fri Oct 3, 2008 2:38 pm
Subject: [Computational Complexity] A minor but perhaps indicative failure of technology


In February of 2007 I needed to look up on the Hallmark Channel website whether they had any program changes. The website did not list any program changes, but it did give a number to call: 1-888-390-7474. I called it and heard: You have reached viewer services at the Hallmark Channel. In order for us to better serve our viewers, this mailbox is specially designed to answer most of your questions. If you are having trouble viewing the Hallmark Channel, press 1. For a list of program changes, press 2. I thought GREAT! I pressed 2. The following changes have been made to our program schedule as of Wednesday, June 29, 2005. Jane Doe: The Harder They Fall, originally scheduled to air Friday, July 1, 2005 at 9:00PM Eastern and Pacific, 8:00PM Central, and 7:00PM Mountain time, has been replaced by Matlock: The Power Brokers. They had not updated their phone message in two years. I left a message (If you would like to leave a comment, press 4) telling them of their error. I have since called on April 1, May 1, and June 1 of 2007 to see if they had corrected it. They had not. I've also emailed them, to no effect. (It has been fixed since then.) How could a phone line be this out of date? How could the website maintainers not know this? They got careless and are not suffering for it, so they don't know they got careless. I'm not going to stop watching their shows because their website gives a phone number that is two years out of date. There is a lack of coordination: the right hand does not know that the left hand stopped working two years ago. For this particular issue the breakdown of technology is not important, but it troubles me that in other domains it might be. -- Posted By GASARCH to Computational Complexity at 10/03/2008 09:37:00 AM

#1176 From: GASARCH <gasarch@...>
Date: Mon Oct 6, 2008 3:08 pm
Subject: [Computational Complexity] The Unexpected Hanging Paradox-any serious math th...


(20th Maryland TCS Day Tue, Oct 14, 2008 9:30am - 4pm) The liar paradox is very similar to the serious math of Gödel's Incompleteness Theorem. Berry's Paradox is very similar to the serious math of the Kolmogorov function (map x to the length of the shortest description of x) being noncomputable. Is there any serious math that is similar to the Unexpected Hanging Paradox? Here is the paradox: A judge tells a condemned man that he will hang on either M, Tu, W, Th, or Friday at 1:00PM, and that he will be surprised when it happens. His lawyer tells him not to worry: the hanging can't happen on Friday, since if he is alive at 1:01PM on Thursday then he knows it will be on Friday and hence won't be surprised. However, he also knows he can't be hanged on Thursday, since Friday is out, so if he is alive at 1:01PM on Wednesday he will KNOW that it will be Thursday. Hence he won't be surprised. Hence it can't be Thursday. Proceeding like this, he reasons that he can't be hanged. On Thursday the judge comes with the noose- and the prisoner is surprised. I'm not asking for a resolution-- it seems to be a significant problem for philosophy-- however, I am asking: Is there some serious math that is similar to this paradox? -- Posted By GASARCH to Computational Complexity at 10/06/2008 10:06:00 AM

#1177 From: Lance <fortnow@...>
Date: Tue Oct 7, 2008 11:30 am
Subject: [Computational Complexity] Expectation of Expectations


Can we quantitatively judge the effect of a debate on the election? One reasonable approach would look at prediction markets. If the value of the security for Obama to win greatly increased after the debate in the absence of significant other factors, one could conclude that the debate went well for Obama, or at least that he did better than expected. Intrade, the Irish real-money prediction market site, went one step further and set up a new security, 2ND.DEBATE.OBAMA, based on the outcome of tonight's presidential debate. In short: This contract will be settled according to which Presidential Candidate receives the largest increase in value following the Presidential Debate on Tuesday October 7th. The pre-debate value will be calculated for 2008.PRES.OBAMA and 2008.PRES.McCAIN and then compared with the post-debate value for each of these contracts. This contract will expire at 100 if the value of 2008.PRES.OBAMA increases more than the value of 2008.PRES.McCAIN. This contract will expire at 0 if this is not the case. A similar security for the VP debate paid off zero, as there was a slight bump for McCain suggesting Palin did better than expected. But there is a flaw in the logic for such a security: the market should already take into account the expectation of the debate. In general, for any random variable A, the expectation of the expectation of A is the same as the expectation of A, i.e., E(E(A))=E(A). So, assuming that these securities represent real probabilities, should the value of 2ND.DEBATE.OBAMA before the debate be 50? Not quite. Suppose that there is a 75% chance that the Obama stock goes up 5 and a 25% chance it drops 15. Then the price of 2ND.DEBATE.OBAMA should be 75. The price of the security indicates who has the least chance of a major downside from the debate. But to think one could use this security to predict how well the debate will go makes little sense. The press release suggests using this security for real-time debate tracking. The price of 2ND.DEBATE.OBAMA should gyrate dramatically on small changes in 2008.PRES.OBAMA. But one could just derive this directly from 2008.PRES.OBAMA, and since the latter security has much greater liquidity it will likely give better information. Chris Masse suggested another explanation: [Intrade] is trying to induce traders into trading more (so as to get more transaction fees), and this new contract offers more upside for the traders who bet right. One could double one's initial investment in a matter of 2 days. In which case, more power to them. -- Posted By Lance to Computational Complexity at 10/07/2008 06:29:00 AM
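The 75%/25% example above can be checked with a few lines of arithmetic. A minimal sketch (the function name is mine, and reducing the contract's settlement condition to "the Obama contract moves up" is my simplification of Intrade's rule):

```python
def debate_security_price(outcomes):
    """Price (on a 0-100 scale) of a contract paying 100 iff the Obama
    contract ends the debate higher.  `outcomes` is a list of
    (probability, change in 2008.PRES.OBAMA) pairs."""
    return 100 * sum(p for p, delta in outcomes if delta > 0)

# the post's example: 75% chance of +5, 25% chance of -15
outcomes = [(0.75, +5), (0.25, -15)]
print(debate_security_price(outcomes))   # 75.0 -- far from 50...
print(sum(p * d for p, d in outcomes))   # 0.0  -- ...yet the expected move is zero
```

The point of the example: the debate security can price at 75 while the expected change in the underlying contract is exactly zero, so its price measures downside risk, not how the debate will go.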

#1178 From: GASARCH <gasarch@...>
Date: Wed Oct 8, 2008 3:54 pm


(This is not a partisan post. I am using a Republican example since it illustrates the point so well.) In most (all?) serious fields of academia it is not acceptable to directly contradict yourself. Not so in the field of political punditry. Karl Rove (and others) recently did this, as the following clip illustrates. My question is: why do they get away with it? Some thoughts. I only saw this on The Daily Show. No other media seems to have picked up on it. (Some blogs did.) Karl Rove will not be challenged on this. FOX NEWS will not call him into their office and ask him how he could say contrary things. Rove is only talking to fellow conservatives. It's like a Cold Fusion workshop or a Parapsychology conference or an Intelligent Design journal or a Bush rally: your audience is pre-picked. People think (perhaps correctly) that all pundits do it, so nobody is particularly criticized for doing it. (I've seen this reasoning used to defend negative ads.) If Karl Rove were challenged he might say The Daily Show ran that piece because they are part of the liberal media elite. Most people have not had a course in basic logic to tell them when two statements are clearly contradictory. (Though common sense should suffice.) There is an end-justifies-the-means mentality: if Karl Rove helps get McCain elected, then that is all that matters. Is there any serious field of academia where people can make contrary statements and not be called on it? -- Posted By GASARCH to Computational Complexity at 10/08/2008 10:53:00 AM

#1179 From: Lance <fortnow@...>
Date: Thu Oct 9, 2008 1:05 pm
Subject: [Computational Complexity] A Victim of the Internet


#1180 From: GASARCH <gasarch@...>
Date: Fri Oct 10, 2008 3:15 pm
Subject: [Computational Complexity] Patents and Copyrights vs Web Traffic (guest Post ...


(Guest post by Amir Michail) Title: Patents and Copyright vs Web Traffic. Software patents are intended to reward innovation, especially by startups with limited resources. But in today's Web 2.0 world of social sites such as Twitter, a service is more useful when it has more users. Even if the implementation contains something that is worthy of a patent, getting such a patent is not particularly important, because the service will generally have an overwhelming advantage over its clones in terms of number of users: new users will generally pick the service with the most users, resulting in much more web-traffic growth for the original service. Similarly, one could argue that copyright is unnecessary on the web: those who copy are unlikely to get anywhere near as much web traffic as the original source. For example, someone may copy posts verbatim from a popular blog, but it is unlikely that he/she will get anywhere near as much traffic. In particular, people already linked to the original source, thus giving it higher PageRank. Admittedly, there are occasions where things don't work out like this and where patents and/or copyright would have been helpful. Consequently, we could have a law that requires search engines to provide, for each search result, a link back to the original source, if any. This would be done using a completely automated heuristic matching algorithm. In this way, those who copy are even less likely to get anywhere near as much traffic as the original source. -- Posted By GASARCH to Computational Complexity at 10/10/2008 10:13:00 AM

#1181 From: GASARCH <gasarch@...>
Date: Mon Oct 13, 2008 3:48 pm
Subject: [Computational Complexity] Thanks for better upper bound. Still want better l...


I posted on the max problem a while back. Alice has x, an n-bit integer. Bob has y, an n-bit integer. They both want to know max(x,y). This can be done with a deterministic n + \sqrt{2n} + O(1) bits. Can they do better? Anon Comment 5 on that post, and possibly a paper that was pointed to, lead to a deterministic protocol that takes n+O(log n) bits. I am presenting it carefully in this post, hoping that someone will NOW prove the lower bound OR get an even better upper bound. When I had the upper bound of n+\sqrt{2n}+O(1) I thought it was optimal. Now that I have the upper bound of n+O(log n) I'm not so sure. This is a good example of why lower bounds are so interesting: someone clever may come along with a better algorithm. How do you prove that they can't?

Alice has x, Bob has y. L is a parameter to be determined later. We will assume L divides n. Let m=n/L. Alice views x as x1...xm where each xi has length L. Bob views y as y1...ym where each yi has length L. Alice sends Bob a string A of length L that is NOT any of x1,...,xm. Bob sends Alice a string B of length L that is NOT any of y1,...,ym. (Note: for such strings to exist we will need 2^L > n/L.) So long as it looks like the strings are identical, they alternate sending the next block: Alice sends x1, Bob sends y2, Alice sends x3, Bob sends y4, ... (NOTE: nobody has to send a bit saying "we are tied".) They keep doing this until one of them notices that the blocks are NOT equal. If they never discover a difference then they spend n+2L bits and discover x=y. They both know the max since x=y. If Bob notices that xi > yi then he will send B0. Alice knows that B can't be one of Bob's segments, so she knows why Bob sent this. She will then send the rest of her string that she has not already sent. They both know max(x,y). This took n+3L bits. If Bob notices that xi < yi then he will send B1. Alice knows that B can't be one of Bob's segments, so she knows why Bob sent this. Bob will then send the rest of his string. They both know max(x,y). This took n+4L bits.

What can L be? We needed 2^L > n/L. L=lg n works. So this uses n+4lg n bits. You can do slightly better by taking L=lg n - lglg n. -- Posted By GASARCH to Computational Complexity at 10/13/2008 10:47:00 AM
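The protocol above can be sketched as a simulation. This is a rough cost model, not a real two-party implementation (it charges n+3L+1 bits on any mismatch rather than distinguishing the post's n+3L and n+4L cases, which is still within the n+4L bound), and the helper names are mine:

```python
import itertools

def missing_string(blocks, L):
    """Return a length-L bit string not among `blocks` (exists when 2**L > len(blocks))."""
    used = set(blocks)
    for bits in itertools.product('01', repeat=L):
        s = ''.join(bits)
        if s not in used:
            return s
    raise ValueError("need 2**L > number of blocks")

def protocol_max(x, y, L):
    """Simulate the protocol on n-bit strings x and y (L must divide n).
    Returns (max(x, y) as a string, total bits charged)."""
    n = len(x)
    xs = [x[i:i+L] for i in range(0, n, L)]
    ys = [y[i:i+L] for i in range(0, n, L)]
    A = missing_string(xs, L)   # Alice's reserved "escape" string
    B = missing_string(ys, L)   # Bob's reserved "escape" string
    bits = 2 * L                # announcing A and B
    for i in range(len(xs)):
        bits += L               # next block crosses the channel
        if xs[i] != ys[i]:
            # receiver signals with the escape string plus one bit, then
            # the holder of the larger string sends its remaining blocks
            bits += L + 1 + (len(xs) - i - 1) * L
            return (x, bits) if xs[i] > ys[i] else (y, bits)
    return x, bits              # x == y: both already know the max

# Exhaustive check for n=6, L=2 (2**L = 4 > n/L = 3): the answer is always
# correct and the cost never exceeds n + 4L = 14 bits.
n, L = 6, 2
for xv in range(2 ** n):
    for yv in range(2 ** n):
        x, y = format(xv, '06b'), format(yv, '06b')
        m, bits = protocol_max(x, y, L)
        assert int(m, 2) == max(xv, yv) and bits <= n + 4 * L
print("all pairs OK")
```

Comparing equal-length bit strings lexicographically is the same as comparing the numbers, so the first differing block settles which string is larger.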

#1182 From: Lance <fortnow@...>
Date: Tue Oct 14, 2008 11:28 am
Subject: [Computational Complexity] The Fiscal Crisis


Sitting in our Ivory Towers, our day-to-day lives don't change much even when dramatic events happen in the "real world." Even with the current economic crisis, my basic job of teaching and research must go along as before. Whether P = NP does not depend on the GDP (though the converse might be false). But in the end it does all come down to money. University endowments have shrunk. Alumni donations will surely drop. Lower tax revenues will hurt government support of public universities. Industrial research labs may also take a hit if their corporate parents have financial difficulties. Schools like Northwestern have structured their finances so they can weather short-term shocks in the economy, but a prolonged recession or depression will start to take its toll. New faculty hiring will likely be the first victim in universities. Tenured professors may delay their retirement after taking a hit in their CREF accounts, holding up slots for new hires. Universities worried about their long-term finances may also slow down opening up new positions. On the other hand, we should see increased enrollments at every level, which may push the need to hire more CS faculty. Computer Science has lost its lustre in recent years, but it still reliably produces jobs, and there is a flight to safe majors in times of economic turmoil. People who have trouble finding a good job often go back and get their Master's while they wait out the recession. We should also get more people interested in Ph.D.s as alternatives dry up. My sources tell me we have seen a major drop in Indian IIT graduates going on to Ph.D.s because banks had offered them high-salaried positions. With the banking industry taking the biggest hit, we welcome those IITers back to academics. Finally, in every crisis we look for someone to blame, and some blame the algorithms: Somehow the genius quants – the best and brightest geeks Wall Street firms could buy – fed $1 trillion in subprime mortgage debt into their supercomputers, added some derivatives, massaged the arrangements with computer algorithms and – poof! – created $62 trillion in imaginary wealth. It's not much of a stretch to imagine that all of that imaginary wealth is locked up somewhere inside the computers, and that we humans, led by the silverback males of the financial world, Ben Bernanke and Henry Paulson, are frantically beseeching the monolith for answers. Or maybe we are lost in space, with Dave the astronaut pleading, "Open the bank vault doors, Hal." -- Posted By Lance to Computational Complexity at 10/14/2008 06:28:00 AM

#1183 From: GASARCH <gasarch@...>
Date: Wed Oct 15, 2008 3:52 pm
Subject: [Computational Complexity] Theory Day at University of Maryland


University of Maryland had its 20th THEORY DAY. It was organized by Samir Khuller and Azarakhsh Malekian. There was a theme! All of the talks were on Game Theory/Social Networks/Auctions/Internet stuff. Most theory days do not have a theme, but this one was organized in a different way: Azar knew that some of these people would be in town for INFORMS. Having a theme was GOOD. Usually at a theory day (anywhere) I have to get my head into a certain mode of thinking, and then it changes with every talk. Here my head was in the same mode all day. One odd thing: I got 4 short "introductions to Game Theory". Jason Hartline and Nicole Immorlica gave talks. I had never met them before, so I got to see if they looked like their images in this famous poster. They looked more like themselves than Lance did, but not much. Then Jason turned his back and bent it a bit, and THEN he looked like the poster. Here are the talks:

• Mohammad Mahdian, Yahoo! Research. Externalities in Online Advertising.
• Nicole Immorlica, Northwestern University. The Role of Compatibility in Technology Diffusion on Social Networks.
• Konstantinos Daskalakis, Microsoft Research. Computing Equilibria in Large Games.
• Jason Hartline, Northwestern University.
• Vahab Mirrokni, Google. Submodular Optimization: Maximization, Learning, and Applications.
• Amin Saberi, Stanford University. Game Dynamics, Equilibrium Selection and Network Structure.

They all gave excellent talks. Why was that? Well, for one thing, they were excellent speakers. But also this is a relatively young field, so the work is still close to the motivation (there are fields of math where the motivation has been forgotten over time!). And the motivation is itself easy to explain. The conference was intentionally at the same time as INFORMS so that people who were going there anyway could give talks and/or attend our Theory Day. Kudos to Samir and Azar for organizing it!
-- Posted By GASARCH to Computational Complexity at 10/15/2008 10:49:00 AM

#1184 From: Lance <fortnow@...>
Date: Thu Oct 16, 2008 12:25 pm
Subject: [Computational Complexity] How Much is a Proof?


Back in our grad school days, Noam Nisan told me he preferred incomplete proof write-ups in papers. By being forced to fill in the missing steps, he could really understand what was going on in the proof. I don't agree with Noam. I shouldn't have to struggle to figure out how the author went from point A to point B. I've spent far too many hours trying to understand the logical jump when the author says "Clearly" or "Obviously". On the other hand, I don't want the authors to spell out every small algebraic manipulation either. And it's just completely infeasible to give a fully formal logical proof of even the simplest theorems. So what level of proof should one give? As a general rule you should write for the reader. What would make it easier for him or her to understand your paper? When you leave out some details, are you doing that because they would clutter your proof or because you are trying to save yourself some time? If it is the latter you do no one any favors. Don't make assumptions about your readers. If you use a supposedly "well-known" technique, then either spell it out or make it clear what you are doing, with a reference for those of us unfamiliar with such techniques. And how many times have you read "the full details will appear in the final version" where there are no later versions? Put those details in now. If you hit a proceedings page limit, have a full paper on the web with a footnote in the conference version pointing there. If you do have a technically messy proof, the technicalities often overshadow the very pretty new ideas you needed for the proof. Be sure to also give a proof sketch that brings those ideas to light. But mostly the Golden Rule applies: write your proofs as you would like others to write proofs for you. -- Posted By Lance to Computational Complexity at 10/16/2008 07:24:00 AM

#1185 From: GASARCH <gasarch@...>
Date: Fri Oct 17, 2008 3:36 pm
Subject: [Computational Complexity] Finding the ith largest of n numbers for i,n SMALL


Exactly how many comparisons does it take to find the ith largest of n numbers? For i=1 and i=2 these numbers are known exactly. Beyond that I'm not sure, though I do know that we don't know (say) how many comparisons it takes to find the 15th largest element out of 30. (There is a table in KNUTH, but I could not find a website of these things. If someone knows one, please comment.) The following conjecture, if true, would help speed up computer searches for such algorithms and may lead to interesting math in and of itself. Let V(i,n) be the minimum number of comparisons (worst case) to find the ith largest of n numbers. Conjecture: there is an algorithm for finding the ith largest of n numbers that uses V(i,n) comparisons and begins by comparing x1 to x2, x3 to x4, x5 to x6, ..., xn-1 to xn (if n is even; else end at xn-2 to xn-1). A weaker conjecture, where we allow V(i,n)+1 or V(i,n)+c for some constant c, may be more tractable. -- Posted By GASARCH to Computational Complexity at 10/17/2008 10:35:00 AM
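As an aside on the i=2 case mentioned above: the classic tournament method achieves the known exact value V(2,n) = n + ⌈lg n⌉ − 2, and its first round is exactly the pairwise comparisons the conjecture describes. A sketch (counting comparisons explicitly; assumes at least two distinct inputs):

```python
def second_largest(xs):
    """Find the 2nd largest of distinct values with n + ceil(lg n) - 2
    comparisons: a knockout tournament (n-1 comparisons), then a max over
    the <= ceil(lg n) values that lost directly to the champion."""
    comparisons = 0
    players = [(x, []) for x in xs]      # (value, values it beat directly)
    while len(players) > 1:
        nxt = []
        for i in range(0, len(players) - 1, 2):
            (a, la), (b, lb) = players[i], players[i + 1]
            comparisons += 1
            nxt.append((a, la + [b]) if a > b else (b, lb + [a]))
        if len(players) % 2:
            nxt.append(players[-1])      # odd player out gets a bye
        players = nxt
    champion, beaten = players[0]
    second = beaten[0]                   # runner-up lost directly to champion
    for v in beaten[1:]:
        comparisons += 1
        if v > second:
            second = v
    return second, comparisons

s, c = second_largest([7, 3, 9, 1, 12, 5, 8, 2])
print(s, c)   # 9 9  (n=8: 7 tournament comparisons + 2 more = 8 + lg 8 - 2)
```

The key observation is that the second largest must have lost some comparison directly to the largest, so only the champion's ⌈lg n⌉ tournament opponents need a second look.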


#1187 From: GASARCH <gasarch@...>
Date: Mon Oct 20, 2008 3:47 pm
Subject: [Computational Complexity] An interesting Summation


(Guest post by Richard Matthew McCutchen.) The formula below appears in The TeXbook as a typesetting exercise (I have slightly modified it). The expression has a clever mathematical meaning that is not discussed in the book. Try to find it and prove it! Let

p(n) = lim_{m → ∞, m ∈ Z} Σ_{k=0}^{∞} ( 1 − cos^{2m}( (k!)^n π / n ) )

(Comment from Bill: π is supposed to be pi, the ratio of circumference to diameter of a circle. HTML doesn't really do a good pi; if someone knows how, in HTML, to do a better one, let me know.) -- Posted By GASARCH to Computational Complexity at 10/20/2008 10:45:00 AM
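Since cos^{2m}(t) → 1 as m → ∞ exactly when t is an integer multiple of π (and → 0 otherwise), the k-th summand survives in the limit exactly when n does not divide (k!)^n, and once n divides (k!)^n it divides all later terms. A sketch that evaluates p(n) this way with exact modular arithmetic and checks it against the largest prime factor of n (the answer reported in a later post); function names are mine:

```python
from math import factorial

def largest_prime_factor(n):
    """Direct trial-division computation, for checking (n >= 2)."""
    p, m, d = None, n, 2
    while d * d <= m:
        while m % d == 0:
            p, m = d, m // d
        d += 1
    return m if m > 1 else p

def p_via_formula(n):
    """The k-th summand tends to 0 iff n | (k!)^n, so the sum counts the k
    with n NOT dividing (k!)^n.  Since k! divides (k+1)!, divisibility
    persists once reached: the count is the first such k."""
    k = 0
    while pow(factorial(k), n, n) != 0:   # exact check: n | (k!)^n ?
        k += 1
    return k

for n in range(2, 200):
    assert p_via_formula(n) == largest_prime_factor(n)
print("formula = largest prime factor for n in 2..199")
```

For example, p(12) = 3: the summands for k = 0, 1, 2 survive (12 does not divide 1, 1, or 2^12) and all later ones vanish.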

#1188 From: Lance <fortnow@...>
Date: Tue Oct 21, 2008 11:13 am
Subject: [Computational Complexity] World Series at FOCS


The timing of the FOCS conference in late October often overlaps with the World Series, and on rare occasions they happen in the same city at the same time. Checking back through the 48 previous FOCS conferences, it happened only once before: in 1997, FOCS was in Miami Beach October 19-22, and Game 2 of the World Series was played October 19th, with the Florida Marlins losing to Cleveland at Pro Player Stadium in Miami. A couple of other close calls: In 1992, I had tickets to a potential series in Pittsburgh, but I've told that sad story before. In 1999, FOCS was held in New York October 17-19, and the Yankees hosted Atlanta a week later in the Series. Last year FOCS was held in Providence October 20-23, and just a short drive up I-95 the Red Sox beat the Rockies twice October 24-25. This year we have a perfect overlap. The 2008 FOCS conference runs October 25-28 in downtown Philadelphia, and Games 3, 4 and 5 (if necessary) of the Series between the Phillies and the Tampa Bay Rays will be held October 25-27, three miles south in Citizens Bank Park. But I'll miss the fun and won't make FOCS this year. I'd love to hear from any FOCS attendee who manages to snag World Series tickets and gets to enjoy the two great fall classics. -- Posted By Lance to Computational Complexity at 10/21/2008 06:12:00 AM

#1189 From: GASARCH <gasarch@...>
Date: Wed Oct 22, 2008 1:07 pm
Subject: [Computational Complexity] Weird Sum/ith largest/Wallet-- revisited


In Richard Matthew McCutchen's guest post he challenged the readers to figure out a certain sum. And I challenged them to help me do a better pi in HTML. KKMD (comment 3) is correct: it is the largest prime that divides n. The formal proof is here. On pi: I originally used an & followed by pi, which yields &pi. I was supposed to use & followed by pi and then a semicolon, which yields π. Lance says to use <span style="font-family:times">&pi;</span>, which yields π. Some of the comments on the posting on the ith largest of n inspire some random thoughts from me: Why on earth would anyone be doing a computer search for such algorithms (algorithms to find the ith largest of n with as few comparisons as possible)? One hope is that with enough empirical evidence we may get EXACT values for how many comparisons it takes. Also, for the challenge! But YES, limited practical value. But see the next point. Why do you think your conjecture is true? The known algorithms for finding the ith largest of n take n+(i-1)log n + O(1) comparisons and begin by making comparisons pairwise. For i small this is optimal up to the O(1). So in this realm it seems likely. But what is "i small"? When finding the 10th largest out of 40, is that more like i small, or more like finding the n/4th largest element? Don't know; want to find out. Another reason to do this: when is small "small"? A commenter says there is interesting info in a Tech Report that may be hard to find. The notion of a Tech Report that is hard to find may be unfamiliar to young people. With the web it may be easier to find some unpublished papers than some published ones, depending on who the authors are and what the journals are. (If someone knows where the journal version of Barrington's paper on bounded-width branching programs containing NC1 is online, please let me know. Barrington does not have it on his website or know where it is online.) Lance recently recommended a certain wallet in this blog. On his advice I bought it and it's great. What I wonder is: how big a mover and shaker is Lance? Should they have given him a free wallet, since he influences others? How many others have bought it based on his recommendation? At Maryland Theory Day there was a talk about how sellers should give people of influence discounts, since they will influence others to buy their product. This is not a new idea, but with modern technology it can be better targeted. -- Posted By GASARCH to Computational Complexity at 10/22/2008 08:05:00 AM

#1190 From: Lance <fortnow@...>
Date: Thu Oct 23, 2008 10:45 am


 In Maureen Dowd's column yesterday mentioned that Obama was being accused of reading a book about the end of America written by a fellow'' Muslim. Turns out I read the same book, Fareed Zakaria's The Post-American World. Not so much about the end of America, but the gradual ending of America's economic dominance in the world. Zakaria contrasts America today with the rise and fall of the British empire. The book focuses on China and India, their recent rise and the challenges each faces, as well as suggestions for America to keep their competitive edge. At the University level, Zakaria seems quite bullish on America especially in CS. The situation in the sciences is particularly striking. A list of where the world's 1000 best computer scientists were educated shows that the top ten schools are all American. US spending on R&D remains higher than Europe's, and its collaborations between business and education institutions are unmatched anywhere in the world. American remains by far the most attractive destination for students taking 30 percent of the total number of foreign students globally. All these advantages will not be erased easily, because the structure of European and Japanese university—mostly state-run bureaucracies—is unlikely to change. And while China and India are opening new institutions, it is not that easy to create a world-class university out of whole cloth in a few decades. Here's a statistic about engineers that you might not have heard. In India, universities graduate between 35 and 50 Ph.D.'s in computer science each year; in America, the figure is 1,000. We have seen some countries like Israel create world-class universities out of whole cloth in a few decades. The US did it themselves in the late 19th century. So China and India could have dramatical success at the university level if they make the commitment to resources and change. 
Until recently, the Indians and Chinese have come in large numbers to our graduate programs and have just stayed in the US. Now, for many reasons, not the least of which is the improving economic conditions in both countries, we are seeing more researchers heading back to their native countries, whether it be Turing Award winner Andy Yao or just a large number of Indian scientists who are moving or plan to move back to India. Imagine the changes we've seen at Israeli universities happening in a country with a 10-digit population size. -- Posted By Lance to Computational Complexity at 10/23/2008 05:22:00 AM

#1191 From: GASARCH <gasarch@...>
Date: Fri Oct 24, 2008 8:21 pm
Subject: [Computational Complexity] Who to vote for?

 I have not blogged much on the election because I doubt I can say anything you haven't already heard. I have gathered here reasons for/against the candidates that you might not have heard. They are NOT mine- they are things I've heard or read. If 1/2 of these comments are new to 1/2 of my readers, I'll be happy.
REASONS TO VOTE FOR OBAMA OR AGAINST MCCAIN
I've always wanted a Prez younger than me!
Joe Biden is one of the few non-millionaires in the Senate. Hence he understands the working man.
If Obama wins then Hillary probably won't be able to run for 8 years. (If McCain wins she can run in 4 years.)
In the first debate Obama showed that he was willing to agree with McCain on some things. McCain did not.
McCain does not know how to use email.
I have two bets that Obama will win. (One is with an African-American who thinks the country will never elect an African-American Prez. She hopes she is wrong and will gladly lose the bet.)
The McCain-Palin campaign spent $150,000 on clothes for Sarah Palin. This shows she has no understanding of economics.
REASONS TO VOTE FOR MCCAIN OR AGAINST OBAMA
I've always wanted a Prez older than my grandparents!
Joe Biden is one of the few non-millionaires in the Senate and hence has no understanding of Economics. (Someone at my church told me this.)
I live in the same Condo as John McCain. He voted for me for President of the Condo Association, so I feel I owe it to him to vote for him for President of America.
Sarah Palin is a hockey mom with 5 kids and a job. Hence she understands the working mom.
Obama is a Muslim who has been attending Reverend Wright's Christian Church for 20 years. (NOTE- 10% of Americans think Obama is Muslim. 1% think he is Jewish.)
An intelligent man who I trust agrees with McCain on many things. That man: Barack Obama (First Debate and a few other things).
The Republicans created the mess we're in now, make them clean it up.
REASONS TO VOTE AGAINST BOTH OF THEM
I'm from a non-swing state so my vote does not matter.
The Daily Show, The Colbert Report, Saturday Night Live, and The Capitol Steps won't have much to work with. Obama, McCain, and Biden just aren't that funny. Palin makes political satire redundant. (Alaska is next to Russia and therefore she has Foreign Policy Experience. - is that John McCain? Jon Stewart? John McCain on Jon Stewart's show?)
Who does the Assoc of American Political Satirists endorse? Having read this I still can't tell. -- Posted By GASARCH to Computational Complexity at 10/24/2008 03:20:00 PM

#1192 From: Lance <fortnow@...>
Date: Sat Oct 25, 2008 11:29 am
Subject: [Computational Complexity] Knuth Prize to Strassen

 The ACM announced that Volker Strassen will receive the Knuth Prize for his algorithmic work, most notably on primality, matrix multiplication and integer multiplication. The Knuth Prize is given for major contributions over an extended period of time. The winner gives a Knuth Prize lecture and Strassen will give his at SODA. Strassen's matrix multiplication algorithm is a simple but amazing recursion that beats the easy cubic bound. And the probabilistic algorithm for primality with Solovay drove complexity research into probabilistic computation and helped make modern cryptography possible. -- Posted By Lance to Computational Complexity at 10/25/2008 06:16:00 AM

#1193 From: Lance <fortnow@...>
Date: Sun Oct 26, 2008 12:59 pm
Subject: [Computational Complexity] Sampling FOCS

 Rahul Santhanam reports from Philadelphia. I'm in Philly for FOCS, and will be reporting on the extravaganza for this blog, including the drama and suspense of the talks ("Will there be heckling?" "Will he make an Obama joke?" "Will she break the 20-minute barrier?") and the drunken revelry of the business meeting. Typically, I have a few simple rules for attending talks:
No conflicts with my regular 4 am - noon bedtime.
Attending a double-digit number of talks is just as good as attending no talks at all.
The title better have the word "complexity" in it.
But with my reporting duties as an excuse, I've resolved to discard my old inertial lifestyle and tax my brain with the new and unfamiliar. Facility location, electronic commerce, quantum entangled games – bring it on! You could help me with this. If you're a theory junkie who couldn't make it here and there's some talk you really wanted to hear, let me know and I might be able to give you some idea – the barest idea, or at the very least a fictional idea, of what it was like to be there… -- Posted By Lance to Computational Complexity at 10/26/2008 07:55:00 AM

#1194 From: Lance <fortnow@...>
Date: Sun Oct 26, 2008 7:48 pm
Subject: [Computational Complexity] Pre-Lunch Session

 More from Rahul: A typical FOCS day divides up the same way as a day in a cricket test match: pre-lunch session, afternoon session and post-tea session. And approximately the same fraction of the audience is awake and/or paying attention? I set myself the goal of attending the first talk of the day, which naturally meant that I stumbled in around halfway through Manindra Agrawal's talk on "Arithmetic Circuits: A Chasm at Depth Four" (Agrawal and Vinay). This is one of my favorite papers in the conference: the result is surprising, easy to state and not hard to prove. The authors show that an exponential lower bound on arithmetic circuit size for depth-4 circuits actually implies an exponential lower bound on circuit size for general arithmetic circuits. There are precedents for this kind of depth reduction for arithmetic circuits (unlike for Boolean circuits), e.g.
the Berkowitz-Skyum-Valiant-Rackoff result that polynomial size circuits can be simulated by polylogarithmic-depth circuits, but the nice thing about this result is that it gives an explanation for why most recent progress on arithmetic circuit lower bounds has been in the very low depth regime. Manindra's talk was marred a little by technical glitches, but he has an unparalleled ability to stay cool in a crisis. Then came an excellent talk by Madhur Tulsiani on "Dense Subsets of Pseudorandom Sets" (Reingold, Trevisan, Tulsiani and Vadhan). This is a topic that Luca has said a lot about in his blog, and I certainly can't hope to better that. The final talk in the session was by Timothy Chow on "Almost-natural Proofs", about how the natural proofs framework of Razborov and Rudich ceases to be a barrier to proving circuit lower bounds if it is relaxed slightly. The talk was good, but I was even more intrigued by the speaker. He is unusually wide-ranging in his choice of problems to work on - he comments quite often on the FOM mailing list, and I know he's worked among other things on logics for polynomial time. It's good for our heavily algorithms-and-complexity centered community to have people from outside the mainstream working on our problems. You'll notice that most of the talks discussed so far are complexity-related - old habits die hard. But I did try compensating by going to Mihai Patrascu's talk on "Dynamic Connectivity: Connecting to Networks and Geometry" (Chan, Patrascu and Roditty), about data structures for dynamic connectivity in graphs where both vertex and edge updates are allowed. There did seem to be some elegant ideas involved, but the talk was geared for an audience which knew something already about dynamic data structures, and I didn't follow the details. 
This reminds me of why I don't tend to go to talks too far afield of the areas I work in - presenters do tend to take certain things for granted in their audience, and it makes sense for them to do so, since most members of the audience probably have a reasonable background in the subject of the talk. Also, it wasn't the most enthusiastic of talks, but I am looking forward to Mihai's talks on "Succincter" and "(Data) Structures", both of which concern very natural questions that are interesting even to non-specialists in data structures. Next for me was another algorithmic talk: Chris Umans on "Fast Modular Composition in any Characteristic" (Kedlaya and Umans), which showed that the composition of two polynomials f and g of degree n modulo a third polynomial of the same degree can be computed very efficiently, in time that's nearly linear in n. The proof goes through an earlier reduction by Chris of the problem to simultaneous evaluation of multilinear polynomials on several points; the new contribution is a clever solution to this simultaneous evaluation problem through recursively applying Chinese remaindering. The talk, as is usual with Chris, was crisp and well-organized. That concluded my pre-lunch session – I wouldn't have been alert enough to do justice to the remaining talks. Stay tuned for a report on the rest of the day's play. -- Posted By Lance to Computational Complexity at 10/26/2008 02:46:00 PM

#1195 From: Lance <fortnow@...>
Date: Mon Oct 27, 2008 10:42 am
Subject: [Computational Complexity] Business Meeting

 Another FOCS report from Rahul. The business meeting was compered by Paul Beame. The local chair Sanjeev Khanna spoke. 270 registered this year, as opposed to 250 last year and 220-230 the year before that. Excelsior! The P.C. report was given by the P.C. chair R. Ravi.
Some salient features:
349: not the number of reviewed papers, but rather the number of external reviewers.
June 13, 2008: the Night of the Long Knives, when as many as 150 papers had their hopes quashed.
"Succincter": In an unprecedented double, Mihai's Best Paper Title awardee also wins the Machtey Best Student Paper award.
Dana Moshkovitz and Ran Raz win best paper for "Two-query PCP with Sub-Constant Error".
Milena Mihail gave a presentation on FOCS 2009, which will be held in Atlanta from November 8-10 next year. Volunteers were requested for hosting FOCS 2010. Three hands shot up instantly, and after a long and contentious debate culminating in a beer-bottle fight... Nah. We all got to vote nine days before the big day. David Shmoys was elected vice-chair of the IEEE Technical Committee on Mathematical Foundations of Computing. A couple of major Prize announcements were made; no suspense involved for readers of this blog. Volker Strassen is the recipient of the Knuth Prize, and Spielman and Teng have won the Gödel Prize for their paper on smoothed analysis of linear programming. NSF Report by Sampath Kannan, who's a Division Director in the CCF (Computing and Communications Foundations) program. We theorists seem to have a nice program of our own now directly under CCF called Algorithmic Foundations, which covers most of the traditional STOC/FOCS areas, and has a budget of over $30 million for this year. Grant deadlines coming up pretty soon actually: for the Large grants on October 31, for the Medium grants shortly afterward in November, and for the Small ones in December. There was also some information on relevant cross-cutting funding opportunities. STOC 2009 will be held May 31 - June 2 in Bethesda, Maryland. The submission server is already active. Title and short abstract are due November 10, extended abstract is due November 17. Pierre McKenzie recapitulated CCC 2008 in 30 seconds, and then announced that CCC 2009 would be held in Paris. 
Paris, France, as a matter of fact; excited murmurs from the audience. For a brief moment there complexity theory was cool. David Johnson announced that SODA 2009 would be held January 4 - January 6 in New York, and that Volker Strassen would be giving his Knuth Prize lecture there. Miscellaneous announcements, including a long-overdue archiving of old FOCS proceedings in IEEE Xplore (I believe the only ones left are '60, '61, '63 and '67) and news about the establishment of the Center for Computational Intractability in Princeton, funded by the NSF/CISE Expeditions grant announced earlier on this blog. That was one of the duller business meetings in recent memory. Which is a good thing, of course – it means we're all happy and getting along. -- Posted By Lance to Computational Complexity at 10/27/2008 05:41:00 AM

#1196 From: Lance <fortnow@...>
Date: Mon Oct 27, 2008 12:52 pm
Subject: [Computational Complexity] Post-Lunch Highlights, Day 1

 More from Rahul at FOCS: I realize I could quite easily spend more time writing about the talks than going to them, so I'll keep it shorter. Gil Kalai gave a vastly entertaining talk on "Elections can be Manipulated Often" (Kalai and Friedgut and Nisan). Clearly a topic of current relevance, but Gil bravely resisted making an Obama-McCain reference. The main result of the paper essentially says that the voters' cumulative manipulation power (meaning their capacity to bring about their preferred global outcome by voting in a way that's different from their actual ranking of the candidates) in any "reasonable" election (where the voting scheme is far from yielding a dictatorship, and is insensitive to the names of the candidates) is invariably high, in a setting where voter preferences are modelled by random choices. The main open question from the talk was to prove that the current financial crisis was unavoidable. Mihai Patrascu gave a talk on his paper "Succincter", which won the Machtey best student paper award. His paper greatly advances the state of the art in the field of succinct data structures. For instance, the paper shows how to store N "trits" (numbers that are either 0, 1 or 2) in N log(3) + O(1) bits of memory with only logarithmic query time. The proof uses a clever recursive encoding scheme. Dana Moshkovitz spoke about her Best Paper-winning work with Ran Raz on constructing sub-constant error 2-query PCPs of nearly linear size. This has been a major open problem for quite a while now, and the resolution of this open problem yields improvements to a large family of hardness of approximation results. The proof builds on earlier work by Moshkovitz and Raz constructing sub-constant error low degree tests of nearly linear size. There's been precious little controversy at this conference, so I feel duty-bound to stir something up. There's an asymmetry between the two seminar rooms – one is considerably larger, wider and has better acoustics than the other. 
Is it a coincidence that the more complexity-oriented talks in Session B are in the smaller room, and the algorithms-oriented Session A talks in the larger one? I think not. This is merely the latest instance in the step-sisterly treatment accorded to complexity theory down the ages, right from when Euclid designed his algorithm but failed to analyze its complexity. I urge all right-minded readers to express their outrage at this state of affairs. More seriously, I can't think of a hitch; even lunch yesterday was good. Kudos to Sanjeev Khanna, Sudipto Guha, Sampath Kannan and the student volunteers. -- Posted By Lance to Computational Complexity at 10/27/2008 07:49:00 AM
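A note on the trit-storage bound mentioned above: the following Python sketch is purely illustrative and is NOT Patrascu's data structure — it only shows where the N log₂(3) figure comes from, by comparing the naive two-bits-per-trit encoding with the information-theoretic optimum, and packing trits into a single base-3 integer (which achieves the space bound but not the logarithmic query time of "Succincter").

```python
import math

def naive_bits(n):
    # Two bits per trit: trivial to decode, but wasteful.
    return 2 * n

def optimal_bits(n):
    # Information-theoretic minimum: ceil(n * log2(3)) bits.
    return math.ceil(n * math.log2(3))

def pack_trits(trits):
    # Encode a trit sequence as one big base-3 integer.
    # This hits the optimal space bound, but reading one trit
    # requires touching the whole number -- unlike Patrascu's
    # structure, which supports logarithmic-time queries.
    x = 0
    for t in trits:
        x = 3 * x + t
    return x

def unpack_trit(x, n, i):
    # Recover trit i (0-indexed) of an n-trit packed value.
    return (x // 3 ** (n - 1 - i)) % 3

trits = [0, 2, 1, 2, 0, 1, 1, 2]
x = pack_trits(trits)
assert all(unpack_trit(x, len(trits), i) == t for i, t in enumerate(trits))
print(naive_bits(1000), optimal_bits(1000))  # 2000 vs 1585 bits
```

For N = 1000 trits the naive encoding costs 2000 bits while the optimal encoding needs only 1585; closing the gap between O(1) total redundancy and fast queries is exactly what the paper achieves.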

#1197 From: Lance <fortnow@...>
Date: Tue Oct 28, 2008 1:57 pm
Subject: [Computational Complexity] The Gossip Center

#1198 From: GASARCH <gasarch@...>
Date: Wed Oct 29, 2008 6:57 pm
Subject: [Computational Complexity] Math NAMES

 (A quick note: Howard Karloff is organizing an excursion to the South Pacific during SODA.) And now for today's post: What do the names Pascal, Hilbert and Mittag-Leffler have in common, aside from all being names of math folks? I have AN answer in mind. You may very well give me other ones. I'll be curious if anyone gives me the one I had in mind. -- Posted By GASARCH to Computational Complexity at 10/29/2008 01:55:00 PM

#1199 From: Lance <fortnow@...>
Date: Thu Oct 30, 2008 11:33 am
Subject: [Computational Complexity] Election Special

 A hodgepodge of election stuff, in my last post before November 4. The markets and polls both suggest that the election is not that close. There's no question who will win in my (and Obama's) home state. The senate race here is even less interesting with the Republicans not even bothering to mount a serious campaign against the incumbent and Majority Whip, Dick Durbin. At least the house race in my district is tight with the incumbent, Mark Kirk, a moderate Republican, running against Dan Seals, an up-and-coming African-American. Sounds familiar. Before voting, check out the answers Obama and McCain both gave to Science Debate 2008. Keep in mind that science will likely get a back seat given the tight budget constraints that we will have in the next several years. McCain's promise to freeze most federal spending will be particularly bad for science. In the category of "Good Searching means Less Privacy" comes the Contributions Search Engine on the New York Times site. Type in a relatively unique name like "Fortnow" and you'll discover my $255 donation to the Obama campaign. In full disclosure, I also donated $100 to Obama in the primaries. So why $255? Did I really mean to donate the maximum I could in one unsigned byte? Actually $255 = $1000/4 + $5, with the $5 coming from one of my daughters who wanted to add her own contribution. I still live in a base-ten world. -- Posted By Lance to Computational Complexity at 10/30/2008 06:32:00 AM
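For the record, both readings of that number check out — a throwaway Python sanity check, purely illustrative:

```python
# 255 is indeed the largest value storable in one unsigned byte.
assert 2 ** 8 - 1 == 255

# But the stated arithmetic was base-ten: a quarter of $1000, plus $5.
assert 1000 // 4 + 5 == 255
```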

#1200 From: GASARCH <gasarch@...>
Date: Fri Oct 31, 2008 3:49 pm
Subject: [Computational Complexity] Focs Guest Post: Going to talks outside Complexity

 (Guest post from Rahul Santhanam) As promised, I did do a bit of experimenting this FOCS in terms of attending talks from other areas. Here's an encapsulation of the results: Cryptography - Yevgeny Vahlis spoke about his negative result with Boneh, Papakonstantinou, Rackoff and Waters on black-box constructions of identity-based cryptosystems (cryptosystems in which an arbitrary string can be used as a public key) from trapdoor permutations. ID-based cryptosystems have been extensively studied recently in work of Dan Boneh and others. Security of known constructions is proved in the random oracle model, and it would be interesting to base security on the same assumptions as those used in standard public-key cryptography. This result indicates this might be too much to hope for, at least using standard techniques. Learning - Adam Klivans gave an excellent talk on "Learning Geometric Concepts via Gaussian Surface Area" (with O'Donnell and Servedio). There's been a lot of interesting work recently on learning in the presence of random or adversarial noise. It's known that functions with low noise sensitivity, or even with low Gaussian noise sensitivity, can be learned efficiently in this setting, but these quantities are hard to bound in general. The current work resolves some open problems about learnability of geometric concept classes by using some deep mathematical results relating Gaussian noise sensitivity to Gaussian surface area, and working with the latter more tractable quantity. Games with Entangled Provers - I went to a couple of talks on another currently hot topic: the power of two-prover games with entangled provers. A semi-definite programming formulation due to Tsirelson can be used to show that the optimal value can be computed efficiently for a very simple form of 2-prover games, known as XOR games. 
The first talk that I attended was given by Thomas Vidick on "Entangled Games are Hard to Approximate" (with Kempe and Kobayashi and Matsumoto and Toner) - he showed a reduction from the problem of approximating the value of 2-prover games for classical provers (known to be NP-complete) to the problem of approximating the value of 3-prover games for entangled provers. A complementary talk by Ben Toner on "Unique Games with Entangled Provers are Easy" (with Kempe and Regev), showed that for unique games, the optimal value can actually be approximated efficiently using "quantum rounding" to a semi-definite program. The most interesting open problem in this area seems to be to derive some sort of upper bound on the complexity of approximating the value for general games. Algorithms - I heard Dimitris Achlioptas speak on "Algorithmic Barriers from Phase Transitions" (with my colleague Amin Coja-Oghlan). This paper tries to understand the reason why simple algorithms for constraint satisfaction problems on random instances fail in the region around the threshold, by showing that this "region of failure" coincides with the region where the structure of the solution space changes from being connected to being (essentially) an error-correcting code. Rather intriguing that a computational fact, the success or failure of standard algorithms, is so closely related to a combinatorial artifact. Finally, I went to Grant Schoenebeck's talk on proving lower bounds for the Lasserre hierarchy of semi-definite programming formulations of CSP problems, which is interesting because a number of different algorithmic techniques including the Arora-Rao-Vazirani formulation of Sparsest-Cut can be implemented in the lower levels of the hierarchy. Grant's result uses a connection to the width complexity of resolution, which I found very surprising, but then my understanding of this area is rather naive... 
 All interesting stuff, but I'd actually prefer FOCS to be even broader, and representative of all areas of theory. If that requires multiple sessions, so be it. Currently, FOCS seems to be an algorithms, complexity and combinatorics conference with papers from other areas filtering in unpredictably depending on the composition of the committee. It's very creditable that the standard of papers has remained consistently high (or even improved) over the years. But with several major sub-areas (semantics and verification, concurrency theory, database theory, theory of distributed computing, computational geometry, computational biology etc.) represented barely or not at all, it's quite hard to still make the case that FOCS and STOC are absolutely central to theoretical computer science as a whole. I guess the question is: how much do we value FOCS and STOC being core conferences? -- Posted By GASARCH to Computational Complexity at 10/31/2008 10:47:00 AM


#1202 From: GASARCH <gasarch@...>
Date: Mon Nov 3, 2008 4:04 pm
Subject: [Computational Complexity] Will fivethirtyeight.com use this poll?

 I had my Graduate Class in Communication Complexity VOTE FOR PREZ in a secret ballot. The rules were: They could vote for anyone on the ballot in Maryland (I supplied a ballot with all of the candidates in Maryland), OR write someone in, OR write NONE OF THE ABOVE. Here are the candidates and how many votes they got.
Obama-Biden, Democratic party: 11 votes
McCain-Palin, Republican: 1 vote
McKinney-Clemente, Green: 1 vote
Barr-Root, Libertarian: 0 votes
Nader-Gonzalez, Independent: 1 vote
Baldwin-Castle, Constitution: 0 votes
None of the Above: 1 vote
Speculation: Usually the Republican does better than this, and McCain is one of those Republicans who (perhaps used to) appeal to moderates and even Democrats. So why only one vote? (1) "I used to like McCain before he turned rightward this campaign", (2) My class actually likes Obama. In the past there were people who couldn't stand Kerry or Gore, so while they didn't like Bush, they voted for him anyway. Usually I get one or two Libertarian voters. Why none this year? Speculation: (1) They think of Barr as really being a Republican, (2) They think Obama really does transcend party and ideology. (3) The Libertarians are tired of having their votes not count. (4) They got confused by the butterfly ballot that I used. Our student body is somewhat liberal. Or maybe it's just that the students at Univ of MD interested in applying Ramsey Theory to Communication Complexity are somewhat liberal. One of my colleagues was amazed that McCain got one vote. Of the 15 people in my class, around 10 are from other countries. This could be bad news for McCain. There is that classic saying: as goes Graduate Communication Complexity, so goes the nation. Is it foolish to look at a small poll from an obviously biased source? Perhaps yes, but compare that to the networks yammering on endlessly about small shifts here and there. -- Posted By GASARCH to Computational Complexity at 11/03/2008 10:02:00 AM
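The tallies above are internally consistent. A quick sanity check (Python, with the counts transcribed straight from the post — the dictionary keys are my own labels):

```python
# Votes as reported in the post.
votes = {
    "Obama-Biden (Democratic)": 11,
    "McCain-Palin (Republican)": 1,
    "McKinney-Clemente (Green)": 1,
    "Barr-Root (Libertarian)": 0,
    "Nader-Gonzalez (Independent)": 1,
    "Baldwin-Castle (Constitution)": 0,
    "None of the Above": 1,
}

total = sum(votes.values())
assert total == 15  # matches the 15 students in the class

# Obama's share of ballots cast.
share = votes["Obama-Biden (Democratic)"] / total
print(f"{share:.0%}")  # → 73%
```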
