The Yahoo! Groups Product Blog
- Members: 121
- Category: Algorithms
- Founded: Jan 13, 2003
- Language: English
| Some sad news from Thomas Schwentick.
What we have feared during the last weeks and months came true yesterday:
Lautemann died at the age of 53 from cancer.
Most recently Lautemann had been working on logical characterizations
of complexity classes, but I will remember him most for his beautiful
proof that BPP, the class of languages with efficient
probabilistic computations, is in the second level of the
polynomial-time hierarchy. In 1983 Sipser had shown BPP in the fourth level, and very
soon after Gács and Lautemann independently showed BPP in the second
level. Lautemann gave a very simple combinatorial
proof that I consider one of the prettiest applications of the
probabilistic method to complexity.
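The argument is short enough to sketch here (my paraphrase, not Lautemann's exact notation):

```latex
% Sketch of Lautemann's proof that BPP is in Sigma_2^p (paraphrased).
Amplify the $\mathrm{BPP}$ machine for $L$ so that on inputs of length $n$
it uses $m$ random bits and errs with probability at most $2^{-n}$. Let
$A_x \subseteq \{0,1\}^m$ be the set of random strings on which the
machine accepts $x$. Then
\[
  x \in L \iff \exists z_1,\dots,z_m \in \{0,1\}^m \;
  \forall y \in \{0,1\}^m \;
  \bigvee_{i=1}^{m} \bigl( y \oplus z_i \in A_x \bigr),
\]
a $\Sigma_2^p$ predicate. If $x \in L$ then $|A_x| \ge (1-2^{-n})2^m$,
and a union bound over randomly chosen shifts $z_1,\dots,z_m$ shows some
choice covers all of $\{0,1\}^m$; if $x \notin L$ then
$|A_x| \le 2^{m-n}$, so the $m$ shifted copies cover at most
$m\,2^{m-n} < 2^m$ strings and no choice of shifts works.
```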
Posted by Lance to Computational Complexity at 4/30/2005 07:31:00 AM
| Leonid Khachiyan passed away Friday at the age of 52. Khachiyan was
best known for his 1979 ellipsoid algorithm giving the first
polynomial-time algorithm to solve linear
programming. While the simplex algorithm solved LP well in
practice, Khachiyan gave the first formal proof of an efficient
algorithm in the worst case. The ellipsoid algorithm also has
applications to more general convex programming questions and can
be used to approximately solve semidefinite programs, used for example
in algorithms approximating max-cut.
One of my first exposures to theoretical computer science came in high
school when I read a New
York Times article describing the algorithm. I later learned that the
article has become our standard example of bad science writing,
focusing on an unlikely link to NP-complete problems rather than describing
the important theoretical problem Khachiyan's algorithm does solve.
Mr. Khachiyan's method is believed to offer an approach for the linear
programming of computers to solve so-called "traveling
salesman" problems. Such problems are among the most intractable
in all of mathematics…In the past, "traveling
salesman" problems, including the efficient scheduling of airline
crews or hospital nursing staffs, have been solved on computers using
the "simplex method".
New York Times science writing (and science writing in general) has
vastly improved since those days.
Posted by Lance to Computational Complexity at 5/2/2005 09:10:00 AM
| Some interesting game theory and philosophy from the last couple of NUMB3RS
episodes. Usual spoiler warnings.
In the April 22nd episode Dirty Bomb there were three
suspects who wouldn't talk. Charlie, the mathematician, likened the
situation to Prisoner's
Dilemma and suggested putting the suspects in the same room, which
is usually the wrong thing to do in prisoner's dilemma. What Charlie
did was compute the utility for each suspect cooperating (with each
other and not the FBI) based on family considerations and their
previous record and convinced the one with the most to lose by
cooperating to defect and talk to the FBI. Clever, but I really wonder
if that would work in real life.
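Charlie's calculation can be caricatured in a few lines (all utilities below are hypothetical numbers invented for illustration): estimate each suspect's payoff for staying silent versus talking, then approach the one who gains the most by defecting.

```python
# A toy sketch of Charlie's idea. The utilities are made up: higher is
# better for the suspect; silence is costly for a suspect with a family
# to support, while talking is costly for one with a long record.
suspects = {
    # name: (utility of staying silent, utility of talking to the FBI)
    "A": (-2, -5),
    "B": (-1, -4),
    "C": (-8, -3),  # family considerations make silence expensive
}

def best_target(suspects):
    """Return the suspect who gains the most by defecting."""
    return max(suspects, key=lambda s: suspects[s][1] - suspects[s][0])

print(best_target(suspects))  # C: gains 5 by talking, the others lose 3
```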
Last Friday's episode Sacrifice took a more philosophical
direction. A murdered think-tank computer scientist was developing a
program that measured academic potential based on where someone grew
up, down to a city block. If such a program actually worked, how
should it be used, if at all? And how far should one go to stop it?
Charlie and his physicist friend Larry ruminated on whether scientists
are responsible for how their research gets used, as well as a
discussion on the lonely life of a scientist at a lightly attended
memorial service for the murder victim. The episode also had a physics
joke I don't quite get.
Applied physicists are from Venus; Theoretical physicists wonder why
it spins in the other direction.
I really enjoy those discussions between Charlie
and Larry because they ask some interesting questions and add some
dimension to the public view of mathematicians and scientists.
Posted by Lance to Computational Complexity at 5/3/2005 07:00:00 AM
| While other fields have standardized postdoc programs, computer
science still searches for the right approach for postdocs. While we
have had postdoc positions since I can remember, quite often
researchers have gone straight from Ph.D. to tenure-track positions
particularly in times of high growth in CS departments (early-mid 80s
and mid-late 90s).
We are now seeing a spike in the demand for postdocs for several
reasons, listed below.
Alas we don't have an increase in postdoc supply to go along with this
demand. Many of the industrial research labs have shrunk and hire few
or no postdocs. Meanwhile most US academic departments have no
permanent postdoc programs and CS grants are rarely large enough to
cover the cost of a postdoc (as opposed to Canadian and European
groups which tend to have more postdocs). Just another way the US is
losing strong researchers to other countries.
- A tightening job market means fewer tenure-track jobs so more
people opt to do postdocs to build up their CVs. Several researchers
are even taking second and third postdocs, not long ago a rarity in our field.
- Many students opt to delay a tenure-track position
for a year and do a postdoc first. The commitment goes both ways: if
a department is holding a position for a student then that student is
committing to going to that department. It's not fair to go back on
the job market during that postdoc year. Some departments are becoming
more reluctant to allow the year delay because of bad experiences with
students not fulfilling that commitment.
- More and more students attend graduate school in their home
countries and hope to eventually return to permanent jobs in those
countries but take postdoc positions elsewhere to get a broader view
of the field.
Posted by Lance to Computational Complexity at 5/4/2005 08:49:00 PM
| Science has a special issue
on Distributed High Performance Computing including an article Service-Oriented
Science by Chicago's own Ian Foster. The must-read is the editorial
Endless Frontier Postponed by Ed Lazowska and Dave Patterson.
At a time when global competitors are gaining the capacity and
commitment to challenge U.S. high-tech leadership, this changed
landscape threatens to derail the extraordinarily productive interplay
of academia, government, and industry in IT. Given the importance of
IT in enabling the new economy and in opening new areas of scientific
discovery, we simply cannot afford to cede leadership. Where will the
next generation of groundbreaking innovations in IT arise? Where will
the Turing Awardees 30 years hence reside? Given current trends, the
answers to both questions will likely be, "not in the United States."
Also from the CRA:
The timing of the issue also couldn't be better, given the House
Science Committee will hold a full committee hearing on "The Future of
Computer Science Research in the U.S." on Thursday, May 12th. You can
watch it live on the Science Committee's real-time webcast (also archived).
Posted by Lance to Computational Complexity at 5/6/2005 09:04:00 AM
| A few years ago, one ranking listed Harvard as the number one
engineering school. The schools were ranked by average starting salary
of their graduates and both of Harvard's engineering students that
year got good jobs.
I could do several posts on rankings but let us focus on rankings of
Ph.D. programs in computer science. One cannot give a linear
ordering of departments: One department might have a strong theory
group and another strength in systems. Even within theory, one
department could have strong algorithms while another group has strong
complexity. Even then one's graduate experience depends more on their
individual advisor than the department as a whole.
Nevertheless Americans like statistics and ordering things including
CS departments. There are two major rankings of Ph.D. programs:
The US News and World Report ranking,
last updated in 2002, uses a methodology
of sending questionnaires to department chairs and directors of
graduate study. The NRC ranking
looks at a slew of different statistics but hasn't been updated since
1993. I hear rumors that the NRC will do a new ranking soon.
The US News ranking captures perception of strength instead of
strength itself, often based on one's opinion of a department from a few
years back. On the other hand, the purpose of rankings is perception,
as we wish to be perceived as a strong department. We all complain
about the rankings but they affect us greatly, in recruiting students
(Americans especially use the rankings to choose a Ph.D. program) and
recruiting faculty. When hiring faculty we often rightly or wrongly
give extra weight to Ph.D.s from higher-ranked departments. Deans and
provosts use the rankings to allocate resources as higher rankings
lead to more prestige for the university as well.
For example, if a mid-level department wishes to have the strongest
quality faculty they should hire mostly in one area, as people like
to join groups with people they know, respect and can work with. But
this approach won't help much in the rankings so most departments try
for a broad faculty with likely lower overall quality but a better
ranking.
At least forty departments have a stated goal of being a top-ten
department. The pigeonhole principle guarantees many won't end up happy.
Posted by Lance to Computational Complexity at 5/7/2005 06:10:00 PM
| April Edition
If Cook made the P versus NP question interesting to logicians, Karp
made the question important to everyone else.
Richard Karp, Reducibility Among Combinatorial Problems,
Proceedings of a Symposium on the Complexity of Computer Computations, 1972.
All the general methods presently known for computing the chromatic
number of a graph, deciding whether a graph has a Hamiltonian circuit,
or solving a system of linear inequalities in which the variables are
constrained to be 0 or 1, require a combinatorial search for which the
worst case time requirement grows exponentially with the length of the
input. In this paper we give theorems which strongly suggest, but do
not imply, that these problems, as well as many others, will remain
intractable perpetually.
Karp's paper used Cook's
Theorem that satisfiability was NP-complete to prove 21 other problems NP-complete, including Clique, Integer
Programming, Hamiltonian Cycle, Chromatic Number, Partition and Max
Cut. Many of these problems arise from real-world
optimization problems and Karp showed that if any of them have
efficient algorithms then they all do. Researchers would later extend
Karp's techniques to show hundreds if not thousands of natural
problems are NP-complete.
Karp's paper named the classes P and NP and defined the notion of
reduction (now often called Karp Reduction) where a language A
reduces to a language B if there is a polynomial-time function f such
that x∈A iff f(x)∈B. He defined the notion "polynomial
complete" for the set of problems A in NP such that every other
NP problem reduces to A, a notion we now call NP-complete. Karp also
argued that the definitions are robust to different reasonable
encodings of the problem and different models of Turing machines.
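To make the definition concrete, here is a sketch (my own illustration in Python, not anything from Karp's paper, though Clique is one of Karp's 21 problems) of one of the simplest Karp reductions: Independent Set reduces to Clique via the complement graph. Brute-force deciders are included only to check the x∈A iff f(x)∈B property on a toy instance.

```python
from itertools import combinations

def complement(n, edges):
    """The Karp reduction f: an instance (G, k) of INDEPENDENT SET maps
    to (complement of G, k) for CLIQUE. Clearly polynomial time, and a
    set S is independent in G iff S is a clique in the complement."""
    all_pairs = set(combinations(range(n), 2))
    return all_pairs - {tuple(sorted(e)) for e in edges}

# Exponential-time brute-force deciders, only to test the reduction
# on tiny instances; they are not part of the reduction itself.
def has_clique(n, edges, k):
    return any(all(tuple(sorted(p)) in edges for p in combinations(s, 2))
               for s in combinations(range(n), k))

def has_independent_set(n, edges, k):
    return any(all(tuple(sorted(p)) not in edges for p in combinations(s, 2))
               for s in combinations(range(n), k))

# A path on 4 vertices: {0, 2} is independent, so the complement has a
# 2-clique, exactly as the reduction promises.
n, edges, k = 4, {(0, 1), (1, 2), (2, 3)}, 2
print(has_independent_set(n, edges, k))        # True
print(has_clique(n, complement(n, edges), k))  # True
```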
Karp also alludes to the polynomial-time hierarchy.
If P=NP then NP is closed under complementation and polynomial-bounded
existential quantification. Hence it is also closed under
polynomial-bounded universal quantification. It follows that a
polynomial-bounded analogue of Kleene's Arithmetic Hierarchy becomes
trivial if P=NP.
Karp lists three NP problems whose classification was open at the time:
- LINEAR INEQUALITIES: Basically Linear Programming, shown to be in
P in 1979 by the recently departed Khachiyan.
- NONPRIMES: Shown to be in P in 2002 by Agrawal, Kayal and Saxena.
- GRAPH ISOMORPHISM: Not known to be in P, but if GI is NP-complete then
the polynomial-time hierarchy collapses, which was shown in 1987
by combining results of Goldreich-Micali-Wigderson, Goldwasser-Sipser
and Boppana-Håstad-Zachos.
Posted by Lance to Computational Complexity at 5/9/2005 06:49:00 PM
| Deep into the bowels of the Physical Sciences Library I went to
retrieve the Proceedings of a Symposium on the Complexity of Computer
Computations to track down Karp's famous paper for my last
post. Later I discovered that one of my fellow Chicago professors,
Janos Simon, attended that conference as a young grad student and
still had the proceedings in his office.
The symposium, held March 20-22, 1972 at IBM Yorktown, marked a shift in
theoretical computer science towards algorithms and complexity
and away from logic, automata and grammars. The symposium attracted 225
registrants, on par with our current STOC and FOCS conferences. From
the proceedings:
The symposium dealt with complexity studies closely related to how
computations were actually performed on computers. Although this area
of study has not yet found an appropriate or generally accepted name,
the area is recognized by a significant commonality in problems,
approaches, and motivations. The area can be described and delineated
by examples such as the following.
- Determining lower bounds on the number of operations or steps
required for computational solutions of specific problems such as
matrix and polynomial calculations, sorting and other combinatorial
problems, iterative computations, solving equations, and computer
- Developing improved algorithms for the solution of such problems
which provide good upper bounds on the number of required operations,
along with experimental and theoretical evidence concerning the
efficiency and numerical accuracy of those algorithms.
- Studying the effects on the efficiency of computation brought
about by variations in sequencing and the introduction of parallelism.
- Studying the relative complexity of classes of problems with respect
to lower bounds on computation time. In this effort, specific problems
are classified as having equivalent difficulty of computation; for
example, those problems which can be solved in a number of operations
which is a polynomial function of the size of the input.
The symposium had a panel discussion on the state of the field, with a
panel that included four future Turing Award winners:
Robert Floyd, John Hopcroft, Richard Karp and Michael Rabin. The
proceedings have a transcription of the panel session. Here are some
excerpts.
Ray Miller (Moderator): There are four general questions.
- How is the theory developing from originally being a scattering of
a few results on lower bounds and some algorithms into a more unified
theory?
- What specific examples have been found to demonstrate how real
computer computations were improved from studies of this type?
- Is the progress of numerical-type computations and understanding
of them much ahead of the combinatorial?
- Are there important open questions?
Floyd, responding to the first question: Slowly.
Karp: We need to find a name for our subject. "Computational
Complexity" is too broad in view of the work of Blum and others,
at least until we can gather them into our fold; "Concrete
Computational Complexity" is too much like civil engineering;
"Complexity of Computer Computations" doesn't ring true;
Markov has already taken "theory of algorithms" away from
us…Getting these materials into university curricula, particularly the
undergraduate curriculum, is important and it's going to happen.
There are lots of things that computer people do that aren't
mathematical in the nineteenth century sense. We manipulate strings,
we prove theorems, we retrieve information, we manipulate algebraic
formulas. Looking at the primitives appropriate to these domains, we
can get a much richer class of problems.
Charles Fiduccia: Working against unification is the failure to
specify a model for the computation. There seems to be too much
emphasis on what is being computed and too little emphasis on the model.
Hopcroft (responding to Floyd): You could change "slowly" to
"it's not". Karp has shown many of these problems complete,
the implication being that therefore these problems are hard. That
might not be the case; there could even be algorithms that run in
linear time. On the other hand, the fact that we know so little about
lower bounds provides a lot of interest in the area.
Albert Meyer: The obstacle is not that we don't have a good
formulation of a machine model. We have many different models,
but we can't prove lower bounds for any of them.
Mike Paterson: Suppose somebody were to prove P≠NP; then this would
be of great personal and dramatic interest. But if it were to be
established by a quite traditional, ordinary argument, which it just
happened that nobody had thought of, then we would see in retrospect
that it was the wrong problem.
Numerical and combinatorial algorithmic people, with some small
exceptions, never did integrate well. But Blum and the computational
complexity people did come "into the fold" and algorithms
and complexity would come to dominate theoretical computer science in
the US. The naming issue never did have a satisfactory solution; the
Europeans call it Theory A. And I would be very happy with a
"traditional ordinary" proof of P≠NP but I strongly doubt the
solution is so simple.
As a side note, had I been able to find Karp's paper online I would
never have read about this symposium that marked the new direction of theoretical computer science research.
Posted by Lance to Computational Complexity at 5/11/2005 07:45:00 AM
| One of our graduate students wanted to define a property P of classes C
to hold if we don't know that C has complete sets. Doing so would make the
concept time dependent. IP would have property P in 1989 but not in
1991 after we knew IP=PSPACE. One can easily show "If P=BPP then BPP has
complete sets" but we don't have "If P=BPP then BPP has property P" rather we would have to say "If we know P=BPP then
BPP has property P." Location can also be an issue. If the
Martians have a proof that P=BPP then BPP has property P on Mars but
not on Earth. Temporal
Logic might give us a way to reason about such statements but
basing them on unknown mathematical facts seems strange.
Suppose someone proves the polynomial-time hierarchy collapses. Then
it didn't collapse at that moment; it was always collapsed and in fact was
never a true hierarchy. The only reason we call it a hierarchy today
is because we don't know that it isn't.
In mathematical reasoning we can't know whether a statement is true
unless we have a proof of that statement. However once we have a proof
of a theorem like P≠NP then not only do we have P≠NP now but in
fact P≠NP was always true even before we had the proof. Despite
this we can't help thinking that theorems become true as we see a
proof. A year ago undirected connectivity wasn't in log space and now
it is. However the theorems that were proven before I started graduate
school were all true since the dawn of time.
Posted by Lance to Computational Complexity at 5/12/2005 01:12:00 PM
| The US House Committee on Science held a hearing yesterday on The Future of Computer Science Research in the U.S. You can watch the webcast or read the testimony. The CRA Blog also has it covered.
Thomas Friedman's column today talks about how the US no longer dominates the ACM programming contest.
A couple of links about tenure. An Inside Higher Ed article on the strange policies for tenure decisions (thanks Jeff) and a Chicago Tribune op-ed piece arguing against tenure altogether.
Posted by Lance to Computational Complexity at 5/13/2005 09:11:00 AM
| In my last
post you can find some links and comments about the
tenure system. Let me add a few concerns that don't often get mentioned:
- Tenure keeps academic salaries low: Tenure has a definite
monetary value: when combined with life and disability insurance it
removes nearly all risk of losing future income. A university can then shave
the salary in comparison to other lines of work. I suspect the
shaving is highest in the humanities, where positions outside academia
carry a higher risk of long-term under- or unemployment.
- Tenure allows universities to age discriminate: Because hiring with
tenure requires a large long-term commitment, departments have to hold
candidates for open tenured positions to a much higher standard than
for open assistant professor positions. Often departments will only consider candidates
for the assistant professor position. Without tenure, US law would
prevent holding candidates to different standards because of age for
basically the same position.
- Tenure Jail: Since most jobs
in the US are not secure, one can change careers midstream without
entailing much additional risk. Many tenured professors would be
reluctant to leave their risk-free position for another career, even
if that other opportunity would make them happier and possibly more successful.
Posted by Lance to Computational Complexity at 5/15/2005 07:33:00 PM
| Guest Post by Boaz Barak
The National Science Foundation (NSF) has a program called "Theory of Computing" which is the only program devoted to
funding research in theoretical computer science. We use here "Theoretical Computer Science" in a broad sense,
which includes research in Algorithms,
Computational Complexity, Cryptography, Computational
Learning, Network algorithms, etc. (Of course theoreticians can and do also apply to more application-oriented programs such as Cybertrust.)
Although the ToC program never had a large budget, looking at the projects and people it funded throughout its
history, we can safely say that it had a huge impact much beyond the boundaries of TCS and even beyond the boundaries
of Computer Science at large. Indeed, much of the initial research on field-transforming concepts like Quantum
Computing, Boosting of learning algorithms, Cryptographic Protocols, Interactive Proofs, and more was supported by this
program.
Unfortunately, the budget of the ToC program has been more or less stagnant at approximately $7M per year since 1989 (in dollars without adjusting for inflation, the ToC budget was $6.4M in 1989, $5.1M in 2004, and will be $7.2M in 2005). Needless to say, in these 15 years the costs of research (such as faculty salaries, student stipends, etc.) grew even beyond the general rate of inflation. Also, the overall CS budget at NSF tripled during this time. Thus the
ToC program budget that was merely insufficient in 1990 and dangerously insufficient in 1999
is now at what can only be described as a crisis level. This is a situation that must be corrected if we wish
our field to continue to thrive. Given TCS's achievements in the last few decades and challenges for the future, this should concern not only
theoretical computer scientists.
What can we, TCS faculty in the U.S., do to fix this?
- First of all, educate ourselves about the funding situation
and ways to solve it. If you can, please come to the business meeting at the upcoming STOC, where these issues will be discussed.
- We need to participate more at the NSF: this includes reviewing proposals, participating in panels, and
volunteering for positions. NSF administrators are actually quite appreciative and
supportive of theory, but the situation will not change without active community involvement. In particular, please consider volunteering for the position of director of the ToC program.
- We need to educate others about what we do. This includes bright math-inclined high school kids who could
potentially become TCS researchers, educated adults (including also other scientists), and also other computer scientists.
We need more popular science books, general audience lectures, essays and op-eds.
Posted by Lance to Computational Complexity at 5/16/2005 02:35:00 PM
| George Dantzig passed
away last Friday. In the 1940s Dantzig invented linear
programming and developed the simplex
method for solving LP.
Simplex works well in practice but it remains open whether simplex
runs in polynomial time for worst-case inputs (though see this paper
by Spielman and Teng). Dantzig's death comes
just two weeks after the passing of Leonid Khachiyan who had the first
provably efficient algorithm for linear programming three decades
after Dantzig developed the simplex method. Khachiyan's
ellipsoid algorithm is not at all practical compared with the simplex method.
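The geometric fact simplex exploits is that an LP optimum is attained at a vertex of the feasible polytope, and the method walks between adjacent vertices. A minimal sketch (hypothetical two-variable instance of my own; brute force over constraint-pair intersections rather than simplex's pivoting):

```python
from itertools import combinations

# Hypothetical LP: maximize x + y subject to
#   x + 2y <= 4,  3x + y <= 6,  x >= 0,  y >= 0.
# Each constraint is stored as (a, b, c) meaning a*x + b*y <= c.
cons = [(1, 2, 4), (3, 1, 6), (-1, 0, 0), (0, -1, 0)]

def vertices(cons):
    """Intersect each pair of constraint boundaries (Cramer's rule) and
    keep the feasible intersection points: the LP optimum lies at one of
    these vertices. Simplex walks among them instead of enumerating."""
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue  # parallel boundary lines never meet
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            pts.append((x, y))
    return pts

best = max(vertices(cons), key=lambda p: p[0] + p[1])
print(best)  # (1.6, 1.2): objective value 2.8 beats the other vertices
```

Enumerating all constraint pairs is exponential in higher dimensions, which is exactly why simplex's vertex-to-vertex walk, and later Khachiyan's ellipsoid method, matter.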
Posted by Lance to Computational Complexity at 5/17/2005 07:32:00 PM
| I was 13 when I went to see the first Star Wars movie on opening
weekend in 1977 in New York City when it was just called "Star
Wars" without a subtitle or episode number. Theater staff handed
out buttons saying "May the Force be with you." We had no
idea what that meant. We then entered the theater and saw a great movie.
That first movie remains my favorite of the Star Wars series to date,
with the movie's single tight finale and the "Force" more
mysterious than real. Special effects in movies have gotten so good
that they can no longer wow you like they could back then.
In the early 90's a Chicago professor gave a welcome lecture to the incoming
freshmen and ended by saying "May the Force be with
you". Most of the students had no idea what he was talking about. I felt
old that day.
As the final Star Wars chapter officially opens in the US today, my oldest
daughter is only three years younger than I was when the first movie
arrived. The movie has gotten good reviews and I look forward to
reliving my youth, being with the Force and traveling one last time to
that galaxy far far away.
Posted by Lance to Computational Complexity at 5/19/2005 06:43:00 AM
| Most US universities have ended their academic year and moved into the
summer season. I like summer not so much for the weather (it gets hot
and muggy in Chicago) but for a relaxed research atmosphere. Fewer
courses and, more importantly, virtually no faculty meetings of any kind
give us the time to put some concentrated effort into research.
Summer is also the conference season. We have conferences and
workshops year round but many organizers like to have their
conferences in the summer when they won't conflict with
courses. Instead we have conferences conflicting with each other. Be
careful not to go to too many conferences, as they cut
into your relaxed summer research atmosphere.
I plan to attend at least two conferences this summer: STOC, which starts this
weekend in the beautiful suburbs of Baltimore and, of course, the Conference
on Computational Complexity next month in San Jose. Stop by and say hi if you are there.
It's not summer yet in Chicago. We run in quarters at the University
of Chicago and have two more weeks of classes followed by finals
week. I can go to STOC missing only one day of classes but there were
some conferences and workshops later on I will have to skip for finals
week and graduation.
We get our revenge in the fall where most universities start at the
beginning of September or earlier and our classes don't start until
the last week of September. We hardly see any conferences scheduled in
September, particularly in the US, because it is the beginning of most
universities' semesters. So I use September to visit faculty at other
schools. We used to take vacations in September (crowds are smaller
everywhere) but now the kids have school starting in late August
making our effective summer quite short.
Posted by Lance to Computational Complexity at 5/20/2005 09:02:00 AM
| I'm liveblogging the business meeting. Keep it here. |
Posted by Lance to Computational Complexity at 5/22/2005 06:58:00 PM
| My liveblogging experiment didn't quite work as planned. I seemed to
have lost half of what I wrote and then my battery died. So here is
some basic info from the meeting.
The conference had several good surveys commemorating Larry Stockmeyer
who passed away last summer. Stockmeyer's advisor Albert Meyer gave a
talk describing how they worked together and presenting an interesting
small result from Stockmeyer's thesis: certain sets created through
diagonalization have i.o.-speedup. I also posted the slides and paper
from my Stockmeyer lecture.
- Most of the discussion was on theory funding and on the STOC
republication policy, most of which was covered in my post
from yesterday. Check out the new Theory Matters site advocating
increased theory funding.
- The Gödel
Prize went to Noga Alon, Yossi Matias and Mario Szegedy for their
paper The space
complexity of approximating the frequency moments.
- Omer Reingold and Vladimir Trifonov won the best paper and best
student paper awards respectively for their algorithms for undirected
connectivity.
- Future Conferences: Complexity 2005 in San Jose, California June
12-15. Early registration deadline is Friday. FOCS 2005 in Pittsburgh.
STOC 2006 in Seattle
May 20-23, Complexity
2006 in Prague July 16-20, STOC 2007 and
Complexity 2007 as part of FCRC in San Diego June 9-16 and STOC 2008
will be in Victoria.
- Check out the NP-completeness poster and the new DIMACS Implementation Challenge.
Complexity theory is well represented in this year's conference with some
very nice papers in extractors, derandomization, PCP construction,
hardness amplification and much more. Check out the program to see more.
On Friday and Saturday nights, the STOC hotel hosted proms from local
schools. It's easier to explain baseball to non-Americans than the
concept of a prom where high school students wear fancy clothes and
spend large amounts of money for a single party.
Posted by Lance to Computational Complexity at 5/23/2005 08:18:00 AM
| A Chicago undergrad Amanda Redlich gave a presentation and used the
hyphenation (Complex-ity). Clever. Of course this should never be confused with Simpli-city.
Today's Chicago Tribune has an AP article on the
British craze of a Japanese number game Sudoku. In this puzzle you have
a 9x9 grid subdivided into nine 3x3 subgrids. The goal is to fill the full grid
with numbers 1 through 9 such that each number appears exactly once in
each row, column and subgrid, given some initial partial setting.
As a computational complexity theorist, I immediately wondered
how hard the generalized game is on an n²×n²
grid. A little googling shows the problem is NP-complete, shown in a 2003
thesis of Takayuki Yato at the University of Tokyo. His proof uses
a simple reduction from the Latin Squares problem proved
NP-complete by Charles Colbourn.
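For the standard 9x9 board a simple backtracking search, worst-case exponential as the NP-completeness of the general n²×n² game would lead one to expect, works fine in practice. A minimal sketch:

```python
def solve(grid):
    """Backtracking solver for standard 9x9 Sudoku; 0 marks an empty
    cell. Fills cells in order, trying 1-9 and undoing on dead ends."""
    def ok(r, c, v):
        # v must not already appear in row r, column c, or the 3x3 box.
        if any(grid[r][j] == v for j in range(9)):
            return False
        if any(grid[i][c] == v for i in range(9)):
            return False
        br, bc = 3 * (r // 3), 3 * (c // 3)
        return all(grid[br + i][bc + j] != v
                   for i in range(3) for j in range(3))

    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if ok(r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # undo and try the next value
                return False  # no value fits: backtrack
    return True  # no empty cells left: solved

grid = [[0] * 9 for _ in range(9)]  # the empty puzzle, for demonstration
solve(grid)
print(grid[0])  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```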
Posted by Lance to Computational Complexity at 5/25/2005 01:35:00 PM
| Ravi Kumar and D. Sivakumar, the local organizers of the upcoming
Conference on Computational Complexity in San Jose, ask that I post
the following. I hope to see you all there.
The early registration deadline for Complexity 2005 is 5 pm EDT on
FRIDAY, MAY 27, 2005
(Eastern Daylight Time == 4 hrs behind Coordinated Universal Time (UTC/GMT)).
Please take special note of the time: though the conference is on the
Pacific Coast, early registration ends 5pm Eastern Time.
When you register for the conference, if you are not an IEEE Member but a
SIGACT/EATCS Member, please enter that number (e.g., SIGACT xxxxx)
to qualify for the discounted rate.
Please consider staying at the Conference hotel,
Hyatt Sainte Claire; besides being convenient,
it will help limit the conference expenses.
In San Jose and around the Silicon Valley,
you will experience a unique combination of cultures and cuisines
(American, Asian, European, Mexican) like nowhere else.
There is a large number of restaurants, coffee shops, and bars
within walking distance from the
conference venue; these include highly-rated
upscale restaurants as well as hole-in-the-wall type places that serve
authentic food from around the world.
You could even get
falafels that pass
stringent standards, and
South Indian food
certified by your local organizers as the best outside of Chennai.
If you're one of those poor souls who happens to be vegetarian/vegan at
a theory conference,
relax -- there's Good Karma, White Lotus, and Vegetarian House
within walking distance.
During mid-June, San Jose is an absolutely pleasant place to be, with
daytime highs close to 80 degrees Fahrenheit (about 27 degrees Celsius),
and night time lows near 55 degrees Fahrenheit (about 13 degrees Celsius).
Downtown San Jose, where the conference will be located, has numerous
interesting places: the Cesar Chavez Plaza and the Tech Museum of Innovation
are right across from the hotel. The Center for the Performing Arts (CPA), the
San Jose Repertory Theatre, the San Jose Museum of Art, San Jose State
University, as well as the light rail station, are all within
walking distance. CalTrain station (to go to San Francisco) is only about
a mile away.
The Repertory features
Exceptions to Gravity by
Avner Eisenberg during some of the conference days.
CPA has the Festival of Cultures by the SJ Jazz Society, and an American
Musical Theatre show during some of the conference days.
The Museum of Art (no entry fee!) has the
Blobjects and Beyond: The New Fluidity in Design exhibit during all
conference days. The Tech Museum of Innovation is a
museum that you should absolutely not miss when you're in town; during June,
the IMAX theater there features a limited-screening edition of BATMAN, plus
the Mysteries of the Nile -- be sure to check it out.
If you're bringing children along, they will definitely enjoy the
Children's Discovery Museum,
within walking distance of the conference.
Unfortunately, Major League Soccer's
San Jose Earthquakes are playing
Chivas USA on the road in Los Angeles.
Posted by Lance to Computational Complexity at 5/26/2005 05:32:00 PM
| My cell phone received a breaking news alert yesterday: The
FDA is investigating a link between the impotence drug Viagra and
blindness. The story also made the front page of today's Chicago
Tribune. Look carefully though and you'll notice 38 reports of
blindness among the 23 million Viagra users. Even if the drug
directly caused the blindness, the numbers translate to a 0.00017%
chance of losing your sight using Viagra. Breaking news indeed. You
have a much greater chance of losing your sight by not using safety goggles
in the workplace, and you won't have as much fun in the process.
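The 0.00017% figure is simple arithmetic to verify:

```python
# Numbers from the story: 38 reports of blindness among 23 million users
reports = 38
users = 23_000_000

pct = reports / users * 100
print(f"{pct:.5f}%")  # 0.00017%
```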
This is an example of what I call newspaper odds. If some people's
misfortune appears in the newspapers then the odds are so low that you
really shouldn't worry about it. High school mass shootings. Mad Cow
Disease. Carbon monoxide deaths. No significant need to worry about
any of them.
When deaths become too common to appear in a newspaper then you need
to take notice and act carefully, say with automobile accidents or
AIDS. Of course a cause of death might not appear in a newspaper
simply because it doesn't happen, like recent US major commercial
airline disasters. How do we tell the difference? Celebrities. If a
celebrity dies of AIDS or gets seriously injured in an automobile
accident, newspapers will cover it and remind us that these remain
serious concerns for us all.
Posted by Lance to Computational Complexity at 5/28/2005 06:57:00 AM
| The quality of conference presentations has, on average, much improved
over the past decade or two. Why? Certainly technological improvements
like PowerPoint and advanced LaTeX macros have helped. As our field
gets more specialized, talks in general theory conferences have to
appeal to a wider audience, which tends to improve the presentation. Or
perhaps I'm just remembering only the bad talks from the good old
days.
Despite the increase in quality, I find myself going to fewer and
fewer talks in general theory conferences. I learn much more
talking directly to my fellow computer scientists. As for the
presentations, I can read the papers later.
A fellow computer scientist suggested that we hire a company to
videotape the talks and make them available on the web. A back of the
envelope calculation suggested we could make this happen for about
$10 extra per participant for a reasonably sized conference. I am not
a fan of making talks available on the web. Outside of a conference,
who has time to sit at a computer screen and watch talks? I also worry
about giving people yet another reason not to go to a conference.
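That $10-per-participant estimate divides a one-time videotaping cost over the attendees. A sketch with hypothetical figures (the post gives neither the vendor quote nor the attendance):

```python
# Hypothetical numbers -- neither figure appears in the post
videotaping_cost = 2500   # assumed flat fee for filming and posting the talks
attendees = 250           # a reasonably sized theory conference

extra_fee = videotaping_cost / attendees
print(f"${extra_fee:.0f} extra per participant")  # $10 extra per participant
```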
Remember the most important aspect of a scientific conference is not
the talks and papers but bringing members of the community together.
Posted by Lance to Computational Complexity at 5/30/2005 09:48:00 PM
| Language has never been my strong suit. I didn't speak full sentences
until I was five. I had a 220 point spread between my verbal and math
SAT scores. I fumbled through three years of high school French (which
required some summer school). This knowledge of French was only useful
a couple of times. While I wandered the streets of Paris, a woman asked me
"Quelle heure est-il?" and I knew enough to show her my watch but not
enough to actually tell her the time. Also I saw Secrets & Lies in
France and sometimes the French subtitles made more sense than the
heavily accented English.
During my undergraduate years at Cornell I struggled and gave up on
Spanish. Luckily a linguistics professor had a theory that people who
had trouble learning English early (like me) would have too much
difficulty in picking up a new language, so I could take an intro
linguistics course to cover my language requirement. Pretty cool as we
covered context-free languages simultaneously in linguistics and in my
introduction to theoretical computer science class.
In graduate school my three years of high school French got me out of
the Ph.D. language requirement. If English was not the lingua franca
of our field, I would be in serious trouble. I've always been
impressed how many non-native speakers of English have succeeded in
our field.
I spent an entire year on sabbatical in Amsterdam but only learned
enough Dutch to navigate the supermarkets and order in
restaurants. Most Dutch speak English (and 3-4 other languages) and my
attempts to say most Dutch words usually got responses in
English. Still I definitely missed something: when I left a
conversation the language shifted to Dutch and I couldn't get back in.
Suppose I could retroactively master a single foreign language: which
language should it be? At times I would have liked to know Dutch,
German, Hebrew, Japanese and the occasional French, Spanish, Danish,
Italian and Portuguese. In the future I suspect I would visit
countries speaking Hungarian, Russian, Chinese, Swedish and many
others. I've gotten very good at navigating in countries where I don't
know the language. In most European countries I can pass as a local as
long as I keep my mouth shut.
The University of Chicago has a rather strict TOEFL requirement that
would likely have caused a problem for me had I grown up in, say,
Germany. Our department also has a small foreign language requirement
for the Ph.D. Foreign language requirements made sense in a different
era when papers were written in many languages. I remember a scene in
graduate school where my advisor Mike Sipser and some Russian speaking
students pored over the latest paper by Razborov, translating from the
Russian and hoping to understand Razborov's next great result. But now
with nearly all papers written in English the requirement seems like a
relic from a bygone time. Perhaps we should require every student to
take the test in French, for France still has a few researchers
stubborn enough to keep writing in their native tongue.
Posted by Lance to Computational Complexity at 6/1/2005 08:37:00 AM
| Two great open questions from the early 70's:
- Is P≠NP?
- Who was Deep Throat?
Now that we know
the answer to the latter, can a resolution of P versus NP be far
behind? I certainly hope the proof of P≠NP is not as anticlimactic
as finding out Deep Throat's identity.
Posted by Lance to Computational Complexity at 6/2/2005 06:12:00 AM
| Toda's famous theorem
states that the polynomial-time hierarchy reduces to
counting problems (in complexity terms PH ⊆ P^#P). His proof
uses two lemmas:
- PH ⊆ BPP^⊕P
- BPP^⊕P ⊆ P^#P
Here is a straightforward proof of the first lemma using relativizable
versions of previously known results.
1. ⊕P^⊕P = ⊕P (Papadimitriou-Zachos)
2. NP ⊆ BPP implies PH ⊆ BPP (Zachos, among others)
3. NP ⊆ BPP^⊕P (follows easily from Valiant-Vazirani)
4. NP^⊕P ⊆ BPP^(⊕P^⊕P) (relativize 3 to ⊕P)
5. NP^⊕P ⊆ BPP^⊕P (apply 1)
6. NP^⊕P ⊆ BPP^⊕P implies PH^⊕P ⊆ BPP^⊕P (relativize 2 to ⊕P)
7. PH^⊕P ⊆ BPP^⊕P (use 5 and 6)
8. PH ⊆ BPP^⊕P (immediate corollary of 7)
We often call results like Zachos (2 above) a "pigs
can fly" theorem because we don't believe the assumption, in this
case that NP is in BPP. This proof shows that relativization can give
pigs wings and lead to some interesting containments.
Posted by Lance to Computational Complexity at 6/3/2005 06:29:00 AM
| An old math joke:
Three friends from college went on to become a doctor, a lawyer and a
mathematician. They met back at a reunion and the discussion turned to
whether it was better to have a wife or a mistress.
The doctor said "a wife" because having a monogamous
relationship limited the risk of disease.
The lawyer said "a mistress" to avoid all of those nasty
legal obligations of marriage.
The mathematician said "Both." "Both?" echoed the
doctor and lawyer simultaneously. The mathematician responded "Of
course both. That way your wife thinks you are with the mistress, the
mistress thinks you are with the wife and finally you have time
to do some math."
In that vein, to everyone at the Oberwolfach Complexity workshop:
I wish I could attend but I have a conflict with the ACM Conference on Electronic
Commerce. To everyone at EC: sorry I couldn't be there, but there is
a complexity workshop in Oberwolfach. Now leave me alone and let me do
some math.
Posted by Lance to Computational Complexity at 6/6/2005 09:24:00 AM
| Should you have jokes in talks? Too much humor can detract from your
real work but a little laughter can lighten up an otherwise dry
presentation. You must use jokes with care. You should avoid any
offensive jokes: nothing sexist, racist or homophobic, and no sexual
innuendo. Many jokes are funny only in context and in a major
conference it will be hard to find context with people from different
religions, countries, backgrounds and many of whom do not have English
as their native language.
Some topics to be careful with:
- Popular Culture: Most scientists, even many American scientists,
have no clue what occupies the minds of most Americans. Even a Michael
Jackson joke would likely fall flat at our conference. A Star Wars
joke might work on a majority of our crowd but too many of them feel
that anything popular should be ignored. One exception is
children's popular culture: not that anyone likes Barney, but you can't
avoid him, especially if you have young kids.
- Politics: Since our
field lies in such a narrow band in American politics, political jokes
are fine as long as they sit in this band (i.e., making fun of Bush and
his cronies). But a seemingly harmless joke outside this band will be
considered "offensive". I once talked about a paper by
Allender and Gore and said "but this is not the Gore that
invented the internet." It didn't go over very well.
- The P versus NP problem: Some things are too important to joke about.
What can you joke about? Make fun of yourself and your research
(without insulting others' research). Make fun of your friends if they
can take a joke and other people know who they are. Make fun of George
Bush, Donald Rumsfeld and the religious right. Make fun of the French
(okay, maybe you shouldn't make fun of the French, though they are such
an easy target). Most of all just make fun and keep your talk
entertaining.
Posted by Lance to Computational Complexity at 6/7/2005 10:17:00 PM
| Jeff Erickson makes an important point in his post
on the SoCG (Computational Geometry) business meeting. Links and emphasis are his.
Finally, and most importantly, there was no discussion of the theory community's efforts to
increase NSF funding for theoretical computer science, as there
was at the (also
beer-free) STOC business meeting. One question in particular was
never asked: Are we computational geometers still even part of the
theory community? The answer should be a resounding
NO!, followed by a slap to the back of the head -- of
course computational geometry is part of theory! Look, we have
big-Oh notation! Unfortunately, reality seems to disagree. None of
the new material on TheoryMatters mentions
computational geometry at all, although it does mention another border
community: machine learning. With few exceptions, the computational
geometry community rarely submits results to STOC and FOCS; this was
not true ten years ago. Lots of geometric algorithms are published at
STOC/FOCS by people outside the SOCG community, but nobody
calls them computational geometry. (Sanjeev Arora's TSP
approximation algorithms are the most glaring example.) For many
years, computational geometry has been funded by a different NSF
program than the rest of theoretical computer science. (This worked
to our advantage when graphics was getting lots of money, but that
advantage is now gone.) At one infamous SODA program committee
meeting a few years ago, one PC member remarked that nobody at SODA
was interested in computational geometry, they have their own
conference, they should just send their results there. (This
declaration led another PC member to resign.) Apparently, the divorce
has been a complete success.
Not just computational geometry: the COLT (Computational Learning
Theory) and LICS (Logic in Computer Science) communities also used to
send their best papers to STOC/FOCS but now we see few of their papers
in the standard theory conferences. As the theory community grew larger
and broader, the STOC and FOCS conferences started to emphasize
certain areas in theory. Those areas which were not greatly
represented felt some resentment and started putting more and more
emphasis on their own specialty conferences, in some cases eventually
abandoning STOC and FOCS altogether.
The Conference on Computational Complexity was started in 1986 as the
Structure in Complexity Theory Conference (Structures) by
researchers who felt their interests in complexity were not being well
represented in STOC and FOCS. This view became
self-fulfilling—sometimes very good papers would be turned down from
STOC and FOCS because they were considered a "better fit"
for Structures. In response we changed the name in
1996 and brought a broader view in complexity to the conference
(though not without some controversy) and tried to work our way back into the
theory mainstream.
Other conferences like COLT, LICS and SoCG have moved in the other
direction. Note that SoCG also decided not to join the Federated
Conference in 2007 while both STOC and Complexity will be there. I
don't expect to see COLT or LICS at FCRC either.
What can we do, if anything? STOC and FOCS cannot properly cover the broad
range of areas that have ever been considered theory. Unless we have a
major restructuring of how the general theory conferences operate, we
will continue to shrink the vision of theory as the area continues to grow.
Posted by Lance to Computational Complexity at 6/8/2005 06:13:00 PM
| The University of Chicago has four
graduation convocations in the spring quarter spread throughout
today and tomorrow. The first session (mostly law students) has just
marched past my office window. I will march in the second session this
afternoon which includes the liberal arts graduate students.
My Ph.D. student Rahul Santhanam (co-advised with Janos Simon) will
receive his diploma this afternoon. He did his thesis work on time
hierarchies and next year will be a postdoc working with Valentine
Kabanets at Simon Fraser University in Vancouver. Rahul is officially
my fifth Ph.D. student following Carsten Lund, Lide Li, Sophie
Laplante and Dieter van Melkebeek, all of whom graduated during my
first stint at Chicago.
Also from our theory group, Daniel Stefankovic graduates today. He did
his thesis on "Curves on Surfaces, String Graphs, and Crossing
Numbers" and will be an assistant professor at the University of
Rochester in the fall.
Call me a romantic but I really enjoy the pageantry of the graduation
ceremony. I enjoy putting on the gown and the hood (even with those
drab MIT colors), marching past the parents as a member of the
faculty, and seeing the students, especially my own students, come one
by one to receive their degrees. Chicago has a wonderful ceremony
led by bagpipes in the front of the procession and the nice tradition
of rarely having outside speakers (a major exception was Bill
Clinton during his presidency). The ceremony was even more impressive
when it was held in the Rockefeller
Chapel but even with four ceremonies the chapel is not large
enough to hold all the family members who want to attend.
Posted by Lance to Computational Complexity at 6/10/2005 09:47:00 AM
| Howdy from the 20th
IEEE Conference on Computational Complexity in San Jose,
California. Last night we had a short business meeting with beer and
wine but without much controversy. Dieter van Melkebeek was elected to
the organizing committee. Next year's conference will be held in
Prague July 16-20 and in 2007 we will join STOC and many other
conferences at the Federated Computing Research Conference (FCRC) June
9-16 in San Diego. During the Program Committee Chair Report, Luca
Trevisan made the point that even by theoretical computer science
standards, the computational complexity conference has a small
female representation. Something to keep in mind.
My favorite talk on the first day came from the best student paper
winner, Ryan Williams on Better Time-Space Lower
Bounds for SAT and Related Problems though I'm a bit biased since
he's improving on some of my earlier work. He shows SAT cannot be
solved by a random-access machine using n^c time and
n^(o(1)) space for c slightly larger than the square root of
3 (about 1.732), improving on the previous lower bound of 1.618. He
had several clever ideas, recursing on the previous techniques. One can
hope to extend these techniques to push the lower bound to any
c<2; above 2 you seem to lose any advantage from doing recursion.
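A quick numeric check of the two exponents mentioned above (the earlier 1.618 bound is the golden ratio):

```python
import math

# Previous lower-bound exponent: the golden ratio (about 1.618)
phi = (1 + math.sqrt(5)) / 2
# Williams' improved exponent sits slightly above sqrt(3) (about 1.732)
new_bound = math.sqrt(3)

print(round(phi, 3))        # 1.618
print(round(new_bound, 3))  # 1.732
```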
Today Manuel Blum gave an invited talk taking "steps towards a
mathematical theory of consciousness." More on that and the rest of the
conference later.
Posted by Lance to Computational Complexity at 6/13/2005 08:39:00 AM