
Kernel Lower Bounds


Published on March 7, 2014

Author: ASPAK2014

Source: slideshare.net


Lower Bounds on Kernelization
Venkatesh Raman
Institute of Mathematical Sciences, Chennai
March 6, 2014

Some known kernelization results

Linear: MaxSat – 2k clauses, k variables.
Quadratic: k-Vertex Cover – 2k vertices but O(k^2) edges.
Cubic: k-Dominating Set in graphs without C_4 – O(k^3) vertices.
Exponential: k-Path – 2^{O(k)}.
No kernel: k-Dominating Set is W-hard, so it is not expected to have a kernel of any size.

In this lecture, we will see some techniques to rule out polynomial kernels.

OR of a language

Definition. Let L ⊆ {0,1}^* be a language. Then define
Or(L) = {(x_1, …, x_p) | ∃i such that x_i ∈ L}.

Definition. Let t : ℕ → ℕ \ {0} be a function. Then define
Or_t(L) = {(x_1, …, x_{t(|x_1|)}) | |x_j| = |x_1| for all j, and ∃i such that x_i ∈ L}.

Distillation

Let L, L' ⊆ {0,1}^* be a pair of languages and let t : ℕ → ℕ \ {0} be a function. We say that L has a t-bounded distillation algorithm if there exists a polynomial-time computable function f : {0,1}^* → {0,1}^* such that
f((x_1, …, x_{t(|x_1|)})) ∈ L' if and only if (x_1, …, x_{t(|x_1|)}) ∈ Or_t(L), and
|f((x_1, …, x_{t(|x_1|)}))| ≤ O(t(|x_1|) log t(|x_1|)).

Fortnow–Santhanam

Theorem (FS 09). Suppose that for a pair of languages L, L' ⊆ {0,1}^*, there exists a polynomially bounded function t : ℕ → ℕ \ {0} such that L has a t-bounded distillation algorithm. Then L̄ ∈ NP/poly. In particular, if L is NP-hard, then coNP ⊆ NP/poly.

Outline of the proof of the Fortnow–Santhanam theorem

Start with an NP-complete problem L with A, a t-bounded distillation algorithm.
Use A to design an NDTM that, with a "polynomial advice", can decide L̄ in polynomial time.
L̄ ∈ NP/poly ⇒ coNP ⊆ NP/poly, and we get the theorem!

Filling in the details

For the proof, we define the notions needed and the requirements.
Let |x_i| = n for all i ∈ [t(n)].
Let α(n) = O(t(n) log t(n)).
L̄_n = {x ∈ Σ^n : x ∉ L}, and L̄'_{≤α(n)} = {y : |y| ≤ α(n), y ∉ L'}.
Given any (x_1, x_2, …, x_{t(n)}) ∉ Or_t(L) (i.e., x_i ∈ L̄_n for all i ∈ [t(n)]), A maps it to some y ∈ L̄'_{≤α(n)}.
We want to obtain a set S_n ⊆ L̄'_{≤α(n)} with |S_n| polynomially bounded in n such that:
If x ∈ L̄_n, then there exist strings x_1, …, x_{t(n)} ∈ Σ^n with x_i = x for some i such that A(x_1, …, x_{t(n)}) ∈ S_n.
If x ∉ L̄_n, then for all strings x_1, …, x_{t(n)} ∈ Σ^n with x_i = x for some i, A(x_1, …, x_{t(n)}) ∉ S_n.

How will the nondeterministic algorithm work?

Having S_n as advice gives the desired NDTM, which, given x with |x| = n, checks whether x ∈ L̄ in the following way:
Guess t(n) strings x_1, …, x_{t(n)} ∈ Σ^n.
Check whether one of them is x.
Compute A(x_1, …, x_{t(n)}) and accept iff the output is in S_n.
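The acceptance check above can be simulated deterministically by brute force over all possible guesses, which is feasible only at toy sizes. In the sketch below, `distill` (a bitwise-OR map for the toy language of strings containing a "1") and the advice set `S_n` are illustrative stand-ins, not part of the lecture's construction:

```python
from itertools import product

def accepts(x, t, distill, S_n):
    """Brute-force stand-in for the NDTM: accept iff SOME guess of t
    strings of length |x|, one of which equals x, is mapped by the
    distillation algorithm into the advice set S_n."""
    n = len(x)
    all_strings = ["".join(bits) for bits in product("01", repeat=n)]
    for guess in product(all_strings, repeat=t):
        if x in guess and distill(guess) in S_n:
            return True
    return False

# Toy language L = binary strings containing a "1"; the bitwise OR maps a
# tuple to the all-zero string iff every component lies outside L.
def distill(xs):
    return "".join("1" if any(x[i] == "1" for x in xs) else "0"
                   for i in range(len(xs[0])))

S_n = {"000"}  # advice: the single "no"-side output of the toy distillation
print(accepts("000", 2, distill, S_n))  # True:  "000" is outside L
print(accepts("010", 2, distill, S_n))  # False: every tuple containing "010" maps outside S_n
```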

How to get S_n

A : (L̄_n)^{t(n)} → L̄'_{≤α(n)}.
A string y ∈ L̄'_{≤α(n)} covers a string x ∈ L̄_n if there exist x_1, …, x_{t(n)} ∈ Σ^n with x_i = x for some i and A(x_1, …, x_{t(n)}) = y.
We construct S_n by iteratively picking the string in L̄'_{≤α(n)} that covers the largest number of uncovered instances in L̄_n, until no strings are left to cover.
Consider one step of the process, and let F be the set of uncovered instances in L̄_n at the start of the step.
By the pigeonhole principle, there exists a string y ∈ L̄'_{≤α(n)} such that A maps at least |F|^{t(n)} / |L̄'_{≤α(n)}| of the tuples in F^{t(n)} to y.
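The greedy construction can be sketched at toy scale. Here `hard_no` plays the role of L̄_n, `outputs` of L̄'_{≤α(n)}, and `distill_or` is a made-up map, so this only illustrates the covering loop (and it assumes every input is coverable, as the pigeonhole argument guarantees in the real proof):

```python
from itertools import product

def distill_or(xs):
    """Toy stand-in for A: bitwise OR of the tuple."""
    return "".join("1" if any(x[i] == "1" for x in xs) else "0"
                   for i in range(len(xs[0])))

def build_advice(hard_no, outputs, t, distill):
    """Greedily pick, in each step, the output string covering the most
    still-uncovered inputs; stop when everything is covered."""
    S, uncovered = set(), set(hard_no)
    while uncovered:
        covers = {y: set() for y in outputs}
        for tup in product(hard_no, repeat=t):
            y = distill(tup)
            if y in covers:
                covers[y].update(tup)  # y covers every coordinate of tup
        best = max(outputs, key=lambda y: len(covers[y] & uncovered))
        S.add(best)
        uncovered -= covers[best]
    return S

print(build_advice(["00", "01", "10", "11"], ["00", "01", "10", "11"], 2, distill_or))
# → {'11'}: the all-ones output covers every input in a single step
```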

How to get S_n (cont.)

The tuples mapped to y consist of covered strings only, so at least (|F|^{t(n)} / |L̄'_{≤α(n)}|)^{1/t(n)} = |F| / |L̄'_{≤α(n)}|^{1/t(n)} strings in F are covered by y in each step.
Restated: at least a φ(n) fraction of the remaining set is covered in each iteration, where
φ(n) = 1 / |L̄'_{≤α(n)}|^{1/t(n)} ≥ 1 / 2^{(α(n)+1)/t(n)},
since there are fewer than 2^{α(n)+1} binary strings of length at most α(n).
There were at most 2^n strings to cover at the start. So the number of strings left to cover after p steps is at most (1 − φ(n))^p · 2^n ≤ 2^n · e^{−φ(n)·p}, which is less than one for p = O(n/φ(n)).
So the process ends after O(n/φ(n)) ≤ O(n · 2^{(α(n)+1)/t(n)}) steps, which is polynomial in n since α(n) = O(t(n) log t(n)) and t is polynomially bounded.
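The decay bound can be sanity-checked numerically; the values of n and φ(n) below are arbitrary toy choices, not derived from any particular language:

```python
import math

n, phi = 50, 0.01                     # arbitrary toy values
p = math.ceil(n * math.log(2) / phi)  # steps needed so that 2^n * e^{-phi*p} < 1
remaining = (1 - phi) ** p * 2 ** n   # upper bound on strings still uncovered
print(p, remaining < 1)               # prints: 3466 True
```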

Take away

A few comments about the theorem:
coNP ⊆ NP/poly implies PH = Σ_3^p.
The theorem gives us the collapse even if the distillation algorithm is allowed to be co-nondeterministic.
The main message: if you have t(n) instances of size n, you cannot in polynomial time produce an instance of size O(t(n) log t(n)) that is equivalent to the Or of them.

How to use the theorem to prove kernel lower bounds

We know that NP-complete problems cannot have a distillation algorithm unless coNP ⊆ NP/poly.
We want to define an analogue of distillation that produces an instance (x, k) of a parameterized problem L', starting from many instances of an NP-complete language L.
We call such an algorithm a composition algorithm. We will define it formally on the next slide.
The goal is that a composition of an NP-complete language L into L', combined with a kernel of a certain size for L', gives us a distillation of L.
So, if we can show that a composition algorithm exists from L to L' with the desired properties, then L' cannot have a kernel of that size.

Weak d-composition

Definition (Weak d-composition). Let L̃ ⊆ Σ^* be a set and let Q ⊆ Σ^* × ℕ be a parameterized problem. We say that L̃ weakly d-composes into Q if there is an algorithm C which, given t strings x_1, x_2, …, x_t, takes time polynomial in Σ_{i=1}^t |x_i| and outputs an instance (y, k) ∈ Σ^* × ℕ such that the following hold:
k ≤ t^{1/d} · (max_{i=1}^t |x_i|)^{O(1)};
the output is a YES-instance of Q if and only if at least one instance x_i is a YES-instance of L̃.

Theorem
Let L̃ ⊆ Σ^* be a set which is NP-hard. If L̃ weakly d-composes into the parameterized problem Q, then Q has no kernel of size O(k^{d−ε}) for any ε > 0, unless NP ⊆ coNP/poly.

Proof of the theorem

Theorem
Let L̃ ⊆ Σ^* be a set which is NP-hard. If L̃ weakly d-composes into the parameterized problem Q, then Q has no kernel of size O(k^{d−ε}) for any ε > 0, unless NP ⊆ coNP/poly.

Proof. Let |x_i| = n for all i ∈ [t(n)] in the input of the composition. After applying the kernelization to the composed instance, the size of the instance we get is
O((t(n)^{1/d} · n^c)^{d−ε}) = O(t(n)^{1−ε/d} · n^{c(d−ε)})
= O(t(n))   (for t(n) sufficiently large)
= O(t(n) log t(n)),
so composing and then kernelizing is a t-bounded distillation of L̃, and the Fortnow–Santhanam theorem applies.
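The exponent arithmetic in the proof can be spot-checked numerically. The values of d, ε, c and the exponent a below are arbitrary; the point is that once t(n) = n^a is a large enough polynomial, the kernelized size t^{1−ε/d} · n^{c(d−ε)} falls below t:

```python
# A kernel of size O(k^{d-eps}) applied to k <= t^{1/d} * n^c gives
# size (t^{1/d} * n^c)^{d-eps} = t^{1-eps/d} * n^{c(d-eps)}.
d, eps, c = 3, 0.5, 2
a = 60  # any a > c * (d - eps) * d / eps = 30 works here
for n in [10, 100, 1000]:
    t = float(n) ** a
    kernel_size = (t ** (1 / d) * n ** c) ** (d - eps)
    assert kernel_size < t  # o(t), hence certainly O(t log t)
print("ok")
```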

Some comments about composition

In composition, we asked for the parameter k to be at most t^{1/d} · n^{O(1)}. That ruled out kernels of size k^{d−ε}.
What if we can output an instance with k = t^{o(1)} · n^{O(1)}? Then we can rule out kernels of size k^{d−ε} for ALL d! We call such an algorithm just "composition".
Since the Fortnow–Santhanam theorem allows co-nondeterminism, we can also use coNP compositions for proving lower bounds.
Sometimes getting a composition from arbitrary instances of a language can be difficult. Some structure on the input instances helps to get a composition (next slide).

Polynomial equivalence relation

Definition (Polynomial equivalence relation). An equivalence relation R on Σ^* is called a polynomial equivalence relation if the following two conditions hold:
1. There is an algorithm that, given two strings x, y ∈ Σ^*, decides whether x and y belong to the same equivalence class in (|x| + |y|)^{O(1)} time.
2. For any finite set S ⊆ Σ^*, the equivalence relation R partitions the elements of S into at most (max_{x∈S} |x|)^{O(1)} classes.

What to do with a polynomial equivalence relation

The equivalence relation can partition the input on the basis of different parameters. These equivalence classes can be used to give the input of the composition a nice structure. Helpful choices are often partitions into instances with the same number of vertices, or the same requested solution size, etc.
Then all we need to do is come up with a composition algorithm for instances belonging to the same equivalence class. Since there are only polynomially many equivalence classes, in the end we can just output an instance of Or(L').
The next slide is a nice illustration of this method by Michał Pilipczuk.
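As a sketch of this recipe, instances of a toy graph problem can be grouped by (number of vertices, solution size), a typical polynomial equivalence relation; the per-class `compose_same_class` callback is a hypothetical placeholder for a real composition algorithm:

```python
from collections import defaultdict

def compose(instances, compose_same_class):
    """Partition (graph, k) instances into equivalence classes keyed by
    (vertex count, solution size), compose within each class, and return
    one composed instance per class; the final output would be the OR
    over this polynomially long list."""
    classes = defaultdict(list)
    for graph, k in instances:
        classes[(len(graph), k)].append((graph, k))
    return [compose_same_class(group) for group in classes.values()]

# Toy usage: graphs as adjacency dicts; the placeholder just records
# each class as (n, k, number of instances in the class).
insts = [({0: [1], 1: [0]}, 1),
         ({0: [], 1: []}, 1),
         ({0: [1], 1: [0], 2: []}, 2)]
print(compose(insts, lambda group: (len(group[0][0]), group[0][1], len(group))))
# → [(2, 1, 2), (3, 2, 1)]
```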

[Diagram from Michał Pilipczuk's no-poly-kernels tutorial (slide 11/31): an OR-SAT instance is split into many instances of an NP-hard language L̃; the instances are grouped into equivalence classes by parameter value 1, 2, …, k; each class is composed (cmp) into one instance of L' and kernelized (kern) to size poly(k); the results together form an OR-L' instance.]

Take away

We use compositions to rule out polynomial kernels.
A composition from an NP-hard problem L to a parameterized problem L' gives kernelization hardness for L'.
k = t^{o(1)} · n^c ⇒ no polynomial kernel.
k = t^{1/d} · n^c ⇒ no kernel of size k^{d−ε}.
We can make use of equivalence classes to give structure to the input of the composition.
Examples on the board!

Thank You!
