
Problem With Proof?

  • 17-08-2011 2:51am
    #1
    Registered Users, Registered Users 2 Posts: 3,038 ✭✭✭


    Questions are offered when you see a [☺]:

    The following theorem, the "basic lemma" of the calculus of variations,
    appears on page 1 of this book:

    "If f is a continuous function in [a,b], with a < b, s.t. ∫ₐᵇη(x)f(x)dx = 0 for
    an arbitrary function η continuous in [a,b] subject to η(a) = η(b) = 0 then
    f(x) = 0 in [a,b]"

    If you read the proof you'll see they go ahead & specify the function η as
    (x - x₁)(x₂ - x) & prove the claim using that, but [1] doesn't that technically
    prove the theorem only for this particular function, not for any "arbitrary"
    function? I see how this easily extends to positive functions though &
    obviously negative ones too.

    But [2] if we arbitrarily choose η to be the zero function s.t. η(x) is zero
    on [a,b] then f need not equal zero on [a,b] to satisfy ∫ₐᵇη(x)f(x)dx = 0.
    My concern here is that if we were to trust the method of proof used in
    the book for η = 0 then we'd conclude f = 0 when it need not be.
    If, on [a,b], η(x) = 0 & f(x) = 2x then ∫ₐᵇη(x)f(x)dx = ∫ₐᵇ0·2xdx = 0 but 2x ≠ 0.
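    A quick Riemann-sum sketch of this point (illustrative code, not from the book; the choice f(x) = 2x matches the example above): with η ≡ 0 every term of the sum is zero for any f whatsoever, so this single choice of η tells us nothing about f.

```python
# With eta identically zero, the Riemann sum of eta(x) * f(x) vanishes
# term by term for ANY f, so eta = 0 cannot distinguish f = 0 from f != 0.
def riemann(eta, f, a, b, n=10_000):
    """Left Riemann sum of eta(x) * f(x) over [a, b]."""
    dx = (b - a) / n
    return sum(eta(a + i * dx) * f(a + i * dx) * dx for i in range(n))

eta_zero = lambda x: 0.0
f = lambda x: 2 * x          # clearly not identically zero

print(riemann(eta_zero, f, 0.0, 1.0))   # 0.0
```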

    Assuming I'm right, we must modify the hypothesis so that η is non-zero
    at least once on [a,b]. Now [3] could it be considered a proof by way of
    contradiction to simply take advantage of the limit-of-a-sum formulation
    of the integral & try to prove it using an arbitrary η:

    Using |∑η(xᵢ)f(xᵢ)δxᵢ - 0| < ε we see this reduces to |∑η(xᵢ)f(xᵢ)δxᵢ| < ε.
    As we've assumed η can be arbitrary, if it's non-zero at least once on [a,b]
    then the sum ∑η(xᵢ)f(xᵢ)δxᵢ will take some definite non-zero value, as f is
    assumed to be non-zero on [a,b]. But then there exists an ε ≤ |∑η(xᵢ)f(xᵢ)δxᵢ|,
    contradicting our original assumption that ∫ₐᵇη(x)f(x)dx = 0.

    But [4] this raises another concern: f could be non-zero at every point
    of [a,b] except precisely at the point cᵢ where η takes the non-zero
    value η(cᵢ) we're forced to assume exists; what I mean is:

    ∑η(xᵢ)f(xᵢ)δxᵢ = η(x₁)f(x₁)δx₁ + η(x₂)f(x₂)δx₂ + ... = 0·f(x₁)δx₁ + 0·f(x₂)δx₂ + ... + η(cᵢ)·0δxᵢ + ... = 0

    Here you'd satisfy the hypothesis by having the sum equal to zero but
    the conclusion doesn't follow! f(x) = 0 only at certain parts, very devious!
    The flaw lies in the inclusion of the phrase "arbitrary function" as far as I
    can see, I think it should be "arbitrary non-zero function". Thoughts?
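    A concrete continuous instance of this "jigsaw" worry (the particular η and f here are example choices, not from the book; both are continuous and η(0) = η(1) = 0): their supports are disjoint, so the product is zero everywhere even though f is not identically zero.

```python
import math

# eta is supported on [0, 0.5], f on [0.5, 1]; every term eta(x) * f(x)
# has at least one zero factor, so the Riemann sum is exactly zero.
def integral(g, a, b, n=100_000):
    """Midpoint Riemann sum of g over [a, b]."""
    dx = (b - a) / n
    return sum(g(a + (i + 0.5) * dx) * dx for i in range(n))

eta = lambda x: math.sin(2 * math.pi * x) if x < 0.5 else 0.0
f   = lambda x: x - 0.5 if x > 0.5 else 0.0   # not identically zero

print(integral(lambda x: eta(x) * f(x), 0.0, 1.0))  # 0.0 although f != 0
```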


Comments

  • Registered Users, Registered Users 2 Posts: 2,481 ✭✭✭Fremen


    [1]: I'm not sure why they would do this. Maybe they're trying to make the steps of the proof clear - I'd guess they're integrating by parts, are they?

    If you proved the result for all polynomials (x-x1)(x-x2)...(x-xn), then the theorem would hold by linearity of integration and the Stone-Weierstrass theorem. Maybe they're just showing how to do the integration in the simplest case.

    [2]:

    There's an analogy with vector spaces here (actually, it's more than an analogy, what we're looking at is a vector space modulo some technicalities). One way to prove that a vector V is 0 is to show that V.X = 0 for all X (this is the dot product, which I'm not bothered writing in latex).

    It's not sufficient to pick X=0, take the dot product V.0 = 0 and conclude that V=0. It has to hold for all X at the same time.

    Same thing with functions.
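    A tiny numeric sketch of this analogy (the vector here is an arbitrary example): testing against X = 0 proves nothing, but testing against other X, e.g. X = V itself, exposes a non-zero V.

```python
# v . x = 0 must hold for ALL x to force v = 0; v . 0 = 0 holds for
# every v and so carries no information.
def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

v = (3.0, -1.0, 4.0)
print(dot(v, (0.0, 0.0, 0.0)))  # 0.0, true for any v whatsoever
print(dot(v, v))                # 26.0, so v cannot be the zero vector
```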

    Does this answer your concern about [3] and [4]?


  • Moderators, Science, Health & Environment Moderators Posts: 1,852 Mod ✭✭✭✭Michael Collins


    OK I'll admit I haven't gone through each of your questions in detail but I wonder could this just be a problem with notation here?

    The book says: Let [latex] f(x) [/latex] be a continuous function such that

    [latex] \displaystyle \int_a^b \eta(x) f(x) \hbox{d}x = 0[/latex]

    for any arbitrary function [latex] \eta(x) [/latex].

    i.e. this must be satisfied for ANY function, not just [latex] \eta(x) = 0 [/latex] but the one they give as well: [latex] \eta(x) = (x-x_1)(x-x_2) [/latex]. So it's a very strong condition.


  • Registered Users, Registered Users 2 Posts: 966 ✭✭✭equivariant


    Questions are offered when you see a [☺]:

    The following theorem, the "basic lemma" of the calculus of variations,
    appears on page 1 of this book:

    "If f is a continuous function in [a,b], with a < b, s.t. ∫ₐᵇη(x)f(x)dx = 0 for
    an arbitrary function η continuous in [a,b] subject to η(a) = η(b) = 0 then
    f(x) = 0 in [a,b]"

    If you read the proof you'll see they go ahead & specify the function η as
    (x - x₁)(x₂ - x) & prove the claim using that, but [1] doesn't that technically
    prove the theorem only for this particular function, not for any "arbitrary"
    function? I see how this easily extends to positive functions though &
    obviously negative ones too.

    I have read through their proof and I see no problem. Their strategy is to show that if f is non zero at any point, that will somehow contradict the hypothesis. To arrive at this, they choose a particular function \eta (the one you mention) and show that if f(t) is non zero for some particular t and \eta is chosen correctly (ie the values of x_1 and x_2 are chosen sufficiently close to t) then the hypothesis of the lemma will be contradicted by that particular choice of \eta. But since the hypothesis is assumed to be true for ALL \eta, then our assumption that f(t) is non zero must in fact be false. But t was arbitrary, so we have shown that f is identically zero.
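    A numeric sketch of this strategy (f, t and the window width are illustrative choices, not from the book): if f(t) ≠ 0, put the bump η(x) = (x - x₁)(x₂ - x) on a small window [x₁, x₂] around t and zero elsewhere; the integral of η·f then cannot be zero, contradicting the hypothesis.

```python
# The book's strategy in numbers: a bump eta localised where f != 0
# forces the integral of eta * f away from zero.
def integral(g, a, b, n=100_000):
    """Midpoint Riemann sum of g over [a, b]."""
    dx = (b - a) / n
    return sum(g(a + (i + 0.5) * dx) * dx for i in range(n))

f = lambda x: x * x + 1          # continuous, non-zero at t = 0.5
t, h = 0.5, 0.01                 # window half-width h around t
x1, x2 = t - h, t + h

def eta(x):
    """Continuous, positive inside (x1, x2), zero at and outside it."""
    return (x - x1) * (x2 - x) if x1 <= x <= x2 else 0.0

val = integral(lambda x: eta(x) * f(x), 0.0, 1.0)
print(val > 0)   # True: this single eta contradicts "integral = 0"
```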


  • Registered Users, Registered Users 2 Posts: 3,038 ✭✭✭sponsoredwalk


    i.e. this must be satisfied for ANY function, not just [latex] \eta(x) = 0 [/latex] but the one they give as well: [latex] \eta(x) = (x-x_1)(x-x_2) [/latex]. So it's a very strong condition.

    Yeah that's the point I'm making, the function they've given proves it for
    the function η(x) = (x - x₁)(x₂ - x) > 0. Obviously you can reverse this to
    show it holds for η(x) = (x₁ - x)(x₂ - x) < 0. I take it that these serve as a
    means to show the theorem is supposed to hold for any arbitrary positive &
    negative function. My problem is both with the case of η(x) = 0 on [a,b] &
    further with the case where η(x) = 0 when f(x) ≠ 0 (& vice versa), which is
    clearer by forming a Riemann sum (as I did at the end of the post); it's the
    ANY η function stipulation I can't justify.
    Fremen wrote: »
    [1]: I'm not sure why they would do this. Maybe they're trying to make the steps of the proof clear - I'd guess they're integrating by parts, are they?

    I think they specified this function because of the a < b condition, it is a
    clear way to make the function positive by choosing (x - x₁)(x₂ - x), I
    think you can justify this because the important step of the proof is to
    show that we contradict ∫ₐᵇη(x)f(x)dx = 0 by having a positive function η
    & also a positive f. But I think this might be flawed in the ways I've
    mentioned.
    Fremen wrote: »
    If you proved the result for all polynomials (x-x1)(x-x2)...(x-xn), then the theorem would hold by linearity of integration and the Stone-Weierstrass theorem.

    Well ignoring the η = 0 case & the jigsaw case I illustrated in the picture
    below I see what you mean, I think the 'proof' I gave in [3] would also do
    it unless you have a problem with it.
    Fremen wrote: »
    [2]:

    There's an analogy with vector spaces here (actually, it's more than an analogy, what we're looking at is a vector space modulo some technicalities). One way to prove that a vector V is 0 is to show that V.X = 0 for all X (this is the dot product, which I'm not bothered writing in latex).

    It's not sufficient to pick X=0, take the dot product V.0 = 0 and conclude that V=0. It has to hold for all X at the same time.

    Same thing with functions.

    But here I don't see the flaw with choosing η(x) = 0 when f(x) ≠ 0 &
    f(x) = 0 when η(x) ≠ 0 on [a,b], that way you still satisfy all the
    conditions of the hypothesis but clearly f(x) ≠ 0.
    equivariant wrote: »
    But since the hypothesis is assumed to be true for ALL \eta, then our assumption that f(t) is non zero must in fact be false.

    Surely assuming the hypothesis is true for ALL η means we're including
    those η that have at least one value equal to zero on [a,b], say η(x₃) = 0,
    where a < x₃ < b, then:

    ∑η(xᵢ)f(xᵢ)δxᵢ = η(x₁)·f(x₁)δx₁ + η(x₂)·f(x₂)δx₂ + η(x₃)·f(x₃)δx₃ + ... = η(x₁)·0δx₁ + η(x₂)·0δx₂ + 0·f(x₃)δx₃ + ... = 0

    But here we see that f(x₃) could be anything, i.e. f(x) ≠ 0 on [a,b].
    Now, surely there are many, many, continuous functions η that exist
    on [a,b] that allow for a non-zero continuous function f of the form:

    [attached image: graphs of continuous η and f on [a,b] with disjoint supports, i.e. η(x) = 0 wherever f(x) ≠ 0 and f(x) = 0 wherever η(x) ≠ 0]

    So f(x) = 0 when η(x) ≠ 0 & η(x) = 0 when f(x) ≠ 0.
    Here lim ∑η(xᵢ)f(xᵢ)δxᵢ = ∫ₐᵇη(x)f(x)dx = 0 but f(x) ≠ 0.
    I just don't see the problem with this, we satisfy the hypothesis but f(x) ≠ 0.

    The point is that if I arbitrarily pick η = 0 on all of [a,b] I don't see how
    the proof in the book (or even the theorem) is justified, & I also don't see
    how the theorem holds for functions that behave like those in the picture;
    it can't be true for every function η, only some η. I interpret the inclusion
    of the word "arbitrary" as meaning it doesn't matter what function η we
    pick, the theorem should still hold. Is that incorrect? If not, then what's
    wrong with these 'counterexamples'?
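    A numeric check of where this kind of "counterexample" fails (the particular f, η₁, η₂ below are example choices; all continuous with η(0) = η(1) = 0): a jigsaw η₁ supported only where f vanishes does give integral zero, but the lemma demands zero for every admissible η, and a second η₂ that overlaps f does not give zero.

```python
import math

# f vanishes on [0, 0.5] and is non-zero on (0.5, 1].
def integral(g, a, b, n=100_000):
    """Midpoint Riemann sum of g over [a, b]."""
    dx = (b - a) / n
    return sum(g(a + (i + 0.5) * dx) * dx for i in range(n))

f    = lambda x: max(x - 0.5, 0.0)                              # zero only on [0, 0.5]
eta1 = lambda x: math.sin(2 * math.pi * x) if x < 0.5 else 0.0  # lives where f = 0
eta2 = lambda x: x * (1 - x)                                    # overlaps f

print(integral(lambda x: eta1(x) * f(x), 0.0, 1.0))  # 0.0: f slips past eta1
print(integral(lambda x: eta2(x) * f(x), 0.0, 1.0))  # > 0: f cannot slip past eta2
```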


  • Moderators, Science, Health & Environment Moderators Posts: 1,852 Mod ✭✭✭✭Michael Collins


    The theorem doesn't say: "For any n(x) if the integral is zero, then f(x) = 0 everywhere in that interval".

    It says:

    If we do this integral for every n(x) we can possibly think of (subject to the continuity constraint), and we find the integral comes out to be zero each time, then f(x) must be identically zero in that interval.*

    To prove this it uses contradiction:

    1) So we assume the integral does come out to be zero for every n(x) we can think of, but that the function f(x) is not identically zero. (This is the premise which we'll prove to be false by arriving at a contradiction.)

    2) Now if we are to prove the premise 1 above, we must show that for every function n(x), the integral of some non-identically zero function f(x) when multiplied by n(x), comes out to be zero.

    3) But if we are to disprove this premise, all we need is one counter-example. And they find it: (x-x1)(x-x2).

    *Continuity is very much necessary here. If continuity wasn't the case, then this would be false, since the function

    f(x) = 1 at some point in a<x<b but f(x) = 0 everywhere else in a<x<b

    would integrate to zero for every n(x) (even n(x) = 1), but clearly it's not identically zero in that interval!
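    The discontinuous counter-example above can be checked numerically (a sketch with an illustrative spike location): a function that is 1 at a single point and 0 elsewhere contributes at most one term of size dx to any Riemann sum, which shrinks to 0 as the partition is refined.

```python
# A spike at one point integrates to zero: each Riemann sum contains
# at most one non-zero term, of size dx = (b - a) / n.
def riemann_sum(f, a, b, n):
    """Left Riemann sum of f over [a, b] with n subintervals."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

spike = lambda x: 1.0 if x == 0.5 else 0.0   # discontinuous at 0.5

for n in (10, 100, 1000):
    print(n, riemann_sum(spike, 0.0, 1.0, n))  # at most 1/n, shrinking to 0
```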


  • Registered Users, Registered Users 2 Posts: 3,038 ✭✭✭sponsoredwalk


    Where do the conditions I explained, with reference to the picture, at the
    end of my last post fail to satisfy your criteria?

    I know what's going on but I believe I've found a counter-example that
    satisfies the conditions of the hypothesis (continuity, a < b, η(a) = η(b) = 0)
    but still doesn't imply that f(x) = 0 on [a,b]. Could you point out how my
    apparent counter-example in the picture fails in some way as I can't!?


  • Moderators, Science, Health & Environment Moderators Posts: 1,852 Mod ✭✭✭✭Michael Collins


    Where do the conditions I explained, with reference to the picture, at the
    end of my last post fail to satisfy your criteria?

    I know what's going on but I believe I've found a counter-example that
    satisfies the conditions of the hypothesis (continuity, a < b, η(a) = η(b) = 0)
    but still doesn't imply that f(x) = 0 on [a,b]. Could you point out how my
    apparent counter-example in the picture fails in some way as I can't!?

    Because your counter-example is only one specific n(x). The theorem requires the integral to be zero for every n(x), not just your one, but any you can possibly imagine. They say this by using the phrase "...any arbitrary function n(x)..." but maybe it'd be better stated "...every arbitrary function n(x)...".


  • Registered Users, Registered Users 2 Posts: 2,481 ✭✭✭Fremen


    Using the vector analogy, you're setting

    v = (17,21,3,0) and eta = (0,0,0,5), say.

    eta . v = 0 for this choice of eta.

    However, pick

    eta2 = (1,0,0,0), and

    eta2.v = 17

    so it's not true that

    x.v = 0 for all x, so we can't conclude that v is 0.

    If you want to convert this into an almost-rigorous example, divide the interval [0,1] up into four intervals I1, I2, I3 and I4. Let v(x) take the value 17 on most of the interval I1, 21 on most of I2, 3 on most of I3, and 0 on most of I4. Interpolate smoothly between the values. Similarly with eta and eta2. The integrals will be nearly 0 and nearly 17 respectively. Hope that makes sense to you.
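    A rough numeric version of this interval construction (using step functions rather than smooth interpolation, so the exact values differ from the dot products, but the zero versus non-zero contrast survives):

```python
# Four quarters of [0, 1] play the role of the four vector components.
def integral(g, a, b, n=100_000):
    """Midpoint Riemann sum of g over [a, b]."""
    dx = (b - a) / n
    return sum(g(a + (i + 0.5) * dx) * dx for i in range(n))

def step(vals):
    """Function equal to vals[k] on the k-th quarter of [0, 1]."""
    def g(x):
        k = min(int(4 * x), 3)
        return float(vals[k])
    return g

v    = step([17, 21, 3, 0])
eta  = step([0, 0, 0, 5])    # supported only where v = 0
eta2 = step([1, 0, 0, 0])    # overlaps the "17" piece

print(integral(lambda x: eta(x) * v(x), 0.0, 1.0))   # 0.0
print(integral(lambda x: eta2(x) * v(x), 0.0, 1.0))  # ~17/4: non-zero
```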


  • Registered Users, Registered Users 2 Posts: 3,038 ✭✭✭sponsoredwalk


    Ah, I get it now. I was slightly misinterpreting it, so while I could find
    a function f to make f(x) = 0 when η(x) ≠ 0 & η(x) = 0 when f(x) ≠ 0,
    that same function f will not have ∫ₐᵇη(x)f(x)dx = 0 for a different choice of
    η. It's like I stopped thinking halfway through a sentence because this
    should have been immediately apparent to me! :o


  • Registered Users, Registered Users 2 Posts: 2,481 ✭✭✭Fremen


    The phrase "for any X" has tripped me up in the past too. My natural interpretation would have been the one you used, but that's not the standard interpretation in maths. Maybe it's an Irish thing, I dunno.

