
Forms?

  • 03-03-2011 6:14am
    #1
    Registered Users, Registered Users 2 Posts: 3,038 ✭✭✭


    Hi, I'm having some trouble understanding what's going on when integrating
    over the region M on page 10 of this pdf; it may just be the language.

    ƒ : ℝ² → ℝ defined by (x,y) ↦ z = ƒ(x,y) = y² is the function we're
    integrating over the top half of the unit circle.

    1: I think what he's trying to communicate in this derivation is the
    standard double integral, [latex] \int \ \int_M \ f(x,y) \ dA \ = \ \int \ \int_M \ y^2 \ dy \ dx[/latex].
    Is that correct? You'll notice he jumps straight into his parameterization,
    but would what I've just done here be right?

    2: If so then would the bounds on the integral become:

    [latex] \int \ \int_M \ f(x,y) \ dA \ = \ \int_{-1}^1 \ \int_0^{\sqrt{1 - x^2}} \ y^2 \ dy \ dx [/latex] ?

    3: If that is correct then I think it would explain why the author chose to
    set up a parameterization of the region M. When he goes on to show that
    the unit circle can be parameterized in different ways, it reduces a double
    integral to a single integral & is just easier. Is that why?

    4: I've never seen anyone parameterize double integrals in the way he
    does; could you recommend some reading material that explains what he
    is doing, as I can't seem to find any myself?

    I have more questions, mainly to do with pages 11-14 where, I think, he is
    deriving differential forms (in my meagre estimation) but I'll hold off for
    now, thanks for any assistance! :D


Comments

  • Registered Users, Registered Users 2 Posts: 2,149 ✭✭✭ZorbaTehZ


    The equation of the unit circle centred at the origin is:
    [latex]x^2+y^2=1 \Rightarrow y^2=1-x^2 \Rightarrow y=\pm \sqrt{1-x^2}[/latex]
    That's where he gets it from.
    Btw, this isn't a double integral - the integral is taken over a _path_ in R^3 so it reduces to a single integral. The integral you wrote in your post is over the (half) unit disk.
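
    For example, with the arc-length parameterization γ(t) = (cos t, sin t), 0 ≤ t ≤ π, of the top half of the unit circle (my own worked numbers, so check them against the pdf), the path integral of y² is
    [latex] \int_M \ y^2 \ ds \ = \ \int_0^{\pi} \ \sin^2 t \ dt \ = \ \frac{\pi}{2}[/latex]
    whereas the half-disk integral you wrote gives
    [latex] \int_{-1}^1 \ \int_0^{\sqrt{1 - x^2}} \ y^2 \ dy \ dx \ = \ \frac{1}{3} \int_{-1}^1 \ (1 - x^2)^{3/2} \ dx \ = \ \frac{\pi}{8}[/latex]
    so the two really are different integrals.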



  • Registered Users, Registered Users 2 Posts: 3,038 ✭✭✭sponsoredwalk


    I literally stopped just short of learning about line integrals properly; I mean
    I can compute them in a physics-y sense, i.e. the dot product in Work etc.

    Is all of this just an example of a line integral? Looking at the derivation
    he gives on page 13, he arrives at something that looks to me like a
    differential form, and just browsing this page just now, these line
    integrals look identical to what is derived on page 13. Is that right?

    Thinking about it, I've seen a 1-form defined as A dx + B dy + C dz & read
    explicit reference to Work in physics as an analog, so are differential forms
    like line integrals?


  • Registered Users, Registered Users 2 Posts: 1,005 ✭✭✭Enkidu


    sponsoredwalk wrote: »
    Thinking about it, I've seen a 1-form defined as A dx + B dy + C dz & read
    explicit reference to Work in physics as an analog, so are differential forms
    like line integrals?
    Differential forms are basic geometric objects that come in grades or ranks, called 1-forms, 2-forms, etc.

    A 1-form is something that can be integrated along a line, a 2-form over a surface, etc.

    Basically A dx + B dy + C dz is a 1-form; stick an integral in front and you have a line integral.

    Forms are defined properly in differential geometry; you need notions of tangent spaces and Grassmann algebras to do them right.
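
    Concretely, the standard coordinate recipe: if γ(t) = (x(t), y(t), z(t)) traces the line for a ≤ t ≤ b, with A, B, C evaluated along the curve, then
    [latex] \int_{\gamma} \ A \ dx \ + \ B \ dy \ + \ C \ dz \ = \ \int_a^b \ (A \ x'(t) \ + \ B \ y'(t) \ + \ C \ z'(t)) \ dt[/latex]
    which is exactly the work integral [latex] \int \ \overline{F} \cdot d \overline{r}[/latex] you know from physics, with F = (A, B, C).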


  • Registered Users, Registered Users 2 Posts: 3,038 ✭✭✭sponsoredwalk


    Alright I found a bucket load of material verifying that this is line integral
    stuff, thanks a lot!
    Enkidu wrote: »
    A 1-form is something that can be integrated along a line, a 2-form over a surface, etc.

    As I understand it a manifold is basically just curved space (I read that a
    parabola & a circle are 1 dimensional curved manifolds, a sphere is a 2-D one
    etc.) so is that why forms are useful, because they integrate well along
    parabolas et al.?


  • Registered Users, Registered Users 2 Posts: 1,005 ✭✭✭Enkidu


    sponsoredwalk wrote: »
    Alright I found a bucket load of material verifying that this is line integral
    stuff, thanks a lot!

    As I understand it a manifold is basically just curved space (I read that a
    parabola & a circle are 1 dimensional curved manifolds, a sphere is a 2-D one
    etc.) so is that why forms are useful, because they integrate well along
    parabolas et al.?
    Actually manifolds can be flat. A manifold is any space that is "nice enough" that you can define calculus on it. A sphere is a perfect example. Calculus is initially only defined on [latex]R^{n}[/latex]; you are already familiar with it. However, a sphere is locally "similar enough" to [latex]R^{2}[/latex] that you can transfer calculus from [latex]R^{2}[/latex] to the sphere.

    Anyway in general a manifold is an n-dimensional space similar enough to [latex]R^{n}[/latex] to transfer calculus over to it.

    Basically when you want to start integrating in these spaces you want ways of performing line integrals and surface integrals and higher dimensional versions. To generalise these from [latex]R^{n}[/latex] to the manifolds you need forms, which are geometric objects like vectors.


  • Registered Users, Registered Users 2 Posts: 3,038 ✭✭✭sponsoredwalk


    I think I finally understand the wedge product & think it explains things
    in 2-forms that have been puzzling me for a long time.

    If

    v = v₁e₁ + v₂e₂
    w = w₁e₁ + w₂e₂

    where e₁ = (1,0) & e₂ = (0,1) then

    v ⋀ w = (v₁e₁ + v₂e₂) ⋀ (w₁e₁ + w₂e₂)
    _____ = v₁w₁e₁⋀e₁ + v₁w₂e₁⋀e₂ + v₂w₁e₂⋀e₁ + v₂w₂e₂⋀e₂
    _____ = v₁w₂e₁⋀e₂ + v₂w₁e₂⋀e₁
    v ⋀ w = (v₁w₂ - v₂w₁)e₁⋀e₂

    This is interpreted as the area contained in v & w.

    My question is based on the fact that this is a two dimensional calculation
    that comes out with the exact same result as the cross product of

    v′ = v₁e₁ + v₂e₂ + 0e₃
    w′ = w₁e₁ + w₂e₂ + 0e₃

    Also the general x ⋀ y = (x₁e₁ + x₂e₂ + x₃e₃) ⋀ (y₁e₁ + y₂e₂ + y₃e₃)
    comes out with the exact same result as the cross product.
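
    For what it's worth, a tiny numeric check of that claim (the vectors here are made up):

    import numpy as np

    # The lone e1^e2 coefficient of v ^ w in R^2 equals the e3-component of
    # the cross product of the padded 3-D vectors v' and w'.
    v1, v2 = 2.0, -1.0
    w1, w2 = 0.5, 3.0

    wedge_coeff = v1 * w2 - v2 * w1                 # coefficient of e1^e2
    cross = np.cross([v1, v2, 0.0], [w1, w2, 0.0])  # lies along e3
    print(wedge_coeff, cross)                       # 6.5 and [0. 0. 6.5]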

    In all cases the end result is a vector orthogonal to v & w, or to v′ & w′,
    or to x & y. Is this true for every wedge product calculation in every
    dimension? The wedge product of two vectors in ℝ³ gives the area
    of the parallelogram they enclose & it can be interpreted as a scaled-up
    basis vector orthogonal to the vectors. So (e₁⋀e₂) is a unit vector
    orthogonal to v & w, & (v₁w₂ - v₂w₁) is a scalar that also gives the area
    enclosed in v & w.

    Judging by this you'd take the wedge product of 3 vectors in ℝ⁴ &
    get the volume they enclose, and 4 vectors in ℝ⁵ gives hypervolume or
    whatever. If we ended up with β(e₁⋀e₂⋀e₃) this would be in ℝ⁴, where β is
    the scalar representing the volume & β(e₁⋀e₂⋀e₃) is pointing off into
    the fourth dimension, whatever that looks like. If all of this holds I can
    justify why e₁⋀e₂ = - e₂⋀e₁ both mentally & algebraically by taking dot
    products & finding those orthogonal vectors, so I'd like to hear if this
    makes sense in the grand scheme of things!

    I really despise taking things like e₁⋀e₂ = - e₂⋀e₁ as definitions unless I
    can justify them. I can algebraically justify why e₁⋀e₂ = - e₂⋀e₁ by
    thinking in terms of the cross product, which itself is nothing more than
    clever use of the inner product of two orthogonal vectors. Therefore I
    think that e₁⋀e₂ literally represents the unit vector that is orthogonal to
    the vectors v & w involved in my calculation. So if there are n - 1 vectors
    then e₁⋀e₂⋀...⋀eₙ₋₁ lies in ℝⁿ.

    I read a comment that the wedge product is in an "exterior square" so I
    guess this generalizes to products of all arity (exterior volumes et al) &
    from browsing I've seen that a "bivector" is a way to interpret this, like
    this:

    [image: Wedge_product.JPG — the wedge product of two vectors pictured as an oriented parallelogram (a bivector)]

    it's a 2 dimensional vector. Still, if I were to just think in terms of
    orthogonality as I have explained in this thread, is there any deficiency?
    As far as I can tell this 2-D vector in the picture is just a visual
    representation of the area, & as it is explained via a scaled-up orthogonal
    vector I think there is virtually no difference.

    A lot of the wiki topics on "bivectors" and forms etc. were previously
    unreadable to me & are only now slowly beginning to make sense (I hope!).

    There are just two questions there in bold & the rest is me discussing
    what I know about this, please be brutal! :D


  • Registered Users, Registered Users 2 Posts: 1,005 ✭✭✭Enkidu


    sponsoredwalk wrote: »
    Is this true for every wedge product calculation in every
    dimension?
    No, for two reasons. First of all the wedge product is defined on forms, not on vectors.

    In flat spaces however, forms are identical to vectors, so you can treat vectors like forms and in a certain sense "define" the wedge product on vectors. However when you wedge two vectors you will get a bivector. These bivectors have 1, 3, 6, 10, 15, 21,... independent components in dimensions 2, 3, 4, 5, 6, 7,....

    Now for the second reason. You can see that only for three dimensional space are the number of independent components equal to the dimension of the space itself. So in three dimensions although the wedge takes two vectors and produces a bivector, you can map this bivector back to a vector and hence define an operation that takes in two vectors and produces another. This operation is the cross product.
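
    A quick sketch of that count, if it helps (just the binomial coefficient C(n,2); nothing here is differential geometry proper):

    from math import comb

    # A bivector in n dimensions has one component per unordered pair of
    # basis directions, i.e. C(n, 2) = n(n-1)/2 of them.
    for n in range(2, 8):
        print(n, comb(n, 2))  # -> 1, 3, 6, 10, 15, 21

    # Only n = 3 solves C(n, 2) == n, which is why only in three dimensions
    # can a bivector be re-read as an ordinary vector: the cross product.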

    However if you're not in three dimensions the wedge product just produces a bivector. Even then, the wedge product only makes sense on vectors in flat space; in curved spaces it is only defined on forms, not vectors.


  • Registered Users, Registered Users 2 Posts: 3,038 ✭✭✭sponsoredwalk


    Thanks Enkidu, I apologise for speaking as if these things are vectors
    but if you read the post I'll give below you'll understand why I'm thinking
    in terms of vectors. The similarities are just astounding & I do perceive
    something hidden in forms that is masking (something like) vector calculations.
    If you can see the analogy I'm making, that through vectors the anti-symmetric
    property of the cross product is algebraically explicit, I'm ultimately
    seeking a similar explanation for the anti-symmetric property of the
    wedge product in general that is pretty much verifiable.

    So, I've rewritten my post to make my concerns far more explicit: first a
    look at the cross product; then a look at a wedge product calculation &
    its similarities to the cross product (similarities I think are far more
    explicit if you interpret it in the way I've explained below); & finally 5
    questions (in bold) that are motivated by the wedge product calculation,
    with unbolded text just elaborating on each question in case.

    ----

    The cross product is a strange animal; it really has very little justification as it is
    taught in elementary linear algebra books. It took me a long time to learn that the
    cross product is really no more than the dot product in disguise. It is actually quite
    easy to derive the result that a cross product gives, through clever algebra, as is done
    in the cross product pdfs here & here.
    By doing your own algebra you can justify the anti-symmetric property of the cross product,
    [latex] \overline{u} \times \overline{v} \ = \ - \ \overline{v} \times \overline{u}[/latex]

    So understanding the cross product in this way is quite satisfying to me, as we can
    easily justify why [latex] \overline{u} \times \overline{u} \ = \ 0[/latex] without relying
    on these properties as definitions.
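
    Spelled out in components (standard algebra, which may just repeat what's in the pdfs):
    [latex] \overline{u} \times \overline{v} \ = \ (u_2 v_3 - u_3 v_2, \ u_3 v_1 - u_1 v_3, \ u_1 v_2 - u_2 v_1)[/latex]
    Swapping u & v negates every component, giving the anti-symmetry, and setting v = u kills every component, giving [latex] \overline{u} \times \overline{u} \ = \ 0[/latex].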

    My questions are based on the fact that these properties can be justified in such an
    elementary way. If you've never seen the cross product explained the way it is in
    the .pdf's then I urge you to read them & think seriously about it. I'm sure these are
    justified in more advanced works in other ways but if an explanation can be given
    at this level I see no reason not to take it.

    So let's look at an example & the steps taken that I think have explanations analogous
    to those of the cross product above:

    v = v₁e₁ + v₂e₂

    w = w₁e₁ + w₂e₂

    where e₁ = (1,0) & e₂ = (0,1).

    v ⋀ w = (v₁e₁ + v₂e₂) ⋀ (w₁e₁ + w₂e₂)
    _____ = v₁w₁e₁⋀e₁ + v₁w₂e₁⋀e₂ + v₂w₁e₂⋀e₁ + v₂w₂e₂⋀e₂
    _____ = v₁w₂e₁⋀e₂ + v₂w₁e₂⋀e₁
    v ⋀ w = (v₁w₂ - v₂w₁)e₁⋀e₂

    This is interpreted as the area contained in v & w.
    No doubt you noticed that all of the manipulations with the e terms have
    the exact same form as the cross product. Notice also the fact that this two
    dimensional calculation comes out with the exact same result as the cross product of

    v′ = v₁e₁ + v₂e₂ + 0e₃
    w′ = w₁e₁ + w₂e₂ + 0e₃
    in ℝ³. Also the general

    x ⋀ y = (x₁e₁ + x₂e₂ + x₃e₃) ⋀ (y₁e₁ + y₂e₂ + y₃e₃)

    comes out with the exact same result as the cross product. The important thing is that
    the cross product of the two vectors results in a vector orthogonal to v & w and that the
    result is the same as the wedge product calculation.

    1: Can e₁ ⋀ e₂ be interpreted as e₃ in my above calculation?

    What I mean is: can e₁ ⋀ e₂ be interpreted as a (unit) vector
    orthogonal to the two vectors involved in the calculation, scaled up by some
    factor β, i.e. β(e₁ ⋀ e₂) where β is the scalar representing the
    area of the parallelogram?

    2: Just as we can algebraically validate why [latex] \overline{u} \times \overline{v} \ = \ - \ \overline{v} \times \overline{u}[/latex],
    why doesn't the exact same logic validate
    e₁ ⋀ e₂ = - e₂ ⋀ e₁?


    If we think along these lines I think we can justify why e₁ ⋀ e₁ = 0,
    just as it occurs analogously in the cross product. They seem far too similar for it
    to be coincidence, but I can't find anyone explaining this relationship. Another way
    to say this is that if we think of question 1, where e₁ ⋀ e₂ = e₃
    in the vector sense, then we can show that e₂ ⋀ e₁ = -e₃, i.e.
    e₁ ⋀ e₂ = - e₂ ⋀ e₁. I stress this point because I've
    seen it just defined in some places, but doing that I think we're missing the
    chance to explicitly see the reason why the definition works.

    3: In general, if you are taking the wedge product of (n - 1) vectors in
    n-space will you always end up with a new vector orthogonal to all of
    the others?


    If you are taking the wedge product of (n - 1) vectors, will you end up
    with λ(e₁⋀e₂⋀...⋀eₙ₋₁), where the term (e₁⋀e₂⋀...⋀eₙ₋₁) is orthogonal to all
    the vectors involved in the calculation & the term λ represents the
    area/volume/hypervolume (etc.) contained in the (n - 1) vectors?

    (I think your latest response is saying this isn't the case in general?)

    4: I have seen it explained that we can interpret the wedge product e₁ ⋀ e₂
    as in the picture here, as a kind of two-dimensional vector.
    Still, the result given is no different to that of the 3-D cross product, so is it not
    justifiable to think of e₁ ⋀ e₂ as if it were just an orthogonal vector, in the
    same way you would the cross product, if you think along the lines I have been tracing
    out in this post? When you go on to take the wedge product of (n - 1) vectors in n-space,
    can I not think in the same (higher dimensional) way?


    I think this relates to question 3 in that I am asking: can the interpretation
    as an orthogonal vector not work in general, and is the fact that the
    independent components are as varied as you say the reason why this
    won't work?

    5: Are calculations like dx dx = dy dy = dz dz = 0, dx dy = -dy dx etc.
    just encoding within them rules that logically follow from calculations
    dealing with orthogonality?


    Since:
    1) A dx + B dy + C dz & A dy dz + B dz dx + C dx dy are differential forms,
    2) a 1-form can be thought of as analogous to the concept of work in physics,
    3) work in physics can be formulated as a vector dot product,
    4) the vector (cross) product actually encodes rules like i × i = j × j = k × k = 0, i × j = -j × i,
    which are so similar to dz dz = 0, dx dy = -dy dx etc.,

    it seems far too much of a coincidence to me that things like e₁ ⋀ e₂ = - e₂ ⋀ e₁
    need to be definitions when in the analogous vector formulations there are
    rich explanations that are simply derived from orthogonality calculations (as
    in the pdfs). There must be a general mode of approach to these
    questions in the wedge product/forms methods also using concepts of
    orthogonality, & there must be some way to show things like
    e₁ ⋀ e₂ = - e₂ ⋀ e₁ and higher dimensional generalizations
    just using orthogonality considerations. I suppose you could view this question
    as me asking: if we view e₁ ⋀ e₂ like vectors & dx dy = -dy dx like
    their scalar representations, then we can go from vectors to scalars & see why the
    anti-symmetric property holds. Is there not some analogous form-ey way of seeing
    why these wedge calculations follow the same rules as the scalar representations
    (dx dy = -dy dx)? I'm explicitly talking about how this is defined in Edwards'
    "Advanced Calculus: A Differential Forms Approach", section 1.3, if you don't
    know what I'm talking about.


  • Registered Users, Registered Users 2 Posts: 3,038 ✭✭✭sponsoredwalk


    *Bit of reading involved here, worth it if you have any interest in, or
    knowledge of, differential forms*.

    Had to forget about my concerns regarding differential forms there for a
    while but I did a bit of reading & I am really happy to see that I had the
    right idea regarding 2-forms in a sense, but I just hadn't a clue what I was
    doing with these things or what they meant. I'm still a little unclear on an
    issue or two, so please let me write out what I know about these things &
    judge its accuracy, because some of it is intuition based on what I've
    read. Also I can't find a second source that describes these things this
    way, so hopefully someone will learn something :cool: If anyone finds any
    source with a comparable explanation please let me know :cool:

    A single variable differential 1-form is a map of the form:

    [latex] f(x) \ dx \ : \ [a,b] \rightarrow \ \int_a^b \ f(x) \ dx[/latex]

    When you take the constant 1-form it becomes clearer:

    [latex] dx \ : \ [a,b] \rightarrow \ \int_a^b \ \ dx \ = \ \Delta x \ = \ b \ - \ a[/latex]

    Okay, didn't know that's what a form was :o Beautiful stuff! In my
    favourite kind of notation too!

    This looks an awful lot like the linear algebra idea of a linear functional in
    a vector space (V,F,σ,I):

    [latex] f \ : \ V \ \rightarrow \ F [/latex]

    where you satisfy the linearity property.

    In more than one variable you can have:

    [latex] dx \ : \ [a_1,b_1] \rightarrow \ \int_{(a_1,a_2)}^{(b_1,b_2)} \ \ dx \ = \ b_1 \ - \ a_1[/latex]

    [latex] dy \ : \ [a_2,b_2] \rightarrow \ \int_{(a_1,a_2)}^{(b_1,b_2)} \ \ dy \ = \ b_2 \ - \ a_2[/latex]

    which leads me to think that the following notation makes sense:

    [latex] dx \ + \ dy \ : \ [a_1,b_1]\times [a_2,b_2] \rightarrow \ \int_{(a_1,a_2)}^{(b_1,b_2)} \ \ dx \ + \ dy \ = \ \int_{(a_1,a_2)}^{(b_1,b_2)} \ \ dx \ + \ \int_{(a_1,a_2)}^{(b_1,b_2)} \ dy = \ (b_1 \ - \ a_1) \ + \ (b_2 \ - \ a_2) \ = \ \Delta x \ + \ \Delta y[/latex]

    The pullback stuff just "pulls" an integral in several variables "back" to one
    parametrized variable, as far as I can see.

    If [latex] \overline{a} \ =\ (a_1,a_2)[/latex] & [latex] \overline{b} \ =\ (b_1,b_2)[/latex] then:

    [latex] \lambda_1 dx \ + \ \lambda_2 dy \ : \ [a_1,b_1]\times [a_2,b_2] \rightarrow \ \int_{ \overline{a}}^{ \overline{b}} \ \ \lambda_1 dx \ + \ \lambda_2 dy \ = \ \lambda_1 \int_{(a_1,a_2)}^{(b_1,b_2)} dx \ + \ \lambda_2 \int_{(a_1,a_2)}^{(b_1,b_2)} dy = \ \lambda_1 (b_1 \ - \ a_1) \ + \ \lambda_2 (b_2 \ - \ a_2) \ = \ \lambda_1 \Delta x \ + \ \lambda_2 \ \Delta y[/latex]

    That's the notable stuff for 1-forms; note also that they can be extended to
    n dimensions very explicitly with this notation & things don't have to be
    constant. The vector parallels (notably Work!) are just jumping out
    already!
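
    As a sanity check on that picture, the pullback recipe is easy to run numerically. A toy sketch (my own example, ω = y dx + x dy along the upper unit semicircle, nothing from the book):

    import numpy as np
    from scipy.integrate import quad

    # Integrate the 1-form w = y dx + x dy along g(t) = (cos t, sin t),
    # t in [0, pi]: the pullback swaps dx for x'(t) dt and dy for y'(t) dt.
    def integrand(t):
        x, y = np.cos(t), np.sin(t)
        dx, dy = -np.sin(t), np.cos(t)  # components of g'(t)
        return y * dx + x * dy          # w evaluated on the velocity vector

    value, _ = quad(integrand, 0.0, np.pi)
    print(value)  # ~0, since w = d(xy) and xy = 0 at both endpoints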

    I'd like to quote the book now:

    "Differential 1-forms are mappings from directed line segments to
    the real numbers. Differential 2-forms are mappings from oriented
    triangles to the real numbers
    ".

    So, by this comment what we're doing with a differential 2-form is
    finding the area of a triangle. What do you do when you find areas?
    Use the cross product! How does the cross product work? It works
    by finding the area contained within (n - 1) vectors & expressing it
    via a vector in n-space! :cool: Furthermore, from what I gather the whole
    theory is integration via simplices, p-dimensional triangles. I'm guessing
    Stokes' general theorem is proven via generalized triangulation then :pac:

    So if we have a positively oriented triangle:

    [image: a positively oriented triangle in the plane with vertices ā, b̄, c̄]


    Which we denote by [latex] T \ = \ [ \overline{a},\overline{b},\overline{c}][/latex] (This is all done in ℝ² for now)
    what is the area of the triangle?

    [latex] A \ = \ \frac{1}{2} \cdot b \cdot h \ = \ \frac{1}{2} \cdot ( ( \overline{c} \ - \ \overline{a}) \times ( \overline{b} \ - \ \overline{a}))[/latex]

    Just extend it to 3 dimensions for the calculation & you get the result.

    If you go from [latex] \overline{a}[/latex] to [latex] \overline{b}[/latex] to [latex] \overline{c}[/latex] you have

    [latex] T \ = \ [ \overline{a},\overline{b},\overline{c}][/latex]

    which is defined as a positive orientation & if you go from [latex] \overline{a}[/latex] to [latex] \overline{c}[/latex] to [latex] \overline{b}[/latex] you have

    [latex] T \ = \ [ \overline{a},\overline{c},\overline{b}][/latex]

    which is defined as a negative orientation.

    [latex] dx \ dy \ : \ [ \overline{a},\overline{b},\overline{c}] \ \rightarrow \ 6[/latex]

    [latex] dx \ dy \ : \ [ \overline{a},\overline{c},\overline{b}] \ \rightarrow \ - 6[/latex]


    This is made clearer with the notation:

    [latex] dx \ dy \ : \ T \ \rightarrow \ \int_T \ dx \ dy \ = \ \ \frac{1}{2} \cdot ( ( \overline{c} \ - \ \overline{a}) \times ( \overline{b} \ - \ \overline{a}))[/latex]

    All of this I think I understand.

    The next issue is defining 2-forms in 3 dimensional space. There is
    talk of projections and such, I don't quite understand what's going on
    though.

    The projection of a point (x,y,z) onto the x-y plane is (x,y,0).

    The projection of the triangle

    [latex] T \ = \ [ \overline{a},\overline{b},\overline{c}] \ = \ [(a_1,a_2,a_3),(b_1,b_2,b_3),(c_1,c_2,c_3)][/latex]

    onto the x-y plane is

    [latex] T \ = \ [ \overline{a},\overline{b},\overline{c}] \ = \ [(a_1,a_2,0),(b_1,b_2,0),(c_1,c_2,0)][/latex].

    They say that they will define the differential form [latex]dx \ dy[/latex]
    to be the mapping from the oriented triangle [latex]T[/latex] to the
    signed area of its projection onto the x-y plane,
    which is the z coordinate of [latex] \frac{1}{2} \cdot ( ( \overline{c} \ - \ \overline{a}) \times ( \overline{b} \ - \ \overline{a}))[/latex] :confused:

    That doesn't make much sense, but I read on & see that

    [latex] \frac{1}{2} \cdot ( ( \overline{c} \ - \ \overline{a}) \times ( \overline{b} \ - \ \overline{a})) \ = \ ( \int_T dy \ dz, \int_T dz \ dx, \int_T dx \ dy)[/latex]

    Now, this makes sense in that what is orthogonal to dx dy is something
    in the z coordinate, and what is orthogonal to dy dz is in the x
    coordinate etc... But what does that justification paragraph actually say?
    I get the feeling it's an insight I should know about, I don't understand
    what's going on with the projections. I think that if I did I would have
    predicted the integrals in the coordinates the way it's set up there!

    Let me quote the actual paragraph in its entirety just in case:
    We define the differential 2-form [latex]dx \ dy[/latex] in 3 dimensional
    space to be the mapping from an oriented triangle [latex] T \ = \ [ \overline{a},\overline{b},\overline{c}][/latex]
    to the signed area of its projection onto the x,y plane, which is the z
    coordinate of
    [latex] \frac{1}{2} \cdot ( ( \overline{c} \ - \ \overline{a}) \times ( \overline{b} \ - \ \overline{a}))[/latex].
    Similarly, [latex]dz \ dx [/latex] maps this triangle to the signed area
    of its projection onto the z,x plane, which is the y coordinate of
    [latex] \frac{1}{2} \cdot ( ( \overline{c} \ - \ \overline{a}) \times ( \overline{b} \ - \ \overline{a}))[/latex].
    The 2-form [latex]dy \ dz [/latex] maps this triangle to the signed area
    of its projection onto the y,z plane, the x coordinate of
    [latex] \frac{1}{2} \cdot ( ( \overline{c} \ - \ \overline{a}) \times ( \overline{b} \ - \ \overline{a}))[/latex].
    "Second Year Calculus" - D. Bressoud.
    Also, based on all of that writing I still don't know why dx dy = - dy dx :(

    What I mean by this is that after all that good work he just defines
    dx dy to be the area of that triangle; I think you can see my problem
    lies with the projection issues. I think if I understand what's going on
    with the projections I'll get it.

    I can justify things now in a sense because I know dy dx goes to the
    z dimension with the negative of what happens when dx dy goes to
    the z-axis (from the algebra involved in the cross product derivation
    via the orthogonality of the dot product) but I still feel like something
    is missing.

    ----

    Edit: I know why I don't feel very confident about this, it's
    because of orientation!


    [latex] dx \ dy \ : \ [ \overline{a},\overline{b},\overline{c}] \ \rightarrow \ 6[/latex]

    [latex] dx \ dy * \ : \ [ \overline{a},\overline{c},\overline{b}] \ \rightarrow \ - 6[/latex]

    I mean dx dy = 6 = - (-6) = - dx dy * so I just feel a little iffy about
    throwing out minus signs to justify anti-commutativity issues!
    (Could just be irrelevant due to my misunderstanding things).

    So, the question is just about the general correctness of what I wrote
    & then the issue of projections. I couldn't just post a question about
    projections because I'm not 100% sure my take on the theory that
    leads up to this is accurate (I think it is though!). To be quite
    honest, seeing as I have spent ages trying to find someone who would
    explain the theory in this way, but have been unable to find anyone
    who would, I think very few people view this subject this way,
    & as such I would love to see how different it is for someone who takes
    the axiomatic, anti-commutative definitions that are found in nearly all
    of the books on google. If this is/isn't new please let me know anyway
    (and help if possible :o)! :D


  • Registered Users, Registered Users 2 Posts: 1,005 ✭✭✭Enkidu


    That's a lovely presentation of differential forms sponsoredwalk. It is very different to how I would understand it from a purely axiomatic differential geometry point of view, although I can see the parallels.

    I would understand a differential form as a linear functional on the space of vectors. This then gives it transformation properties under coordinate changes that match those of an integral. So in essence it "is" an integral. I'll play around with your stuff and try to match it to the axiomatic stuff. I'll set a Wednesday deadline for a post on what I find.


  • Registered Users, Registered Users 2 Posts: 3,038 ✭✭✭sponsoredwalk


    There's a serious problem with the notation as it stands & I've been
    trying to figure it out over the week but haven't gotten very far.
    A single variable differential 1-form is a map of the form:

    [latex] f(x) \ dx \ : \ [a,b] \rightarrow \ \int_a^b \ f(x) \ dx[/latex]

    When you take the constant 1-form it becomes clearer:

    [latex] dx \ : \ [a,b] \rightarrow \ \int_a^b \ \ dx \ = \ \Delta x \ = \ b \ - \ a[/latex]

    Now, if we write a function in standard notation:

    [latex] f \ : \ \mathbb{R} \ \rightarrow \ \mathbb{R} [/latex]

    where

    [latex] f \ : \ x \ \mapsto \ f(x) \ = \ y[/latex];

    All concatenated into:

    [latex] f \ : \ \mathbb{R} \ \rightarrow \ \mathbb{R} \ | \ x \ \mapsto \ f(x) \ = \ y[/latex];

    we see it's very different from notation of the form:

    [latex] f(x) \ dx \ : \ [a,b] \rightarrow \ \int_a^b \ f(x) \ dx[/latex].

    Now, the notation for a linear functional is of the form:

    [latex] f \ : \ \mathbb{V} \ \rightarrow \mathbb{R} \ | \ \overline{v} \ \mapsto \ f( \overline{v}) \ = \ v[/latex]

    where [latex] \mathbb{V}[/latex] is a vector space, [latex] \mathbb{R}[/latex] is the reals, [latex] \overline{v} [/latex]
    is a vector & v is a scalar. I'm trying to put it all together, i.e. the
    interval [a,b], functions of the form Pdx + Qdy + Rdz etc.

    This is going to have to be done carefully, I don't want to lose this!
    I have an idea of maybe taking the interval [a,b] = I and mapping it
    to a subset of Rⁿ to create functions of the form Pdx + Qdy + Rdz
    because:
    Definition 5.1.1 A 1-form φ on U ⊆ ℝⁿ (either n = 2 or n = 3) assigns, for every
    p ∈ U ⊆ ℝⁿ, a linear map φ|p : ℝⁿ → ℝ.
    link:
    by this definition I think we can justify the appearance of weird terms
    that need to be integrated. Then, since the integral of a differential form
    is so like a linear functional, we can create a map like φ : ℝⁿ → ℝ to get
    the scalar value for the integral. What I'm trying to hint at is a
    composition of maps going from I → ℝⁿ → ℝ. The following passage
    kind of gave me this idea:

    Here, where [latex] \alpha \ = \ \sum_i \ f_i dx_i[/latex], is one discussion:
    Let U be an open subset of ℝⁿ. A parametrized curve in U is a smooth
    mapping c : I → U from an interval I into U. We want to integrate over I.
    To avoid problems with improper integrals we assume I to be closed and
    bounded, I = [a,b]. (Strictly speaking we have not defined what we mean
    by a smooth map c : [a,b] → U. The easiest definition is that c should be
    the restriction of a smooth map c⁰ : (a - ε, b + ε) → U defined on a
    slightly larger open interval.) Let α be a 1-form on U. The pullback c*α is
    a 1-form on [a,b], and can therefore be written as c*α = g dt
    (where t is the coordinate on ℝ). The integral of α over c is now defined by

    [latex] \int_c \alpha \ = \ \int_{[a,b]} \ c^{*} \alpha \ = \ \int_a^b g(t) \ dt [/latex]
    http://www2.bc.cc.ca.us/resperic/mathb6c/DifferentialForms.pdf
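
    A concrete pullback, to check I follow: take c(t) = (t, t²) on I = [0,1] and α = y dx + x dy. Substituting x = t, y = t² gives c*α = (t² · 1 + t · 2t) dt = 3t² dt, so
    [latex] \int_c \alpha \ = \ \int_0^1 \ 3t^2 \ dt \ = \ 1[/latex]
    and the whole calculation really does happen back on the interval [0,1].
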
    By using this idea I think we can create a more rigorous idea: if
    c : I → U | t ↦ c(t) & if φ : ℝⁿ → ℝ then we can create φ ∘ c : I → ℝ.
    Now I know that's wrong & there's no dx's anywhere, & I think
    the c is supposed to be dot-producted with dx or something, but you
    get the gist of what I'm saying.

    I don't see how

    [latex] f(x) \ dx \ : \ [a,b] \rightarrow \ \int_a^b \ f(x) \ dx[/latex]

    makes any sense though: a map takes an element of one set (an interval, say)
    and sends it to an element of another set, but here it seems to be
    mapping a whole interval to a scalar number. It only makes sense to me
    if you make the interval into a vector v of length (b - a) (an idea which can
    extend this stuff into higher dimensions!) & then view dx as a function
    taking in v & spitting out a scalar value, i.e.

    [latex] dx \ : \ \mathbb{V} \ \rightarrow \mathbb{R} \ | \ \overline{v} \ \mapsto \ dx( \overline{v}) \ = \ b \ - \ a \ = \ \Delta x[/latex]


    which is a scalar. So if you are talking about functions like
    Pdx + Qdy + Rdz & maps like f(x)dx : ... I mean aren't you going to need
    an intermediate set, something like I → ℝⁿ → ℝ? Hopefully that makes
    sense, let me know what you think!

    Btw, this is an aside from the other perplexing question about minus
    signs, anti-commutativity & orientation :P Tough stuff!


  • Registered Users, Registered Users 2 Posts: 3,038 ✭✭✭sponsoredwalk


    This is pretty good:
    3. What do forms do?
    But what does a 1-form do? For example, a function f can be applied to
    a number x to produce another number, f(x). What can a form be applied
    to? The answer is: a k-form, where 0 ≤ k ≤ 3, acts on k-tuples of
    vectors (v₁, . . . , vₖ) and the output it produces is a real number.
    For example, if ω is a 1-form and v ∈ ℝ³ a vector, then ω(v) is defined
    and is just a real number. Similarly, if Φ is a 2-form, then for any two
    vectors v, w, Φ(v,w) is a real number, and so on.

    Furthermore, ω(v + w) = ω(v) + ω(w) and ω(tv) = tω(v), for any scalar
    t. In other words, ω is a linear function on ℝ³.

    It's a little more complicated with 2-forms Φ, since they depend on two
    vectors. Now, if we fix w, then v ↦ Φ(v,w) is a linear function on ℝ³ and
    if we fix v, then w ↦ Φ(v,w) is also a linear function. This can be
    summarized by saying that Φ is bilinear. Moreover, Φ(w,v) = −Φ(v,w),
    i.e., Φ is anti-symmetric. (Note that Φ(v,v) is always zero.)

    But how do we compute ω(v)? For that, we need to know what dx(v),
    dy(v), and dz(v) are. The answer is easy: dx(v) is just the x- (i.e., first)
    component of v, dy(v) is the y-component of v, etc.

    For example if ω = y dx + z dy − π dz and v = (−1, 0, 2), then
    ω(v) = y dx(v) + z dy(v) − π dz(v) = −y − 2π.

    ...

    6. Integration of forms
    A k-form can be integrated over a (piecewise smooth) k-dimensional
    object. For instance, 1-forms are integrated over curves, 2-forms over
    surfaces, and 3-forms over 3-dimensional solids. We will only define the
    integral of a 1-form ω over a curve C. If C is parametrized by
    γ : [a, b] → ℝ³, i.e., C = {γ(t) : a ≤ t ≤ b}, then

    [latex] \int_C \ \omega \ = \ \int_a^b \ \omega( \gamma ' (t))dt [/latex]

    Observe that ω(γ'(t)) is just a scalar function of t and the integral on the
    right-hand side is just the ordinary Riemann integral. It can be shown that
    [latex]\int_C \ \omega [/latex] does not depend on the choice of the
    parametrization of C.
    link
    Also page 116 of this book seems to answer the question about
    projections & areas (I think, have to do it all properly).


    So there's a looming question about

    dx : [a,b] → ∫ dx = b - a = Δx

    or rather

    f(x) dx : [a,b] → ∫ f(x) dx

    and why my first post on this has the author defining differential forms
    to be integrals as just described here while the last few posts have the
    authors defining forms without mentioning integrals :confused: I think [a,b] is
    just the x-axis vector whose length is (b - a) & so:

    ω = y dx + x dy : [a,b] × [c,d] → ℝ | v = (b - a, d - c) ↦ y dx(v) + x dy(v).

    But since
    f(x) dx : [a,b] → ∫ f(x) dx, it could be something like:

    ω = y dx + x dy : [a,b] × [c,d] → ℝ | v = (b - a, d - c) ↦ y dx(v) + x dy(v).

    I don't know.

    /tired...


  • Registered Users, Registered Users 2 Posts: 1,005 ✭✭✭Enkidu


    Hey sponsoredwalk,

    I haven't forgotten about this. However there is a lot of stuff here, so I'll take it slowly.

    Let's just take the initial statement, that forms are maps from line segments to real numbers.
    This is perfectly true; in fact in a general space a one-form maps any line (that is, any one dimensional object)
    to the real numbers. They are, as you guessed, linear functionals on the space of lines. Similarly,
    two-forms are dual to directed areas, and three-forms are dual to directed volumes.

    Focusing on a two-form, we can see why triangles are involved. Two-forms are dual to all directed areas, not just
    triangles. However, to specify the direction of an area you need three points, and the most basic area based around three points
    is a triangle, so in a sense triangles are the most basic shapes two-forms act on.

    As you might guess, three-forms are dual to tetrahedra, the most basic three dimensional directed objects.

    Now the wedge product anti-commutes, [latex]dx \wedge dy = - dy \wedge dx[/latex], because every property of
    directed areas has to be reflected in some property of the forms, in order for them to match as duals. So a
    directed area, taking its three points, can have the orientation (abc) or (cba), clockwise and anti-clockwise
    shall we say. Hence a form has to have two opposite orientations, [latex]dx \wedge dy[/latex] and [latex]dy \wedge dx[/latex].
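
    There is also a purely algebraic route to the sign, if it helps: grant only that α ∧ α = 0 for any 1-form α (a degenerate area is zero), and expand
    [latex]0 \ = \ (dx + dy) \wedge (dx + dy) \ = \ dx \wedge dx \ + \ dx \wedge dy \ + \ dy \wedge dx \ + \ dy \wedge dy \ = \ dx \wedge dy \ + \ dy \wedge dx[/latex]
    so [latex]dx \wedge dy = - dy \wedge dx[/latex] follows rather than having to be decreed.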

    Every form has what is called its exterior derivative; this is a form one step larger. So the exterior derivative
    of a one-form will be a two-form, etc. The exterior derivative of a form is basically the gradient of the form with
    the algebra of the wedge product encoded. (If you didn't encode the wedge product you wouldn't get a form.)

    However, every procedure on forms has its equivalent in directed lines, surfaces, volumes, etc. (that is, directed manifolds). The dual of the
    exterior derivative is finding the boundary of a manifold.

    So let [latex]\omega[/latex] be a form, [latex]d\omega[/latex] be its exterior derivative, [latex]S[/latex] be a manifold and
    [latex]\partial S[/latex] be its boundary, then the general version of Stokes' theorem is:
    [latex]\int_{S}d\omega = \int_{\partial S}\omega[/latex].

    So a form integrated on a boundary of a manifold is the same as its exterior derivative integrated on the manifold.

    Also, just like the boundary of the boundary of a manifold is zero, the exterior derivative of the exterior derivative of a form is zero.
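
    For a 0-form (an ordinary function) f in the plane, that last fact is just equality of mixed partials:
    [latex]d(df) \ = \ d \left( \frac{\partial f}{\partial x} dx \ + \ \frac{\partial f}{\partial y} dy \right) \ = \ \left( \frac{\partial^2 f}{\partial x \partial y} \ - \ \frac{\partial^2 f}{\partial y \partial x} \right) dx \wedge dy \ = \ 0[/latex]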

    So hopefully this makes the case that n-forms are linear functionals on, or dual to, directed n-dimensional manifolds.


  • Registered Users, Registered Users 2 Posts: 3,038 ✭✭✭sponsoredwalk


    Your point about three points to orient a triangle was great, thanks for
    that. I see that this is why simplexes are so useful now.

    Well, I've got to be honest: I haven't been so interested in 2-forms at all
    since this thread because I've discovered a far deeper problem - notation.
    I just can't understand the notation of a mapping to a mapping of linear
    functionals (honestly not even sure that's correct!).

    Basically I've got about 3 definitions that, to me, conflict or just don't say
    exactly the same thing (despite apparently being definitions of the same
    thing). All I'm trying to do is write differential forms in the following
    notation: dx : [a,b] → ∫ dx = b - a = Δx, but in the correct way that
    explains why all of the other forms of notation in this thread are either
    right or wrong. I think that if we can crack this part we'll be able to build
    up a proper foundation to account for the perspective of forms as given in
    my original post & explain areas algebraically, integration of forms
    adequately etc. Let me give you the definitions:
    Definition: Let Ω be an open set of ℝⁿ.

    (i) A vector field in Ω is a map F : Ω → ℝⁿ.

    (ii) A differential form ω in Ω is a map ω : Ω → L(ℝⁿ,ℝ) that associates to every x ∈ Ω a linear map ω(x) : ℝⁿ → ℝ.

    Hence, in coordinates a differential form can be written as ω(x) = ∑ᵢⁿ ωᵢ(x) dxᵢ, ωᵢ(x) := < ω(x), eᵢ >, x ∈ Ω,

    and a vector field as F(x) = (F(x)¹, F(x)², . . . , F(x)ⁿ).

    If ω is a differential form on Ω, and F is the (unique) vector field on Ω such that
    < ω(x), h > = F(x) • h (∀h ∈ ℝⁿ) (∀x ∈ Ω),
    we say that F is the vector field associated to ω or that ω is the differential form associated
    to F. We say that a differential form is of class Cˣ if its components in a basis are of class
    Cˣ. Notice that a differential form is of class Cˣ if and only if its associated vector field is
    of class Cˣ.
    link
    This is the kind of definition I'm looking for, but there are qualifications
    & issues with this. For example this:
    A differential p-form on a set S ⊂ ℝⁿ is a function

    [latex] \omega : S \to \ A^p [/latex]

    i.e.

    [latex] \overline{ \omega}( \overline{x}) \ = \ \Sigma _{( i_{1} < ... < i_{p})} \omega_{( i_{1}, ... ,i_{p})}( \overline{x})dx_{i_{1}}\wedge ... \wedge dx_{i_{p}} [/latex]

    where

    [latex] \omega_{( i_{1}, ... i_{p})} \ : \ \mathbb{R}^n \ \to \ \mathbb{R}[/latex]
    Check page 76 for clarification!
    includes more information than the first definition I gave: it includes the
    order of the form. So here already you see an issue, two definitions in
    two different sources giving apparently conflicting, or inadequate,
    definitions in some respect. This one, in particular, says we're mapping
    to A^p, I mean wtf... This isn't very clear, but the first definition & the
    .pdf I linked to below clearly state that this is mapped to a map going
    from some crazy cartesian product to the real line.

    So before going on any further I think the goal is to get a mix of these
    two definitions, including every piece of information in as compact a form
    as is possible. The following three sections are basically the
    issues I have with all of this that you might be able to help me with:

    To try & remedy the first definition we can take:
    Definition: Let Ω be an open set of ℝⁿ.

    A differential form ω in Ω is a map ω : Ω → L(ℝⁿ,ℝ) that associates to every x ∈ Ω a linear map ω(x) : ℝⁿ → ℝ.

    where:

    ω(x) = ∑ᵢⁿ ωᵢ(x) dxᵢ,

    ωᵢ(x) := < ω(x), eᵢ >, x ∈ Ω,
    We can write:

    ω : Ω → L(ℝⁿ,ℝ) | x ↦ ω(x) = ∑ᵢⁿ ωᵢ(x) dxᵢ

    &

    ω : Ω → L(ℝ[latex]^{pn}[/latex],ℝ) | x ↦ [latex] \overline{ \omega}( \overline{x}) \ = \ \Sigma _{( i_{1} < ... < i_{p})} \omega_{( i_{1}, ... ,i_{p})}( \overline{x})dx_{i_{1}}\wedge ... \wedge dx_{i_{p}} [/latex].

    By including the ω : Ω → L(ℝ[latex]^{pn}[/latex],ℝ) & the crazy wedge
    summation term you see from the second definition I think we've got
    something pretty good.


    This section is the mechanics of using the above definition on a 1-form;
    unfortunately I run into an apparent contradiction, as you'll see below.

    If you're given ω = adx + bdy + cdz you can translate it into the above
    notation as:

    ω : Ω → L(ℝ³,ℝ) | x ↦ ω(x) = ∑ᵢ³ ωᵢ(x) dxᵢ = adx + bdy + cdz.
    (Let adx₁ + bdx₂ + cdx₃ = adx + bdy + cdz for clarity).

    So that means the:
    ωᵢ(x) := < ω(x), eᵢ >, x ∈ Ω,
    part implies:

    ω₁(x) := < ω(x), e₁> = a

    ω₂(x) := < ω(x), e₂> = b

    ω₃(x) := < ω(x), e₃> = c.

    Then x ↦ ω(x) = ∑ᵢ³ ωᵢ(x) dxᵢ = ω₁(x)dx₁ + ω₂(x)dx₂ + ω₃(x)dx₃ = < ω(x), e₁> dx₁ + < ω(x), e₂> dx₂ + < ω(x), e₃> dx₃ = adx₁ + bdx₂ + cdx₃

    but that implies, by examining < ω(x), eᵢ >, that:

    ω(x) = a e₁ + b e₂ + c e₃. It has to be in order to get the inner product
    to come out the way it does. But this implies:

    ω(x) = a e₁ + b e₂ + c e₃ = adx₁ + bdx₂ + cdx₃.

    Yet this is crazy, how can a vector equal a scalar?

    I have a feeling you'll talk about the dual space, but I just won't find it
    convincing if you just state that it's not really equality because they
    are in different spaces; the calculation doesn't support that conclusion.
    I'm clearly getting the result that a vector equals a scalar :( You know
    what I mean, I can't justify this step & if I try to rationalize it in some
    weird way then I am 100% sure there's a better explanation.

    Well look, if you read the differential forms section of this pdf you'll see
    we're almost there at least, albeit if it's still a bit different to the others.

    From all this the question arises, the dx & dy displacements you usually
    find in differential forms take place in Ω so if I move from 1 to 2 in the
    x-direction, 3 to 4 in the y direction & 5 to 6 in the z direction I will be
    mapping (2 - 1) as dx, (4 - 3) as dy & (6 - 5) as dz right? They will be
    what's subbed in as the components of x, i.e:
    ω(x) = adx₁ + bdx₂ + cdx₃ = a(2 - 1) + b(4 - 3) + c(6 - 5).

    I think that part is right, so the only real issue here is whether the
    calculation is correct in most places, & the issue about the vector-scalar...

    Note that anything higher than a 1-form has not even been touched from
    this perspective yet :eek::P

    As you can see the issues here arise from the idea of mapping to a
    map, mapping to a linear functional, mapping to a linear functional in
    the dual space.... A linear functional is just a linear map f : V → F.
    The dual space of V is the vector space L(V,F) = (V)*, i.e. the space
    of linear maps from V to F. But T : V → L(V,F), i.e. T : V → (V)*, isn't
    so clear. In fact, here's a problem:

    If (ℝⁿ)* is the dual space to ℝⁿ, with x ∈ ℝⁿ,
    define φx ∈ (ℝⁿ)* by φx(y) = <x,y>,
    define T : ℝⁿ → (ℝⁿ)* by T(x) = φx.

    To put it into my notation of T : ℝⁿ → (ℝⁿ)* | x ↦ T(x) = φx,
    I mean this is clearly what I was talking about before, a composition
    of mappings. Now that T has stopped at φx we have a new mapping
    φx : ℝⁿ → ℝ | y ↦ φx(y) = <x,y>, so to spell it all out:

    T : ℝⁿ → (ℝⁿ)* | x ↦ T(x) = φx : ℝⁿ → ℝ | y ↦ φx(y) = <x,y>.

    I'm going to say that this is very weird. For example, does this:

    T : ℝⁿ → (ℝⁿ)* | (λx + y) ↦ T(λx + y) = λφx + φy

    then

    λφx : ℝⁿ → ℝ | z ↦ λφx(z) = λ<x,z>
    φy : ℝⁿ → ℝ | z ↦ φy(z) = <y,z>

    make sense? Clearly here you've got this big chain of a map to the value
    of the map, which is itself a map to the real numbers, & I point this out
    because I see no reason to think it's wrong, even though the notation
    used above, the x ↦ ω(x) = ∑ᵢ³ ωᵢ(x) dxᵢ stuff, doesn't do it this way.

    So I really never had the right foundations to start asking about 2-forms
    or integrating forms, but despite the absolute mess that's taken place
    above, with reference to deficient books, deficient pdfs etc., I still
    think we can crack this :cool:


  • Registered Users, Registered Users 2 Posts: 3,038 ✭✭✭sponsoredwalk


    Success! :D Joie de vivre, seize the day!

    A linear functional on the vector space ℝⁿ is a linear map f : ℝⁿ → ℝ.
    The dual space of ℝⁿ is the vector space (ℝⁿ)* = L(ℝⁿ,ℝ),
    i.e. the space of linear maps from ℝⁿ to ℝ.

    L(ℝⁿ, ℝ) = (ℝⁿ)* = { f : ℝⁿ → ℝ | [f(x + y) = f(x) + f(y)] ⋀ [ f(λx) = λf(x) ]}.

    We define the linear transformation ω from the vector space ℝⁿ to its
    dual space (ℝⁿ)* by:

    ω : ℝⁿ → (ℝⁿ)* | x ↦ ω(x)

    Now ω(x) is itself a linear functional in the dual space which is defined by:

    ω(x) : ℝⁿ → ℝ | y ↦ ω(x)(y) = < ω(x), y>.

    This is basically the same thing as T : ℝⁿ → (ℝⁿ)* | x ↦ T(x) = φx,
    which then implies φx : ℝⁿ → ℝ | y ↦ φx(y) = <x,y>.

    So some standard expression for a differential form ω = Pdx + Qdy + Rdz
    could be translated into:

    ω : ℝ³ → (ℝ³)* | (x,y,z) ↦ ω(x,y,z) = (P(x,y,z),Q(x,y,z),R(x,y,z))

    where:

    ω(x,y,z) : ℝ³ → ℝ | (dx,dy,dz) ↦ ω(x,y,z)(dx,dy,dz) = <(P(x,y,z),Q(x,y,z),R(x,y,z)),(dx,dy,dz)> = P(x,y,z)dx+ Q(x,y,z)dy + R(x,y,z)dz,

    or so it would seem from page 609 of this link & pages 3 to 5 of this pdf.
    If you read page 609 of that link you'll see he admits there is some
    (at times convenient) laziness in notation.

    There is an example given from one of the .pdfs above:
    For example if ω = y dx + z dy − π dz and v = (−1, 0, 2), then
    ω(v) = y dx(v) + z dy(v) − π dz(v) = −y − 2π.
    So
    ω : ℝ³ → (ℝ³)* | (x,y,z) ↦ ω(x,y,z) = (y, z, −π)

    where ω(x,y,z) : ℝ³ → ℝ | (−1, 0, 2) ↦ ω(x,y,z)(−1, 0, 2) = <(y, z, −π), (−1, 0, 2)> = −y − 2π :D
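
    This "map to a map" picture is easy to mock up in code too; a toy sketch (all names made up):

    import numpy as np

    # omega takes a point p = (x, y, z) and returns the linear functional
    # omega(p) in (R^3)*, here for the 1-form w = y dx + z dy - pi dz.
    def omega(p):
        x, y, z = p
        coeffs = np.array([y, z, -np.pi])           # (P, Q, R) evaluated at p
        return lambda v: float(np.dot(coeffs, v))   # the functional acting on v

    functional = omega((1.0, 2.0, 3.0))  # a point with y = 2, z = 3
    print(functional((-1.0, 0.0, 2.0)))  # -y - 2*pi = -2 - 2*pi ~ -8.283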

    Thinking about the notation Bressoud was using above with the integrals
    makes sense if you remember that you can characterize the inner product
    of two functions via an integral.

