Talk:Sobolev space
This is the talk page for discussing improvements to the Sobolev space article. This is not a forum for general discussion of the article's subject.
This article is rated B-class on Wikipedia's content assessment scale.
L^p versus L_p
Mat, while I agree that normally the notation would be L^p, the notation L_p is also acceptable. In the case of the article about Sobolev spaces, we must adopt the less standard notation for consistency: using L^p and W_p^k in the same article is confusing. I don't have time to revert your change right now (and I want to make more massive changes to this article anyway), but I will at some point, unless you convince me otherwise.
Placement of p
The notation I use is not generally W_p^k but usually W^{k,p}. I wrote a large chunk with the W_p^k notation because the initial stub used that notation, but I must say I prefer to have both p and k in superscript. Loisel 06:08, 19 Aug 2004 (UTC)
If nobody minds, we can switch to this notation. It will also save us many latex formulas, which don't really look very good due to font size problems. Gadykozma 06:25, 19 Aug 2004 (UTC)
Folks, I am strongly in favor of the W^{k,p} notation, since it avoids mixing things up with the spaces with zero traces, sometimes written W^{k,p}_0. -- 84.177.140.127 00:43, 1 November 2005 (UTC)
Unit circle example
Was it really necessary to remove the unit circle example?
Charles Matthews 20:00, 18 Aug 2004 (UTC)
- Well, em, no, of course, but since the whole article is about the line, I thought it would be more clear if the examples were on the line too. So I changed sum to integral, basically. Or did I miss some subtlety? Gadykozma 22:04, 18 Aug 2004 (UTC)
I just think it's harder to think about Fourier transforms and integrals. I'm not a 'professional' when it comes to analysis - my view might be shared by others.
Charles Matthews 08:34, 19 Aug 2004 (UTC)
- I am actually a "circle person" myself (out of perhaps 8 papers I have in analysis, only one is on the line). Maybe it's worth adding a paragraph there about Sobolev spaces on the circle? What do you think? Should such a paragraph appear before or after the discussion of R? Gadykozma 13:38, 19 Aug 2004 (UTC)
I cannot recommend writing a definition by example. Actually, what do we gain? Integer order Sobolev spaces can be defined without reference to Fourier transform or series at all. --- 84.177.140.127 00:43, 1 November 2005 (UTC)
Proofreading
I'm checking the article. I was reading the "Examples" section, and it was defining but using Fourier series, which isn't quite right. I hadn't noticed that the previous example involved periodic functions, so I changed it to Fourier transforms. I was going to change it back, but I think this way it gives two different examples (periodic functions and functions of R) so it's better. Loisel 22:53, 6 Sep 2004 (UTC)
- 1) After Charles Matthews remark I did a little "survey" and realized that lots of non analysts prefer Fourier series to Fourier transform. So I started the technical text "We start by introducing Sobolev spaces in the simplest settings, the one-dimensional case on the unit circle." Give it a second thought.
- 2) I also noticed that you added another piece of text at the bottom. You quote two theorems: the second is extremely interesting, and I would love to have some intuition. The first seems formal: why is it interesting/important? Is it a prerequisite of the second? I assume "half integer" includes the integers, right? Finally, wouldn't it be better as a subsection of the "Extension operator" section than having the traces section in the middle?
- 3) Oh, and what is the "interpolation inequality" that "still holds" in the "extension operators" section?
- 4) One last question, while I'm in the mood: do complex interpolation and differentiation of fractional order really give the same spaces even for p different from 2? Gadykozma 03:33, 7 Sep 2004 (UTC)
I numbered your paragraphs for easy reference.
1) Okay, you can change it to Fourier series.
2) The first theorem isn't necessarily obvious. Once you have that the trace map is continuous, and the definition of H^s_0 as the closure of C^∞_c in H^s, you write an arbitrary element of H^s_0 as the limit of compactly supported functions. Continuity lets you switch limits, getting that the trace is zero. However the converse isn't necessarily easy to show, at least when s isn't an integer. You have to show that if Pu=0, then u is in H^s_0; in other words, it is the limit in H^s of functions in C_c. I'm in the middle of moving to Geneva and all my books are away and I don't remember how to prove it, but Lions & Magenes has that theorem, as well as the second one. Regarding the second theorem, I have no intuition; I never looked carefully enough at the proof (which is also in Lions & Magenes); I guess the fact that e is discontinuous when s is a half-integer is the curious bit. By half integer, I mean n+0.5, n an integer.
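In symbols, the first theorem is roughly the characterization (for suitable s and a reasonably nice bounded domain Ω, writing P for the trace map as above)
:<math>H^s_0(\Omega)\;=\;\overline{C^\infty_c(\Omega)}^{\,H^s(\Omega)}\;=\;\{u\in H^s(\Omega)\,:\,Pu=0\}.</math>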
3) I thought I had written it, but it's
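roughly the following (a sketch from memory, writing [X,Y]_s and [A,B]_s for the complex interpolation spaces):
:<math>\|Lu\|_{[A,B]_s}\;\le\;C\,\|L\|_{X\to A}^{1-s}\,\|L\|_{Y\to B}^{s}\,\|u\|_{[X,Y]_s},\qquad 0<s<1.</math>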
Here, L is a linear operator continuous from X+Y to A+B where {X,Y} and {A,B} are interpolation pairs, such that L:X→A and L:Y→B are continuous. C is independent of L and s. This inequality is crucial for proving, for instance, that the trace map is continuous on H^s, starting from the fact that it is continuous on the H^m spaces. It's also often a tight estimate of the H^s norm, which is otherwise hard to compute. I've tried computing the H^s([0,1]) norm of a function. First I found an extension operator, then I calculated the extension of my function, then I tried to compute its Fourier transform and lastly the H^s norm. It didn't work. But the interpolation inequality turned out to be good enough for me.
4) I think so, that is why complex interpolation is used to give the W^{s,p} spaces. To make sure, I'd check in Adams & Fournier, but it's in a box somewhere. There's also real interpolation, that's used for obtaining the trace spaces. The trace spaces are contained in W^{s-1/p,p} or something, but they are not all of W^{s-1/p,p}. To obtain the exact trace spaces, you need real interpolation. In the special case of H^s spaces, it just happens that the trace spaces are exactly H^{s-1/2}.
Sorry for switching the s and p again.
Loisel 10:04, 7 Sep 2004 (UTC)
- Ah... September. I just moved to Princeton myself and all my books are in boxes ;-) Would you like for me to do those various corrections and clarifications that we discussed now, or would you prefer to do them yourself? Gadykozma 13:57, 7 Sep 2004 (UTC)
- Oh, and two other things: why is there a constant in the interpolation inequality? At least in Riesz-Thorin this constant is 1. And is it possible to do complex interpolation over p, like in Riesz-Thorin, or only over k? Gadykozma 04:12, 8 Sep 2004 (UTC)
I'll let you do the changes. If I do it, I'll wait to get my books back first. I believe that the complex interpolation inequality has a constant, unlike the Riesz-Thorin theorem, but I'd double-check. Loisel 18:55, 10 Sep 2004 (UTC)
- OK, I'm done, your ball. Gadykozma 23:48, 16 Sep 2004 (UTC)
Fractional Calculus
Hi Sobolev guys. Could you take an interest in fractional calculus - see the talk page?
Charles Matthews 21:42, 24 Sep 2004 (UTC)
I just did, it looks right. I think the most general definition is the one involving spectral calculus (it applies to subdomains of R^n; the Fourier transform doesn't work then.) It would be good if it were expanded a bit with some examples (but I'm not that good at spectral calculus.) I'll think about it.
Loisel 17:04, 5 Oct 2004 (UTC)
Actually, there are a few typos, I'll fix them later.
Loisel 17:22, 5 Oct 2004 (UTC)
Traces then extensions
I've reorganized for increased logical structure. Traces are required to state some of the theorems of extension by zero. In a previous version, it was extension operators, then traces, then extension by zero. This made sense because extension operators can be used to define H^s (although we use complex interpolation first in this article.) The definition using extension operators is more tangible (at least to me) so it makes sense to do it first. Traces must precede extension by zero because some of the theorems about extension by zero require traces to be understood. (In particular, we need to have H^s_0, and one crucial theorem about H^s_0 is that it is the kernel of the trace operator.)
Someone moved extension by zero under extension operators, which makes sense (although I think extension by zero is a more advanced subject) but if we want to keep that organization, traces have to precede extension operators, which is how I made it now. The disadvantage is that the reader must wait some more before reading about the "more natural" definition of H^s involving the extension operators.
Loisel 17:22, 5 Oct 2004 (UTC)
- I was the one who put the "extension by zero" into the "extension" part. I missed the point that it depends on the trace part. If you want, you can return it to the original order (extension - trace - extension by zero). Actually, I probably prefer that order over the current one.
- Thanks. Gadykozma 01:02, 6 Oct 2004 (UTC)
Text removed from introduction
This has been in the article a long time, so it occurred to me after I deleted it that I should copy it here. The problem is it is completely ahistorical. Sobolev spaces were invented to solve PDEs, and that is still their major application today. Of course things like stability and error estimates were and are important, but that is not the same thing as 'the butterfly effect'. Brian Tvedt 02:24, 11 January 2006 (UTC)
Many physical problems, such as weather prediction or microwave oven design, are modelled by partial differential equations. In such problems, there are some data (such as today's weather, or the shape and water distribution of the food in the microwave oven) and there is a prediction (such as tomorrow's weather, or the time required to cook the food in the microwave.) In some cases, it is difficult to do an accurate simulation. The butterfly effect makes it so that long term weather predictions are extremely difficult to make. Scientists need to be able to estimate the accuracy of their simulations. This can be turned into a mathematical question of sorts:
- If the initial data and/or the model are slightly wrong, how wrong can my prediction be?
By turning to this question, mathematicians eventually gave precise descriptions of "slightly wrong data" and "wrong prediction". In so doing, it became apparent that the natural space of functions was inadequate. As mathematicians found what the meaning of "slightly wrong data" and "wrong prediction" ought to be, it became obvious that sometimes the "predictions" would not be differentiable. This required a careful investigation of the meaning of a differential equation, when the solution is not even differentiable.
The Sobolev spaces are the modern replacement for the space of solutions of partial differential equations. In these spaces, we can estimate the size of the butterfly effect or, if it cannot be estimated, we can often prove that the butterfly effect is too strong to be controlled.
It would be great to see some discussion of why S. spaces are important restored to this article. Bungo77 12:59, 12 March 2007 (UTC)
- Indeed, please see the discussion at Wikipedia talk:WikiProject Mathematics/Archive 23#Sobolev space. Jmath666 15:28, 22 March 2007 (UTC)
Relationship to Hilbert spaces
Hilbert space is mentioned in the "See also" part of the article. It is hinted at in the examples. It would be good to come out and state explicitly that W^{k,2} is a Hilbert space (or whatever the correct relationship is). Bungo77 13:04, 12 March 2007 (UTC)
- The space H^k = W^{k,2} is a Hilbert space with inner product
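— in multi-index notation, something like
:<math>\langle u,v\rangle_{W^{k,2}(\Omega)}=\sum_{|\alpha|\le k}\int_\Omega D^\alpha u\,\overline{D^\alpha v}\,dx.</math>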
- (Igny 14:29, 12 March 2007 (UTC))
- I see, thanks. Perhaps the above could be added to the article. Bungo77 11:13, 15 March 2007 (UTC)
- I'd have thought that that's pretty important so I added it. -- Jitse Niesen (talk) 04:59, 18 March 2007 (UTC)
noninteger k
In the article we started with the definition of W^{k,p} for natural k. We never defined W^{-k,p} (for k>0) and never mentioned the dual relationship between W^{-k,p'} and W^{k,p}_0. (Igny 13:12, 19 March 2007 (UTC))
Rewrite proposed
Please see the discussion at Wikipedia talk:WikiProject Mathematics/Archive 23#Sobolev space. Shouldn't the discussion be moved here? Jmath666 01:32, 22 March 2007 (UTC)
Sobolev spaces on Riemannian manifolds
While Sobolev spaces on Riemannian manifolds are not defined, they are used in the embedding theorems section. Temur (talk) 07:51, 9 January 2008 (UTC)
- Good point. There are a few things to say about Sobolev spaces on Riemannian manifolds. If I have the time in the near future, I can start a new section. Silly rabbit (talk) 15:31, 10 February 2008 (UTC)
Besov spaces
It should be mentioned that there is another candidate for fractional Sobolev spaces, namely the special Besov spaces B^s_{p,p}. The ones given in the article are sometimes called the Bessel potential spaces, and I have more often seen the above-mentioned special Besov spaces called Sobolev spaces (sometimes Besov-Sobolev spaces) for noninteger s. These are motivated by the intrinsic Slobodeckij norm (or real interpolation) as opposed to the Fourier transform (or complex interpolation). The spaces B^s_{p,p} and the ones defined in the article coincide when p=2. I will wait for your responses and try to add something along these lines. Temur (talk) 11:02, 10 February 2008 (UTC)
- I am familiar with the interpolative Sobolev spaces. I have never heard them called Besov spaces, but I am not an expert. For non-integer s, the article currently defines the Sobolev space either by the Fourier transform (which only works on special domains), or by employing a Hölder-type integral (which works in general, I believe). I agree that what you call Besov spaces are definitely deserving of treatment, and it's good to know that they are sometimes called "Besov spaces" so that we have something to distinguish them from the garden-variety Sobolev spaces. I think it's an excellent idea. Let me know if you need any assistance. Silly rabbit (talk) 15:22, 10 February 2008 (UTC)
- The older edition of Adams's Sobolev Spaces book talks about them (and calls them such), but I haven't seen the 21st century edition.
- The current fractional Sobolev spaces section seems to indicate only its relevance to the circle? It works generically whenever the Fourier transform works, so it might be moved around a little. Giving a general definition or two as well would be great, but I think the Fourier formulation is very clean (especially as presented in Folland's PDE text).
- I know the Sobolev embedding theorem relates the fractional Sobolev spaces (via Fourier) to Hoelder continuous spaces, and I think it would be very nice to see this directly with the Sobolev spaces defined by a Hoelder type integral (assuming you mean an integral similar to Hoelder continuity). JackSchmidt (talk) 16:55, 10 February 2008 (UTC)
(unindent) Two responses:
1. Actually, by Holder-type integral, I meant this whopper:
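presumably something like the Gagliardo double integral; for 0 < s < 1 on a domain Ω ⊂ R^n, a sketch is
:<math>\langle u,v\rangle_{s}=\iint_{\Omega\times\Omega}\frac{\bigl(u(x)-u(y)\bigr)\,\overline{\bigl(v(x)-v(y)\bigr)}}{|x-y|^{n+2s}}\,dx\,dy,</math>
used together with the L^2 inner product; for general p the corresponding W^{s,p} seminorm replaces the numerator by |u(x)−u(y)|^p and the exponent by n+sp.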
I wasn't sure what else to call it, and didn't want to paste it here. Anyway, this inner product can be used to define the fractional Sobolev spaces. The way the text is worded, however, this is only hinted at.
2. The Fourier series approach is clearly only relevant to the circle. It's weird that the author didn't do it for R as well, but if you scroll down the page, then the multidimensional section handles the case of fractional orders in R^n via the usual Fourier transform approach. Silly rabbit (talk) 17:11, 10 February 2008 (UTC)
- (Checked Adams) I was not aware that Adams discussed Besov spaces in his book. Their norm is given by the "whopper" integral above, and apparently they differ from Sobolev spaces only when s is a positive integer. Otherwise, they are identical to the Sobolev spaces defined by the interpolation method (which is how Adams gets them). Silly rabbit (talk) 17:18, 10 February 2008 (UTC)
- (1). Yup, that seems like Hoelder continuity to me (integral version, not p=oo). Looks good. In fact the text in the multi-dimensional section *already* says what I had wanted it to say about Hoelder continuity. I agree it only hints about the inner product when I would prefer it be bold. (2). Indeed it does do it in the multi-dimensional section, it just didn't have a nice big section header like in the circle case. I agree it is weird that Fourier series were used instead of integrals, but I hardly ever worked in one-dimension, so I might simply lack perspective (our program was all Folland style elliptic PDE and variants). (Adams). Just to make sure I understood your response, Adams worked out as a reference right? Not only have I not read the 21st century edition, I haven't read the old edition in the 21st century! I wouldn't be too surprised if my memory was flawed. If Adams didn't work out, I can go check on Monday. I wasn't terribly well-read back then, so it should be quick to find. JackSchmidt (talk) 18:01, 10 February 2008 (UTC)
- Yes, Adams is more than adequate. :) Silly rabbit (talk) 18:03, 10 February 2008 (UTC)
Proposal for structural change
I propose some changes to the general structure, so that it would look like the following:
- Introduction
- Sobolev spaces on real line (to change the circle to the real line, which is more intuitively accessible and traditional way of introducing Sobolev spaces)
- Multiple dimensions (existing section)
- Sobolev embedding (to move the existing section here, one has to limit to integer k, it can be explained without fuss of different "definition" of Sobolev spaces with fractional k)
- Sobolev spaces with non-integer k (to move, here we can explain the Bessel potential-complex interpolation approach and the Slobodeckij (or as you say Holder-like) norm-real interpolation approaches)
- Sobolev spaces on domains (New section: at this point it would be natural to introduce these spaces by restriction, and we can also mention intrinsic characterizations etc)
- Extension operators (Since we know about Sobolev spaces on domains, the question of extension will naturally arise)
- Sobolev spaces on manifolds (New section: before treating traces it is preferable to introduce Sobolev spaces on hypersurfaces, and as a generalization we can talk about Riemannian manifolds and even Lie groups)
- Traces (the existing section)
What do you guys think about this proposal? Temur (talk) 05:49, 12 February 2008 (UTC)
- I have started a sandbox at User:Silly_rabbit/Sobolev_space for restructuring the article. The only thing I have done so far is to move the unit circle business out to its own article User:Silly_rabbit/Sobolev_spaces_on_the_unit_circle. So, I guess I support the idea of restructuring. I agree with most of the points presented above. However, I don't agree that the one-variable theory deserves any special treatment here: any reader not mathematically sophisticated enough to deal with R^n has come to the wrong place. In fact, I support moving the unit circle material out to its own article since this is rather tangential to the main uses of Sobolev spaces (I called it "weird" above). I would also add:
- Besov spaces via the integral of the Holder-like quotients (see my comment above: this is what I meant when I said "Holder-like integral", not the complex interpolation).
- Cheers, Silly rabbit (talk) 14:55, 12 February 2008 (UTC)
- I disliked the current structure myself (with no clear reason). Please take a look at this rewrite attempt to see if there is anything useful there. (Igny (talk) 14:57, 12 February 2008 (UTC))
- I visited the rewrite at User:Silly_rabbit/Sobolev_space. I love it. Silly rabbit: I suggest that you publish it! TomyDuby (talk) 19:36, 25 July 2008 (UTC)
A nice idea to merge the one- and multidimensional sections. I completely agree. In fact I was hesitating on this because of the way the original article is written (introducing the 1D case first and so on). I don't know if the Besov space section should be separate or a subsection of the "non-integer k" section. Actually these are not exactly general Besov spaces; they are special in that p=q. These are also called Slobodeckij spaces and were introduced to fill the gap between integer order Sobolev spaces. These are even very often called Sobolev spaces (they are different for p!=2 from the Sobolev spaces via the Bessel potential). Let us together make this article better! Temur (talk) 23:43, 12 February 2008 (UTC)
I'm the original author of much of this article, and I don't mind an improved rewrite. One thing that I would like to see, if we can manage it, is some sort of way of explaining many of the ideas to a layperson. It's not easy, but it should be attempted. Loisel (talk) 04:31, 13 February 2008 (UTC)
- It's a nice article. Sorry if we seem very critical of it. Anyway, when I can get a solid block of time I'll float some edits in my sandbox. It won't be for awhile. Silly rabbit (talk) 04:20, 19 February 2008 (UTC)
- No, no, go ahead, rewrites are good. Loisel (talk) 18:17, 19 February 2008 (UTC)
ACL characterization of Sobolev functions
What is ACL?
TomyDuby (talk) 17:18, 18 October 2008 (UTC)
Absolutely Continuous on Lines. Thenub314 (talk) 18:08, 18 October 2008 (UTC)
- Thanks!
- TomyDuby (talk) 20:15, 18 October 2008 (UTC)
I don't understand how...
I don't understand how:
- a function can be in W^{1,p} but not continuous or even bounded (section before ACL; I agree with that)
- any function in W^{1,p} is absolutely continuous on every line parallel to the axes (ACL section; I might believe in almost every line).
Bdmy (talk) 20:32, 27 December 2008 (UTC)
- Yes, it should be almost every line. Moreover, it may also be necessary to modify the function on a set of measure zero first. siℓℓy rabbit (talk) 21:48, 27 December 2008 (UTC)
Wrong formula?
[edit]Section "Sobolev embedding": "(For p = ∞ the Sobolev space is defined to be the Hölder space Cn,α where k = n + α and 0 < α ≤ 1.)" A line before n is the dimension. But here probably n is not at all the dimension. --Boris Tsirelson (talk) 20:58, 10 January 2011 (UTC)
Also, what happens for negative k? The article on Holder spaces requires n≥0 (there denoted by k, not n). --Boris Tsirelson (talk) 21:04, 10 January 2011 (UTC)
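For comparison, the standard embedding into Hölder spaces is presumably what is meant (a sketch, with n the dimension, r a nonnegative integer and 0 < α < 1):
:<math>W^{k,p}(\mathbb{R}^n)\hookrightarrow C^{r,\alpha}(\mathbb{R}^n)\qquad\text{whenever}\qquad k-\tfrac{n}{p}=r+\alpha.</math>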
- I really miss the part about negative ks. It seems like it has been there before. --93.104.36.169 (talk) 01:01, 21 September 2011 (UTC)
Changes
We made some changes to the article. We are a team of people who have held and attended lectures on function spaces. We wanted to realize some of the suggestions in the discussion here. Here is a list of what we changed:
- the article has a new structure, roughly following the proposal for structural change above: no special treatment for unit ball/real line, double results (e.g. trace and extension sections) are eliminated, ...
- we moved the focus away from the p=2 case. Everything on Sobolev spaces (traces, interpolation, fractional order, approximation, …) presented in the original article works in the same way for 1 ≤ p ≤ ∞ or 1 < p < ∞ and does not depend on the Hilbert space structure. It is still mentioned that some things are special for p = 2.
- we changed/eliminated the part on interpolation. It is treated (link now exists) in a different article. We still state the connection. Several results in the original article were imprecise and incorrect (e.g. reiteration theorem for complex interpolation)
- we changed the sections on fractional order spaces, following the suggestions above. There were a few substantial mistakes and insufficient literature before. We hope that the general ideas behind these spaces become clear, but we would also appreciate more text and explanations for the layperson.
For the future, we would like to include negative order spaces and duality, as in the rewrite by Igny. Please comment and correct our version. Thank you! — Preceding unsigned comment added by FunctionspaceInvader (talk • contribs) 12:46, 17 March 2011 (UTC)
- I think you are doing a good job: the emphasis on Sobolev spaces on the unit circle was not useful as a simplification, since it does not have the wealth of applications that the multi-dimensional theory has. Also, on the unit circle, other more general function spaces are well understood and manageable, therefore rewriting the entry from the more traditional multi-dimensional point of view deserves praise. Daniele.tampieri (talk) 19:18, 3 April 2011 (UTC)
Theorem on approximation by smooth functions
Right now, there is a theorem cited from Adams, 1975 that says we have approximation by smooth functions up to the boundary if the boundary is Lipschitz continuous. However, Theorem 3 from section 5.3.3 of Evans's Partial Differential Equations requires the boundary to be C^1. Anyone know if the Adams result is correct? — Preceding unsigned comment added by 160.39.183.149 (talk) 01:29, 18 February 2013 (UTC)
Bessel potential spaces
The text of this section of the article seems to allow negative s in the definition of H^{s,p}, but in this case the elements are no longer functions in L^p, contrary to the definition displayed after mentioning real s. The corresponding definition in Bergh and Löfström, p. 141, replaces the condition f ∈ L^p by f being a tempered distribution. This must be clarified. Bdmy (talk) 21:46, 4 July 2013 (UTC)
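For reference, the definition being discussed is presumably of the form (a sketch)
:<math>H^{s,p}(\mathbb{R}^n)=\Bigl\{f\in\mathcal S'(\mathbb{R}^n)\;:\;\mathcal F^{-1}\bigl[(1+|\xi|^2)^{s/2}\,\mathcal F f\bigr]\in L^p(\mathbb{R}^n)\Bigr\},\qquad \|f\|_{H^{s,p}}=\bigl\|\mathcal F^{-1}\bigl[(1+|\xi|^2)^{s/2}\,\mathcal F f\bigr]\bigr\|_{L^p},</math>
with f a priori only a tempered distribution, as in Bergh and Löfström.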
Continuity question
A rather minor point, but does a function in W^{1,p}(ℝ) have to be continuous? I put in an example of a weak derivative, and purposely threw in a single point which makes a discontinuity there. But I see that User:Gadykozma wrote (back in 2004) that functions in W^{1,1}(ℝ) are absolutely continuous. My discontinuity doesn't affect the L^1 norm or the weak derivative. A more general question is whether the functions in a Sobolev space have to be defined everywhere or only almost everywhere. In other words, is the space actually a quotient space over "equivalence almost everywhere", like L^p space? Eric Kvaalen (talk) 11:16, 12 February 2014 (UTC)
- As far as I understand, a function in a Sobolev space is initially a special element of L^p, therefore, an equivalence class. (Therefore, no values at points.) For some values of the parameters (k,p,n) an embedding theorem guarantees existence of a continuous function within such equivalence class; then, naturally, other elements of this equivalence class are of no interest. Accordingly, it is usual to say that "functions in W^{1,1}(ℝ) are absolutely continuous" etc. And your "u(x)=10 if x=0" is of little relevance to Sobolev spaces; no one bothers about u(0) when defining an equivalence class. Boris Tsirelson (talk) 11:33, 12 February 2014 (UTC)
- Thanks. Yes, I know that my "u(x)=10 if x=0" is of little relevance to Sobolev spaces. I just thought I'd throw that in to make the point that it doesn't matter! I think I will put in a little qualifier in the sentence about functions in W^{1,1}(ℝ) being absolutely continuous. Eric Kvaalen (talk) 18:01, 12 February 2014 (UTC)
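A hypothetical example of the kind being discussed (not necessarily the one added to the article): on (−1,1), take
:<math>u(x)=|x|\ \ (x\neq 0),\qquad u(0)=10,\qquad u'=\operatorname{sgn}\ \text{(weak derivative)}.</math>
Changing u at the single point x = 0 changes neither the L^1 class, nor the weak derivative, nor the element of W^{1,1}((−1,1)); the absolutely continuous representative is |x|.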
smooth up to the boundary?
[edit]In section "Approximation by smooth functions" I see the notion "smooth up to the boundary"; I wonder, what is its definition? Boris Tsirelson (talk) 09:52, 27 January 2015 (UTC)
- It means that the function and all derivatives extend continuously to the closure. Sławomir Biały (talk) 13:16, 14 March 2015 (UTC)
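In the usual notation this is presumably the space
:<math>C^\infty(\overline\Omega)=\{u\in C^\infty(\Omega)\;:\;D^\alpha u\ \text{extends continuously to}\ \overline\Omega\ \text{for every multi-index}\ \alpha\}.</math>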
- Thank you. It would be nice to write so in the article (or make a link). Boris Tsirelson (talk) 07:04, 15 March 2015 (UTC)
- But I also bother about a non-regular open set (for instance, an open ball minus a closed disk). In this article (section), the domain has Lipschitz boundary, therefore regular. Is the notion "smooth up to the boundary" defined for such domains only? Boris Tsirelson (talk) 07:20, 15 March 2015 (UTC)
- I think in this setting, Lipschitz means that in a neighborhood of every point of the boundary, the open set is the undergraph of a Lipschitz function. This meaning of "Lipschitz boundary" is perhaps somewhat specific to this area of analysis and PDEs. It rules out pathologies like the one you are describing. Sławomir Biały (talk) 21:25, 15 March 2015 (UTC)
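In symbols, the local condition is presumably (after relabeling and rotating coordinates near a boundary point, with U a small neighbourhood of that point):
:<math>\Omega\cap U=\{x=(x',x_n)\in U\;:\;x_n<\gamma(x')\}\quad\text{for some Lipschitz}\ \gamma:\mathbb{R}^{n-1}\to\mathbb{R}.</math>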
- Yes. This is why I wrote "the domain has Lipschitz boundary, therefore regular". But I am not sure that my example is a pathology. Rather, different limits on the two sides of the disk could be allowed. Maybe this is known as "slit domain" in complex analysis. In fact, faced such situation, I hoped to find an appropriate definition, but was unhappy, not finding it. Well, for Wikipedia, my question is, what is the widely accepted definition (if any), including the indication of the class of domains on which the definition applies. Boris Tsirelson (talk) 21:46, 15 March 2015 (UTC)
- Sorry, I parsed what you wrote incorrectly. There are cone conditions (Wikipedia article missing) that allow domains like the kind you describe, with the interior lying on two sides of some closed subset. Sławomir Biały (talk) 22:22, 15 March 2015 (UTC)
- Interesting. But, has the notion "smooth up to the boundary" a counterpart for such domains? Boris Tsirelson (talk) 06:09, 16 March 2015 (UTC)
- Something like smooth functions in Ω whose derivatives are bounded of all orders is a notion that is available with no conditions on Ω. (This is the intersection of the Sobolev spaces for all k.) Sławomir Biały (talk) 12:35, 16 March 2015 (UTC)
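In symbols, presumably
:<math>\bigcap_{k\ge 0}W^{k,\infty}(\Omega)=\{u\in C^\infty(\Omega)\;:\;D^\alpha u\in L^\infty(\Omega)\ \text{for all multi-indices}\ \alpha\}.</math>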
- Wow! Looks to be a clever idea. I am afraid, brand new... Boris Tsirelson (talk) 17:52, 16 March 2015 (UTC)