-----------------------------------------------------------------------------------
Post ID:17176
Sender:Juergen Brendel <juergen.brendel@...>
Post Date/Time:2011-01-03 20:42:50
Subject:Re: [rest-discuss] Combining HTML and XML?
Message:
Hello!

On Wed, 2010-12-29 at 19:26 -0700, Eric J. Bowman wrote:
> I think most folks are still making this too hard on themselves, and
> others. I'm disappointed by how often what appears to be
> hypertext-driven really depends on magical out-of-band processing
> rules being switched on when encountering nonstandardized strings in
> Content-Type or @rel, which isn't actually what hypertext as the
> engine of state means in the context of REST's uniform (standardized)
> interface.

Yes, I did notice that: You try to design HATEOAS into your API and
before you know it you start to invent your own @rel tags, since that
seems to be so easy and expressive. I understand your concern about
this. I have to admit that I have done the custom-rel thing myself,
since I found the existing lists of available definitions
unsatisfactory. Clearly, I'm doing something wrong?

I guess my first question would be: Where can I find the definitive
list of defined 'rel' values? I found different ones, which don't all
seem to agree on what's really defined. Or is the 'rel' value
something defined in the media type definition?

Juergen

--
Juergen Brendel
MuleSoft
On Mon, Jan 3, 2011 at 3:42 PM, Juergen Brendel
<juergen.brendel@...> wrote:
>
> I guess my first question would be: Where can I find the definitive
> list of defined 'rel' values?

I use http://wiki.whatwg.org/wiki/RelExtensions as a reference.

I agree with you - they are not very expressive of what I often want;
they are about generic maneuvering through collections, or apparently,
through social networks.

-Randy Fischer
I currently use @rels for application-level semantics. IOW, they don't
indicate any protocol details (HTTP methods, content-types, etc.).

I currently implement this app-level support using @rel in two
different ways.

*** app-level semantics are "native" to a custom media-type.
In this case, the rel values are "baked" into the media type support
(e.g. HTML does this w/ rel="stylesheet"). I design a very narrow
media type that is targeted for a collection of related work and use
@rel to hold app-level details that can be used by client apps to
perform their own processing. This works well for my current round of
"m2m-style" clients.

*** app-level semantics are "adjunct" to a generic media type.
In this case, the rel values are documented "out-of-band" (e.g. within
the generic media type[1], IANA Link Relations[2], RelExtensions[3],
ParamsRUs[4], or my own implementation-specific documentation). Then
code-on-demand is sent to the client to help the client "sort out" the
meaning and use of these rel values. This is done on a case-by-case
basis; it offers the most flexibility, but is non-standard for each
implementation (including when supporting more than one client type:
browser, command-line, desktop, etc.).

[1] http://www.w3.org/TR/html4/types.html#type-links
[2] http://www.iana.org/assignments/link-relations/link-relations.xhtml
[3] http://wiki.whatwg.org/wiki/RelExtensions
[4] http://paramsr.us/link-relation-types/

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010
http://rest-fest.googlecode.com

On Mon, Jan 3, 2011 at 16:11, Randy Fischer <fischer@...> wrote:
>
> On Mon, Jan 3, 2011 at 3:42 PM, Juergen Brendel
> <juergen.brendel@...> wrote:
>>
>> I guess my first question would be: Where can I find the definitive
>> list of defined 'rel' values?
>
> I use http://wiki.whatwg.org/wiki/RelExtensions as a reference.
>
> I agree with you - they are not very expressive of what I often want;
> they are about generic maneuvering through collections, or
> apparently, through social networks.
>
> -Randy Fischer
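A minimal sketch of the first ("native") approach described above, where a client's rel handlers are baked in for one narrow media type. This is illustrative only; the rel value and handler names are hypothetical, not part of any registered type.

```python
# Sketch: "native" app-level semantics -- the client has built-in
# handlers for the rel values of one narrow, custom media type.
# The rel value "customer" and its handler are illustrative assumptions.
HANDLERS = {}

def rel_handler(rel):
    """Register a processing routine for links carrying this rel value."""
    def register(fn):
        HANDLERS[rel] = fn
        return fn
    return register

@rel_handler("customer")
def handle_customer(link):
    # A real client would dereference the link; here we just record it.
    return ("follow-customer", link["href"])

def process(links):
    """Apply the baked-in handler for each recognized rel; skip the rest."""
    results = []
    for link in links:
        handler = HANDLERS.get(link.get("rel"))
        if handler:
            results.append(handler(link))
    return results
```

The point of the sketch is that unrecognized rel values are simply ignored, so the media type can grow new rels without breaking old clients.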
Hello!
On Mon, 2011-01-03 at 16:27 -0500, mike amundsen wrote:
> I currently use @rels for application-level semantics. IOW, they don't
> indicate any protocol details (HTTP methods, content-types, etc.)
Right, that was actually one of my other questions: Often it seems that
it would be nice to indicate in the link itself what media type and HTTP
method is supported. Something like:
"links" : [ {
"href" : "/foo/bar",
"rel" : "self",
"method" : "GET"
},
{
"href" : "/foo/bar",
"rel" : "edit",
"method" : "PUT",
"media-type" : "application/something"
}
]
Or something thereabouts (this was JSON-like, but that shouldn't
distract). I admit that if your media type is specified/standardized
somewhere then most of this information isn't required right here at
this point, since the method and media type (for POST/PUT) could be
defined there. On the other hand, I would actually love to see this sort
of information in the control links of a RESTful API: it means I don't
have to jump back and forth between the type spec and what the API
actually presents to me; instead I can just look right here and see
(almost) everything I need.
It seems an API that you can just look at, without having to read much
of a spec, is more user-friendly and - all else being equal - should see
better/easier adoption.
It also allows you to evolve your API independently of the exact media
type definition: as you add more control links over time, they can
simply be presented, carrying all the information clients need to use
them.
For example, the 'image' property of a personal profile or bio could
advertise (in the link description) the image media types it can accept.
More can be added over time, without needing to update the media type
description.
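As a sketch of how a client might consume such self-describing link objects (like the JSON example above), using only Python's standard library; the base URL and link values are hypothetical:

```python
# Sketch: a client that acts on self-describing link objects, without
# consulting an out-of-band spec for method or media type.
# The URLs and the "application/something" type are illustrative.
from urllib.parse import urljoin

def find_link(links, rel):
    """Return the first link object whose 'rel' matches, or None."""
    return next((l for l in links if l.get("rel") == rel), None)

def build_request(base, link, body=None):
    """Turn a link object into (method, url, headers) for an HTTP client."""
    method = link.get("method", "GET")          # default to safe retrieval
    url = urljoin(base, link["href"])
    headers = {}
    if body is not None and "media-type" in link:
        headers["Content-Type"] = link["media-type"]
    return method, url, headers

links = [
    {"href": "/foo/bar", "rel": "self", "method": "GET"},
    {"href": "/foo/bar", "rel": "edit", "method": "PUT",
     "media-type": "application/something"},
]

edit = find_link(links, "edit")
method, url, headers = build_request("http://api.example.org/", edit,
                                     body=b"{}")
```

Everything the client needs to construct the request comes from the representation itself, which is the property being argued for here.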
>
> I currently implement this app-level support using @rel in two
> different ways.
> *** app-level semantics are "native" to a custom media-type.
> In this case, the rel values are "baked" into the media type support
> (e.g. HTML does this w/ rel="stylesheet"). I design a very narrow
> media type that is targeted for a collection of related work and use
> @rel to hold app-level details that can be used by client apps to
> perform their own processing. This works well for my current round of
> "m2m-style" clients.
Hm. But then you are designing application-specific media types, which
we are supposed to avoid now, aren't you?
> *** app-level semantics are "adjunct" to a generic media type.
> In this case, the rel values are documented "out-of-band" (e.g. within
> the generic media type[1], IANA Link Relations[2], RelExtensions[3],
> ParamsRUs [4], or my own implementation-specific documentation). Then
> code-on-demand is sent to the client to help the client "sort out" the
> meaning and use of these rel values. This is done on a case-by-case
> basis; offers the most flexibility, but is non-standard for each
> implementation (including supporting more than one client type
> (browser, command-line, desktop, etc.).
>
I also think it's useful to design a very generic type (such as XML with
links or JSON with links) along with a decent set of generally useful
rel values. Once that's done, this might be enough for some applications
already, while other apps can define their custom media types merely by
defining additional 'rel' values for that generic type, rather than
creating a brand new type from scratch. Maybe that's what this whole
discussion about micro-types is about.
Over time, I can imagine that a set of 'non generic' rel values would
emerge as generally useful and could thus be added to the media type
spec which as a result could evolve.
Juergen
--
Juergen Brendel
MuleSoft
On Mon, Jan 3, 2011 at 2:27 PM, mike amundsen <mamund@...> wrote:
>
> *** app-level semantics are "native" to a custom media-type.
> In this case, the rel values are "baked" into the media type support
> (e.g. HTML does this w/ rel="stylesheet"). I design a very narrow
> media type that is targeted for a collection of related work and use
> @rel to hold app-level details that can be used by client apps to
> perform their own processing. This works well for my current round of
> "m2m-style" clients.
>
> *** app-level semantics are "adjunct" to a generic media type.
> In this case, the rel values are documented "out-of-band" (e.g.
> within the generic media type[1], IANA Link Relations[2],
> RelExtensions[3], ParamsRUs[4], or my own implementation-specific
> documentation). Then code-on-demand is sent to the client to help the
> client "sort out" the meaning and use of these rel values. This is
> done on a case-by-case basis; it offers the most flexibility, but is
> non-standard for each implementation (including when supporting more
> than one client type: browser, command-line, desktop, etc.).

I think Mike brings up the issue that concerns me the most with using
`application/html` with custom @rels and RDFa. Those approaches require
significant amounts of out-of-band knowledge to be useful in many, if
not most, m2m scenarios.

In read-write scenarios the issues are especially obvious. For example,
there is currently no way to annotate forms and form inputs in a
meaningful way. An automaton that needs to perform POSTs or PUTs ends
up relying almost entirely on out-of-band knowledge. The same problem
exists for links when using the @rel approach. Once you get beyond the
very limited scope of publicly defined relation types (and arguably
even before then) you are back in out-of-band knowledge territory.

Automata need a good deal more hand-holding than humans do in order to
achieve their goals. A generic media type that provides the required
capabilities does not seem to exist.
All the generic media type approaches I have seen rely heavily on
out-of-band knowledge. A generic media type for RESTful m2m scenarios
might be possible, but it does not seem to exist today. (Pointers
welcome if I am wrong.)

Today, custom media types are the best bet for providing robust,
evolvable, scalable and easy-to-understand (and implement) HTTP
interfaces for m2m-style interactions (even on the public Internet).

Peter
barelyenough.org
Juergen:
<snip>
> "links" : [ {
> "href" : "/foo/bar",
> "rel" : "self",
> "method" : "GET"
> },
> {
> "href" : "/foo/bar",
> "rel" : "edit",
> "method" : "PUT",
> "media-type" : "application/something"
> }
> ]
</snip>
This is an example of what I avoid. I do not encode protocol details
(method/media type) in my media types themselves or in the
representations made w/ these types. I know that HTML does this w/
FORM[1] elements (@method & @enctype). There are times when I may do
this, but it is rarely needed for the work I am doing. One reason I
keep protocol details out of the [representation|media types] is that
I favor media types that are "protocol-agnostic." IOW, it should be
possible to use HTTP, FTP, XMPP, etc. with the same media type and
accomplish the same tasks.
As for the value of including a media-type string for a link within
the representation (this has been discussed here several times, I
can't find any links right now), I find this practice needlessly
"locks" the client into expecting the same representation format for a
link. I note that HTML currently has @type[2] as a way to give clients
a _hint_ on what media type might be at the other end of the link.
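A rough sketch of what "protocol-agnostic" could look like in practice: the representation names an abstract action, and each protocol binding maps it to a concrete operation. The mapping table below is an assumption made for illustration, not part of any spec:

```python
# Sketch: a protocol-agnostic media type names abstract actions
# ("write", "update", ...); each protocol binding resolves them to its
# own operations. These particular mappings are illustrative assumptions
# (e.g. FTP STOR stores at a client-given name, STOU lets the server
# pick a unique name).
PROTOCOL_BINDINGS = {
    "http": {"read": "GET", "write": "POST", "update": "PUT",
             "remove": "DELETE"},
    "ftp":  {"read": "RETR", "write": "STOU", "update": "STOR",
             "remove": "DELE"},
}

def bind(action, scheme):
    """Resolve an abstract action to the concrete operation for a protocol."""
    try:
        return PROTOCOL_BINDINGS[scheme][action]
    except KeyError:
        raise ValueError(f"no {scheme!r} binding for action {action!r}")
```

With this separation, the same representation can be acted on over HTTP, FTP, or any other bound protocol, which is the design goal stated above.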
<snip>
> Hm. But then you design application specific media types, which we are
> supposed to avoid now, don't you?
</snip>
I've never bought into the notion that app-specific media types are
bad. That is my opinion and nothing more. I can only say that, for the
use cases I have encountered, [specific|custom] media types work quite
well and I have no immediate plans to stop [using|creating] them as I
find the need.
<snip>
> I also think it's useful to design a very generic type (such as XML with
> links or JSON with links) along with a decent set of generally useful
> rel values. Once that's done, this might be enough for some applications
> already, while other apps can define their custom media types merely by
> defining additional 'rel' values for that generic type, rather than a
> brand new type from scratch? Maybe that's what this whole discussion
> about micro-types is about.
</snip>
In my current work, I use XHTML (parsed as XML for some clients) to
handle the protocol details (H Factors[3] is what I call these) and
@rel values for app-level details. I've been experimenting w/ encoding
the details of @rels in a secondary document (using the HTML
@profile[3] pattern) and engineering clients to "consume" these
profile documents, parse them, and "apply" the results to the generic
media type. This is a mimic of "CSS" but for app-level semantics. I've
built some trivial clients that are capable of applying different
app-level profiles to the same generic media type and accomplishing
the desired tasks.
I've only dabbled in this space and have nothing meaningful to show
right now. I am hopeful that I'll make progress on this in 2011.
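A toy sketch of this CSS-like profile idea: the representation stays generic, and a separately obtained profile document (reduced here to a dict) tells the client what each rel means. The profile URIs and semantics below are invented for illustration:

```python
# Sketch: apply an app-level "profile" to a generic representation,
# the way CSS applies styling to generic HTML. The profile contents
# and the urn:example:* rel names are hypothetical.
import json

PROFILE = {
    "urn:example:order":    {"action": "update", "doc": "an order resource"},
    "urn:example:customer": {"action": "read",   "doc": "a customer resource"},
}

def apply_profile(representation, profile):
    """Annotate each link in a generic representation with profile semantics."""
    annotated = []
    for link in representation.get("links", []):
        semantics = profile.get(link.get("rel"),
                                {"action": None, "doc": "unknown"})
        annotated.append({**link, **semantics})
    return annotated

doc = json.loads('{"links": [{"rel": "urn:example:order", "href": "/o/9"}]}')
result = apply_profile(doc, PROFILE)
```

Swapping in a different profile changes what the client does with the same generic document, without touching the media type itself.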
<snip>
> Over time, I can imagine that a set of 'non generic' rel values would
> emerge as generally useful and could thus be added to the media type
> spec which as a result could evolve.
</snip>
The Web Linking RFC[4] has this as one of its aims (results?), and the
ParamsRUs site I referenced earlier is part of that effort.
[1] http://www.w3.org/TR/html401/interact/forms.html#h-17.3
[2] http://www.w3.org/TR/html4/struct/links.html#adef-type-A
[3] http://gmpg.org/xmdp/
[4] http://tools.ietf.org/html/rfc5988
mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010
http://rest-fest.googlecode.com
On Mon, Jan 3, 2011 at 18:18, Juergen Brendel
<juergen.brendel@...> wrote:
> <snip>
Hello!
On Mon, 2011-01-03 at 19:50 -0500, mike amundsen wrote:
> <snip>
> > "links" : [ {
> > "href" : "/foo/bar",
> > "rel" : "self",
> > "method" : "GET"
> > },
> > {
> > "href" : "/foo/bar",
> > "rel" : "edit",
> > "method" : "PUT",
> > "media-type" : "application/something"
> > }
> > ]
> </snip>
> This is an example of what I avoid. I do not encode protocol details
> (method/media type) in my media types themselves or in the
> representations made w/ these types. I know that HTML does this w/
> FORM[1] elements (@method & @enc-type). There are times when I may do
> this, but it is rarely needed for the work I am doing. One reason I
> keep protocol details out of the [representation|media types] is that
> I favor media types that are "protocol-agnostic." IOW, it should be
> possible to use HTTP, FTP, XMPP, etc. with the same media type and
> accomplish the same tasks.
Really? I mean, is that realistic for a lot of cases? If you really need
to support multiple protocols then sure, you don't want that sort of
thing here and you want an agnostic type. But I'm just trying to
remember what the equivalent of PUT vs. POST would be for FTP, or the
'location' HTTP response header, just as an example. Actually, it's not
so much about 'PUT' vs. 'POST', but about update-existing (or
create-at-location) vs.
create-in-collection-and-let-server-determine-location.
It's nice if it all works with a multitude of protocols, but aren't we
trying to take advantage of the built-in capabilities of a particular
protocol, in this case the most commonly used protocol on the Internet
(HTTP) in our quest for scalability and ease of use? And 'in the real
world' (I'm actually mostly thinking about public APIs, so admittedly
it's more 'my real world'), I would have thought that non-HTTP is not
much of a concern in this context. Not saying YOU should only concern
yourself with HTTP, clearly you'll have your reasons why you need those
other ones.
Nevertheless, if you take 'method' out then you could just say that
"rel=self" implies GET (or equivalent), while "rel=edit" implies PUT (or
equivalent). If that's defined out of band in the media type definition
then I guess you really don't need it here.
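That convention could be sketched like this; the per-rel defaults below stand in for an out-of-band definition (e.g. in a media type spec) and are assumptions for illustration:

```python
# Sketch: "rel implies method" -- the media type spec (out of band)
# fixes a default method per rel, so links need not carry one.
# This particular mapping is an assumed, illustrative definition.
REL_DEFAULT_METHOD = {"self": "GET", "edit": "PUT", "create": "POST"}

def method_for(link):
    """Use an explicit 'method' if present, else the rel's documented default."""
    if "method" in link:
        return link["method"]
    return REL_DEFAULT_METHOD.get(link.get("rel"), "GET")
```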
> As for the value of including a media-type string for a link within
> the representation (this has been discussed here several times, I
> can't find any links right now), I find this practice needlessly
> "locks" the client into expecting the same representation format for a
> link. I note that HTML currently has @type[2] as a way to give clients
> a _hint_ on what media type might be at the other end of the link.
Not sure I understand. Are you talking about a GET request having its
media type apparently implied based on the indicated content type for a
PUT request? If a client would see it that way, then they have certainly
taken it too far, since the type was specific only to the 'edit' link.
The 'self' link could possibly have its own media type indication.
Realistically, it would be a list of types in each case, because of
conneg.
> I've never bought into the notion that app-specific media types are
> bad. That is my opinion and nothing more. I can only say that, for the
> use cases I have encountered, [specific|custom] media types work quite
> well and I have no immediate plans to stop [using|creating] them as I
> find the need.
Yes, and I would agree with you. It just seems that there are very
strong opinions on both sides of this issue.
> <snip>
> > I also think it's useful to design a very generic type (such as XML with
> > links or JSON with links) along with a decent set of generally useful
> > rel values. Once that's done, this might be enough for some applications
> > already, while other apps can define their custom media types merely by
> > defining additional 'rel' values for that generic type, rather than a
> > brand new type from scratch? Maybe that's what this whole discussion
> > about micro-types is about.
> </snip>
> In my current work, I use XHTML (parsed as XML for some clients) to
> handle the protocol details (H Factors[3] is what I call these) and
> @rel values for app-level details. I've been experimenting w/ encoding
> the details of @rels in a secondary document (using the HTML
> @profile[3] pattern) and engineering clients to "consume" these
> profile documents, parse them, and "apply" the results to the generic
> media type. This is a mimic of "CSS" but for app-level semantics. I've
> built some trivial clients that are capable of applying different
> app-level profiles to the same generic media type and accomplishing
> the desired tasks.
Similar to this one here?
http://iansrobinson.com/2010/09/02/using-typed-links-to-forms/
I find this approach interesting as well.
Juergen
--
Juergen Brendel
MuleSoft
Juergen Brendel wrote:
>
> Yes, I did notice that: You try to design HATEOAS into your API and
> before you know it you start to invent your own @rel tags, since that
> seems to be so easy and expressive. I understand your concern about
> this. I have to admit that I have done the custom-rel thing myself,
> since I found the existing lists of available definitions
> unsatisfactory.
>

My problem isn't with inventing new link relations. My problem is with
magical link relations which, when encountered, signify processing
behavior. A relation like rel='edit' is standardized (in this case, as
part of Atom Protocol), so it means the same thing across all data
types. If, in your custom data type, it means something that link
relations don't mean in any other data type, you're not doing REST.

-Eric
Juergen:
<snip>
> Really? I mean, is that realistic for a lot of cases? If you really need
> to support multiple protocols then sure, you don't want that sort of
> thing here and you want an agnostic type.
</snip>
No, it's not "realistic for a lot of cases." I am not trying to solve
"a lot" of cases, just the ones set before me.
<snip>
> But I'm just trying to
> remember what the equivalent of PUT vs. POST would be for FTP, or the
> 'location' HTTP response header, just as an example. Actually, it's not
> so much about 'PUT' vs. 'POST', but about update-existing (or
> create-at-location) vs.
> create-in-collection-and-let-server-determine-location.
</snip>
HTTP PUT = FTP STOR
HTTP POST = FTP STOU[1]
<snip>
> It's nice if it all works with a multitude of protocols, but aren't we
> trying to take advantage of the built-in capabilities of a particular
> protocol, in this case the most commonly used protocol on the Internet
> (HTTP) in our quest for scalability and ease of use? And 'in the real
> world' (I'm actually mostly thinking about public APIs, so admittedly
> it's more 'my real world'), I would have thought that non-HTTP is not
> much of a concern in this context. Not saying YOU should only concern
> yourself with HTTP, clearly you'll have your reasons why you need those
> other ones.
</snip>
if "we" means you (Juergen) and me (Mike) and you mean _all the time_,
then "No."
I have a number of customers who prefer (some require) FTP for data
transfer. And not always first-level representations (i.e. _file_
transfer), but also for representation transfers. I understand that
this may not be common; some may find it objectionable, etc. As for
XMPP, I'm not the dev on that item, so I can only say that some
requests we handle are over XMPP streams[2] and it helps when we can
use the same XML representation formats within the XMPP streams as we
do for XHTML, custom XML-based media types, etc.
Again, I don't claim this is "the norm" or should be. I'm just saying
that it works for what I do and that I think it has value since it
solves the problems before me.
<snip>
> Nevertheless, if you take 'method' out then you could just say that
> "rel=self" implies GET (or equivalent), while "rel=edit" implies PUT (or
> equivalent). If that's defined out of band in the media type definition
> then I guess you really don't need it here.
</snip>
Again, I rarely use @rel to signal protocol semantics (edit, update,
delete, etc.). Instead, in my own designs I usually use _elements_
(not attributes) for this purpose. And, as I've mentioned earlier, I
tend to use general terms that do not leak protocol specifics:
<write href="..." />
<search href="..." />
<remove href="..." />
In the past I have used a variation that smells quite a bit like @rel
to signal protocol semantics:
<send action="write|update|remove|search|..." href="..." />
In both cases this representation model allows for an _additional_
@rel to indicate app-level semantics:
<write href="..." rel="customer" />
<send action="update" rel="order" />
and so on...
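A sketch of how a client might parse such element-based controls with a standard XML parser, treating the element name (or @action) as the protocol-agnostic action and @rel as the app-level semantics. The markup below is the hypothetical example from this message, not a registered media type:

```python
# Sketch: extract (action, href, rel) from element-based hypermedia
# controls. The <controls> wrapper and the sample markup are assumptions
# based on the illustrative elements shown in the message above.
import xml.etree.ElementTree as ET

def extract_controls(xml_text):
    """Yield (action, href, rel) triples from element-based controls."""
    root = ET.fromstring(xml_text)
    for el in root:
        if el.tag == "send":                 # <send action="..."/> variant
            yield (el.get("action"), el.get("href"), el.get("rel"))
        else:                                # element name IS the action
            yield (el.tag, el.get("href"), el.get("rel"))

doc = """<controls>
  <write href="/customers" rel="customer"/>
  <send action="update" href="/orders/9" rel="order"/>
</controls>"""

controls = list(extract_controls(doc))
```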
When I use XHTML for representations, I don't override any of the
protocol semantics of existing elements (A, LINK, IMG, FORM, IFRAME,
etc.). I _do_ however add app-level semantics to these elements using
@rel
<form method="post" action="..." rel="customer">
Finally, I suspect I've hijacked the thread here. Feel free to start a
new thread if you want to continue this or ping me offline.
Thanks.
[1] http://www.faqs.org/rfcs/rfc959.html (anyone have a better link?)
[2] http://xmpp.org/rfcs/rfc3920.html
mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010
http://rest-fest.googlecode.com
On Mon, Jan 3, 2011 at 20:19, Juergen Brendel
<juergen.brendel@...> wrote:
> <snip>
Juergen Brendel wrote:
>
> Similar to this one here?
>
> http://iansrobinson.com/2010/09/02/using-typed-links-to-forms/
>
> I find this approach interesting as well.
>

Interesting, fine, but it doesn't begin to be the same style as REST.
In this custom data type, apparently link relations have quite a
different meaning than they do in any other data type. What method to
use on what URI of interest is expressed in-band via hypertext, not
out-of-band via magical processing rules for link relations unique to
the custom data type -- not in REST, anyway.

-Eric
mike amundsen wrote: > > I've never bought into the notion that app-specific media types are > bad. That is my opinion and nothing more. I can only say that, for the > use cases I have encountered, [specific|custom] media types work quite > well and I have no immediate plans to stop [using|creating] them as I > find the need. > Saying something is or isn't REST is not a value judgment. If your needs aren't served by REST, that's fine and I'm not one to override your opinion there. But REST clearly and repeatedly states that it's a style based on constraining data elements to a limited set of standardized types. Nobody has explained to me how it's following REST to ignore this... "The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs." ...in favor of developing optimized data types on a per-application basis, except to state that it's only my "opinion" that REST indeed says exactly what I keep saying it says. That paragraph does NOT read, "There is no trade-off, since the uniform interface is optimized for transferring data in application-specific forms." I still fail to see how both diametrically-opposed approaches can possibly be based on the same thesis. Roy's examples have never been based on the creation of custom data/media types. If REST were worded in a completely different way than it is, and Roy's examples always (instead of never) started with the creation of custom types, then I'd be wrong. But, REST and Roy are both quite clear that the style is NOT about creating custom media/data types, but is in fact the exact opposite of that approach. Following REST means taking the thesis literally, in the dozen places where it calls for STANDARDIZED types. It has yet to be explained to me how it's possible that using custom types is the same style, or even remotely related to Roy's thesis. 
I didn't define REST, I'm just stating the obvious, which is so carefully spelled out as to leave no room to claim that this is only "my opinion". Seriously, what's the argument in favor of calling customized types the same style as REST, again? As I recall, the argument was a bunch of ad- hominems thrown my direction backed up by pointing to O'Reilly books as normative reference. I refer again to the thesis, and ask where it says anything about data types *without* emphasizing standardization in the same breath. -Eric
Peter Williams wrote: > > Today, custom media types are the best bet for providing robust, > evolvable, scalable and easy to understand (and implement) http > interfaces for m2m style interactions (even on the public internet). > Disagree. As with RDF and AJAX, I don't believe the evolving requirements of what we want to do with the Web justify abandoning architectural fundamentals. REST is a uniform interface, not a unique interface. The benefits you cite are, in REST, described as the result of a network-based uniform interface. What is the rationale for claiming that the same benefits result from library-based unique interfaces? REST says that any efficiencies gained from a library-based unique interface come at the expense of a whole slew of desirable characteristics which *only* result when such optimizations are traded off in favor of a network-based uniform interface. I don't see how avoiding that fundamental tradeoff is the same as the REST style, which is based on making that specific tradeoff. It's like saying REST is in error, and that by relaxing the constraint of standardized data types the result is the holy grail of architectural styles where all the benefits of REST apply with none of the tradeoffs -- that's a thesis I haven't read. I have no problem with the notion that suitable generic m2m types don't exist (my advocacy of RDFa + HTML *is* an opinion, but one which fits exactly with REST in the here-and-now). My problem is that the work being done isn't going towards the creation of such solutions, opting instead to mint application-specific types and call it the same approach as REST (when the reality is that it's the difference between being network-based and library-based, only one of which is uniform). 
If your RESTful system is based on new generic types intended for standardization, as an example of how those generic types are used to instantiate an application-specific solution, then your system exemplifies the extension of the uniform interface to encompass evolving concerns. But that isn't what I see out there, which are unique interfaces optimized for the application at hand -- abandoning the uniform interface because it has yet to be extended to encompass these concerns. Which is a different architectural style altogether, where data elements aren't constrained to be standardized -- which is the fundamental requirement of REST, the benefits of which aren't expected to apply otherwise. -Eric
Nathan wrote: > > >> I'll probably black-list myself from the rest community with this > >> next comment, but can this whole culture of run to a custom media > >> type whenever things get tricky and label it as RESTful and "a > >> good thing" culture please just stop, it's a new year ahead, it's > >> painfully obvious that it /doesn't/ work (in reality) > > > +1, REST opposes application-specific media/data type proliferation; it *is* painfully obvious that neatly excising "standardized" everywhere it prefaces "data types" or "media types" changes the meaning of the thesis to a point where you've described some other architectural style. > > Jan Algermissen wrote: > > If we want to build user agents that perform automatic requests > > (such as downloading the image referenced by an HTML <img> tag) we > > simply *need* hypermedia semantics that express the specific > > relationship driving the user agent code. > > > > There are only two options for this: extending existing types or > > using link relations *or* defining a specific media type (such as > > HTML is a specific type for enabling what we want browsers to do). > > There's a third option, design a new generic data media type which > has a core set of hypermedia semantics which can be applied to > properties. > > If there was ever a group of people who could do it (provided you get > the correct data model) it's you guys, and you, and we, all need it... > > Surely that third option would be far more beneficial to concentrate > than all of these old far-from-perfect approaches? I'd certainly > contribute to any such effort wherever I could.. > +1. Mo'bettah to have 1,000 folks work together to standardize new generic types which address general deficiencies from the m2m perspective, than to let 1,000 new custom types bloom. The latter clearly goes against the whole point of REST, which is all about the benefits of the former (network-based shared understanding). -Eric
On Jan 3, 2011, at 6:22 PM, Eric J. Bowman wrote: > I still fail to see how both diametrically-opposed approaches can > possibly be based on the same thesis. Roy's examples have never been > based on the creation of custom data/media types. If REST were worded > in a completely different way than it is, and Roy's examples always > (instead of never) started with the creation of custom types, then I'd > be wrong. But, REST and Roy are both quite clear that the style is NOT > about creating custom media/data types, but is in fact the exact > opposite of that approach. Do keep in mind that, in order to have an *evolving* set of standard media types (and likewise standard link relations), there will have to be new media types being created and old media types fading away over time. One person's experiment may well become another person's standard, in the long run, and REST is very much about the long run. So we should consider new media types in terms of their long-term intent rather than their immediate status. Likewise, standards are a byproduct of authority. WalMart, for example, is fully capable of defining standards within its own logistical network, so there are ecosystems in which the architectural style being used can be REST even though the standards aren't the same as those on the open Internet. ....Roy
"Roy T. Fielding" wrote: > > Do keep in mind that, in order to have an *evolving* set of standard > media types (and likewise standard link relations), there will have > to be new media types being created and old media types fading away > over time. > Nobody's really disputing that, the nature of the debate centers around how orderly that process should be. > > One person's experiment may well become another person's standard, > in the long run, and REST is very much about the long run. So we > should consider new media types in terms of their long-term intent > rather than their immediate status. > Which is the heart of the debate -- creating application-specific types in response to the needs of the moment is short-sighted, yet this is done as a matter of course, and justified in the name of REST. > > Likewise, standards are a byproduct of authority. WalMart, for > example, is fully capable of defining standards within its own > logistical network, so there are ecosystems in which the architectural > style being used can be REST even though the standards aren't the > same as those on the open Internet. > It does tend to get lost in these long threads, that we're talking about the common case of the Web. The problem arises when those seeking advice on open-Web APIs are led to solutions which are not standardized for that context, by folks who refuse to concede that it makes any difference. -Eric
On Tue, Jan 4, 2011 at 11:49 AM, Eric J. Bowman <eric@...> wrote: > "Roy T. Fielding" wrote: >> >> Do keep in mind that, in order to have an *evolving* set of standard >> media types (and likewise standard link relations), there will have >> to be new media types being created and old media types fading away >> over time. >> > > Nobody's really disputing that, the nature of the debate centers around > how orderly that process should be. > Of course nobody's disputing that. It should be obvious. You're being asked to /keep it in mind/ when considering "how orderly" the evolution should be. The rate at which a media type will go from new to faded away will be determined by the value it provides to the system. Basically, you are getting your knickers in a twist about nothing because (assuming you are correct, of course) application-specific media types are essentially superfluous, will not survive, and therefore won't have any long-term impact on the system. Not only that, there's also a chance that during the 'chaos' some useful long-lasting types do emerge and survive as the fittest. That is evolution. Evolution doesn't require - or benefit from - careful "expert" orchestration by militant prophets of The One True Way. Sorry. Cheers, Mike >> >> One person's experiment may well become another person's standard, >> in the long run, and REST is very much about the long run. So we >> should consider new media types in terms of their long-term intent >> rather than their immediate status. >> > > Which is the heart of the debate -- creating application-specific types > in response to the needs of the moment is short-sighted, yet this is > done as a matter of course, and justified in the name of REST. > >> >> Likewise, standards are a byproduct of authority. WalMart, for >> example, is fully capable of defining standards within its own >> logistical network, so there are ecosystems in which the architectural >> style being used can be REST even though the standards aren't the >> same as those on the open Internet. >> > > It does tend to get lost in these long threads, that we're talking > about the common case of the Web. The problem arises when those > seeking advice on open-Web APIs are led to solutions which are not > standardized for that context, by folks who refuse to concede that it > makes any difference. > > -Eric
Mike Kelly wrote: > > Eric J. Bowman wrote: > > "Roy T. Fielding" wrote: > >> > >> Do keep in mind that, in order to have an *evolving* set of > >> standard media types (and likewise standard link relations), there > >> will have to be new media types being created and old media types > >> fading away over time. > >> > > > > Nobody's really disputing that, the nature of the debate centers > > around how orderly that process should be. > > > > Of course nobody's disputing that. It should be obvious. You're being > asked to /keep it in mind/ when considering "how orderly" the > evolution should be. > I'm asking you to keep in mind that truly useful new types don't happen by chance -- throwing a bunch of application-specific types against the wall to see what sticks is the opposite of the principled design REST represents. > > The rate a which a media type will go from new to faded away will be > determined by the value it provides to the system. > I don't care how much value a custom type adds to the system, if the result is a library-based unique interface that's never intended to be standardized, then it isn't part of the uniform interface, never will be, and is therefore not the same thing as the REST style. The value of any media/data type to REST development is a product of how useful it is *in general* not how well-optimized it is for the application at hand. > > Basically, you are getting your knickers in a twist about nothing > because (assuming you are correct, of course) application-specific > media types are essentially superfluous, will not survive, and > therefore won't have any long-term impact on the system. Not only > that, there's also a chance that during the 'chaos' some useful > long-lasting types do emerge and survive as the fittest. That is > evolution. > You describe chaos, where out of 1,000 new types one or two may accidentally prove to have value in a generic fashion despite not having been designed to be generic. 
What data type which *is* part of the uniform interface was designed in such an accidental, haphazard fashion? The useful types, over the long-term, are those that are designed to be of general use and subjected to peer review through a standards process. Where are the benefits of REST, in a system composed of custom types that won't survive? Surely there's less long-term impact to a system that doesn't need to be refactored in order to ever achieve those benefits. > > Evolution doesn't require - or benefit from - careful "expert" > orchestration by militant prophets of The One True Way. Sorry. > Exactly the opposite of what the thesis implies by referring to this: "A registration process is needed, however, to ensure that the set of such values is developed in an orderly, well-specified, and public manner." Are you suggesting that the ietf-types list and the registry are architecturally unsound? The standards tree is based on exactly the expert orchestration you decry. Evolution of new standardized types happens by design, not random chance based on disorderly, unspecified and private processes. REST and HTTP offer coherent arguments against throwing new types at the wall to see what sticks, which is a weak argument in support of creating custom types for every application, unguided by any process and ignorant of existing standardized types capable of solving the same problem *without* abandoning the uniform interface by hoping against reason that types developed in such fashion might accidentally represent an improvement and be widely adopted. -Eric
Roy T. Fielding wrote: > On Jan 3, 2011, at 6:22 PM, Eric J. Bowman wrote: >> I still fail to see how both diametrically-opposed approaches can >> possibly be based on the same thesis. Roy's examples have never been >> based on the creation of custom data/media types. If REST were worded >> in a completely different way than it is, and Roy's examples always >> (instead of never) started with the creation of custom types, then I'd >> be wrong. But, REST and Roy are both quite clear that the style is NOT >> about creating custom media/data types, but is in fact the exact >> opposite of that approach. > > Do keep in mind that, in order to have an *evolving* set of standard > media types (and likewise standard link relations), there will have > to be new media types being created and old media types fading away > over time. > > One person's experiment may well become another person's standard, > in the long run, and REST is very much about the long run. So we > should consider new media types in terms of their long-term intent > rather than their immediate status. Indeed, you often need the conflicting implementations in order to standardize; however, I think the key conceptual difference Eric has been trying to get across is summed up in your statement above, that we "should consider new media types in terms of their long-term intent", the distinction being that a media type created with the long term in mind, with principles like separation of concerns and web/internet-scale usage in mind, is RESTful (in the context of the web); whilst just dumping out an object from an application in XML, registering a +xml vnd type and creating a custom client, is not. Is that a fair comment / distinction to make? Best, Nathan
Dear All, The response SHOULD include an entity containing a list of resource characteristics and location(s) from which the user or user agent can choose the one most appropriate. This sentence in the "201 Created" status code section of the HTTP specification confuses me. What is meant by "resource characteristics"? Thanks. Best regards, Zhi-Qiang Lei zhiqiang.lei@gmail.com
I'm designing the interface of a system that stores and indexes documents (Word docs, etc.). My intention is that the service should be human readable as well as easy to integrate programmatically - a resource-oriented REST interface would be perfect. Multiple documents in the same 'bucket' could have the same file name, so I am using a unique identifier in my URLs, e.g. GET /documents/sales/6543213 Really, that resource should be uploaded by a corresponding PUT /documents/sales/6543213. However, I'm using an enterprise document management system behind the scenes which creates these identifiers, so the user cannot decide what their unique ID will be. I guess I could handle a PUT to /documents/sales/ and return a 301 to the new resource? Would that be the correct way to handle it?
<snip> > Really, that resource should be uploaded by a corresponding PUT /documents/sales/6543213. However, I'm using an enterprise document management system behind the scenes which creates these identifiers, so the user cannot decide what their unique ID will be. </snip> If some other component is determining the URI, you should not be using PUT[1] at all. PUT assumes a complete (exact) URI and the operation is idempotent (repeatable with the same basic results). If the "back-end" is creating the actual URIs on each "submit" then this is not a candidate for the HTTP PUT method. Instead, use POST and return "201 Created" w/ the URI that was "minted" by the component in the Location header. [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.6 mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Sun, Jan 2, 2011 at 03:24, jlanng <james@...> wrote: > I'm designing the interface of a system that stores and indexes documents (Word docs, etc.). My intention is that the service should be human readable as well as easy to integrate programmatically - a resource-oriented REST interface would be perfect. > > Multiple documents in the same 'bucket' could have the same file name, so I am using a unique identifier in my URLs, e.g. GET /documents/sales/6543213 > > Really, that resource should be uploaded by a corresponding PUT /documents/sales/6543213. However, I'm using an enterprise document management system behind the scenes which creates these identifiers, so the user cannot decide what their unique ID will be. > > I guess I could handle a PUT to /documents/sales/ and return a 301 to the new resource? Would that be the correct way to handle it?
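mca's POST-then-201 advice can be sketched in a few lines. This is a minimal illustration, not anyone's actual implementation: the back-end ID-minting counter, function names, and starting identifier are all hypothetical stand-ins for the enterprise document management system described above.

```python
import itertools

# Hypothetical stand-in for the back-end document management system
# that mints identifiers; the starting number is illustrative only.
_ids = itertools.count(6543213)

def create_document(bucket, body):
    """Handle POST /documents/<bucket>: the server, not the client,
    chooses the new URI, so POST + "201 Created" fits, while PUT
    (which requires the client to know the exact URI up front) does not."""
    doc_id = next(_ids)
    location = "/documents/%s/%d" % (bucket, doc_id)
    # The Location header tells the client where the minted resource lives.
    return "201 Created", {"Location": location}

status, headers = create_document("sales", b"...word doc bytes...")
```

The client then simply follows `headers["Location"]` to GET (or later PUT updates to) the resource the server named.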
Talking about "201 Created" ... I just implemented that in an HTML forms page to make it easier (or, at least, clearer) for M2M interaction. Normally, on the HTML web, a successful creation uses a "302 Found" redirect, which will send the browser to the newly created resource. Unfortunately the same doesn't happen with "201 Created" - the browser (FF and IE) ignores the Location header and stays on the same page. Have any of you found a good way around this issue? I decided to use "302 Found" instead and document that as the "Created" result. /Jørn ----- Original Message ----- From: "mike amundsen" <mamund@...> To: "jlanng" <james@...> Cc: <rest-discuss@yahoogroups.com> Sent: Friday, January 14, 2011 1:17 AM Subject: Re: [rest-discuss] PUTting an unnamed resource <snip> > Really, that resource should be uploaded by a corresponding PUT > /documents/sales/6543213. However, I'm using an enterprise document > management system behind the scenes which creates these identifiers, so > the user cannot decide what their unique ID will be. </snip> If some other component is determining the URI, you should not be using PUT[1] at all. PUT assumes a complete (exact) URI and the operation is idempotent (repeatable with the same basic results). If the "back-end" is creating the actual URIs on each "submit" then this is not a candidate for the HTTP PUT method. Instead, use POST and return "201 Created" w/ the URI that was "minted" by the component in the Location header. [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.6 mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Sun, Jan 2, 2011 at 03:24, jlanng <james@...> wrote: > I'm designing the interface of a system that stores and indexes documents > (Word docs, etc.). My intention is that the service should be human readable > as well as easy to integrate programmatically - a resource-oriented REST > interface would be perfect. > > Multiple documents in the same 'bucket' could have the same file name, so > I am using a unique identifier in my URLs, e.g. GET > /documents/sales/6543213 > > Really, that resource should be uploaded by a corresponding PUT > /documents/sales/6543213. However, I'm using an enterprise document > management system behind the scenes which creates these identifiers, so > the user cannot decide what their unique ID will be. > > I guess I could handle a PUT to /documents/sales/ and return a 301 to the > new resource? Would that be the correct way to handle it?
Jørn: For "201 Created" responses I usually return an entity body with the same link as the Location header. HTTP/1.1 201 Created Location: http://example.com/resources/123 Content-Type: text/html ... <a href="http://example.com/resources/123" rel="location">201 Created</a> ... Then clients (humans and/or scripts) can check out the Location header or look for the rel="location" link and act accordingly. I think I've used a <head><link rel="location" ... /> in the past, too. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Fri, Jan 14, 2011 at 00:37, Jørn Wildt <jw@...> wrote: > Talking about "201 Created" ... I just implemented that in an HTML forms page > to make it easier (or, at least, clearer) for M2M interaction. Normally, on > the HTML web, a successful creation uses a "302 Found" redirect, which will > send the browser to the newly created resource. Unfortunately the same > doesn't happen with "201 Created" - the browser (FF and IE) ignores the > Location header and stays on the same page. > > Have any of you found a good way around this issue? > > I decided to use "302 Found" instead and document that as the "Created" > result. > > /Jørn > > ----- Original Message ----- From: "mike amundsen" <mamund@yahoo.com> > To: "jlanng" <james@...> > Cc: <rest-discuss@yahoogroups.com> > Sent: Friday, January 14, 2011 1:17 AM > Subject: Re: [rest-discuss] PUTting an unnamed resource > > > <snip> >> >> Really, that resource should be uploaded by a corresponding PUT >> /documents/sales/6543213. However, I'm using an enterprise document >> management system behind the scenes which creates these identifiers, so the >> user cannot decide what their unique ID will be. > > </snip> > If some other component is determining the URI, you should not be > using PUT[1] at all. PUT assumes a complete (exact) URI and the > operation is idempotent (repeatable with the same basic results). If > the "back-end" is creating the actual URIs on each "submit" then this > is not a candidate for the HTTP PUT method. Instead, use POST and > return "201 Created" w/ the URI that was "minted" by the component in > the Location header. > > [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.6 > > mca > http://amundsen.com/blog/ > http://twitter.com@mamund > http://mamund.com/foaf.rdf#me > > > #RESTFest 2010 > http://rest-fest.googlecode.com > > > > > On Sun, Jan 2, 2011 at 03:24, jlanng <james@....uk> wrote: >> >> I'm designing the interface of a system that stores and indexes documents >> (Word docs, etc.). My intention is that the service should be human readable >> as well as easy to integrate programmatically - a resource-oriented REST >> interface would be perfect. >> >> Multiple documents in the same 'bucket' could have the same file name, so >> I am using a unique identifier in my URLs, e.g. GET /documents/sales/6543213 >> >> Really, that resource should be uploaded by a corresponding PUT >> /documents/sales/6543213. However, I'm using an enterprise document >> management system behind the scenes which creates these identifiers, so the >> user cannot decide what their unique ID will be. >> >> I guess I could handle a PUT to /documents/sales/ and return a 301 to the >> new resource? Would that be the correct way to handle it?
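A non-browser client can cope with both conventions discussed here: follow the Location header on a 201 or 302, and fall back to the rel="location" link in the body. A rough sketch, assuming the naive parsing below (a real client would use a proper HTML parser):

```python
import re

def next_url(status, headers, body, current_url):
    """Pick the client's next URL after a create request. Browsers
    follow 302 automatically but ignore Location on 201, so an M2M
    client checks the header first, then falls back to a
    rel="location" link in an HTML body."""
    if status in (201, 302) and "Location" in headers:
        return headers["Location"]
    # Naive scan for the rel="location" anchor pattern shown above.
    match = re.search(r'href="([^"]+)"[^>]*rel="location"', body)
    if match:
        return match.group(1)
    return current_url  # no hint given: stay on the current page
```

With this in place, the server can keep returning the semantically correct "201 Created" for M2M clients while still offering a body link for everyone else.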
On Thu, Jan 13, 2011 at 9:57 AM, Zhi-Qiang Lei <zhiqiang.lei@...> wrote: > > The response SHOULD include an entity containing a list of resource > characteristics and location(s) from which the user or user agent can choose > the one most appropriate. > > This sentence in "201 Created" status code section of HTTP specification > makes me confused. What's the meaning of resource characteristics? Thanks. > > I'll take a stab at this: I would interpret this as serving a response that allows agent driven negotiation. So if a POST resulted in four resources created, the server could list all of them, along with the characteristics that differentiate them. Characteristics could be anything: media types, link semantics or (for human user agents) plain text explaining what the links are. -- -mogsie-
So are they links to the new resources, rather than the contents of the new resources? On Jan 14, 2011, at 2:10 PM, Erik Mogensen wrote: > > On Thu, Jan 13, 2011 at 9:57 AM, Zhi-Qiang Lei <zhiqiang.lei@...> wrote: > The response SHOULD include an entity containing a list of resource characteristics and location(s) from which the user or user agent can choose the one most appropriate. > > This sentence in "201 Created" status code section of HTTP specification makes me confused. What's the meaning of resource characteristics? Thanks. > > > I'll take a stab at this: I would interpret this as serving a response that allows agent driven negotiation. So if a POST resulted in four resources created, the server could list all of them, along with the characteristics that differentiate them. > > Characteristics would be anything; media types, link semantics or (for human user agents) plain text explaining what the links are. > -- > -mogsie- Best regards, Zhi-Qiang Lei zhiqiang.lei@...
On Sat, Jan 15, 2011 at 9:00 AM, Zhi-Qiang Lei <zhiqiang.lei@...> wrote: > > > So are they links to the new resources, rather than the contents of the new resources? > A 201 response should contain links to new resources, but if it contains excerpts of the resources then that should be OK too. I would think the links are more important, though. -- -mogsie-
I also recommend that your POST response should provide links, albeit for more practical reasons. If you wish to return multiple representations of the newly-created resource in the POST response, the response would have to be "multipart/alternative" [http://tools.ietf.org/html/rfc2046#section-5.1.4]. Your client(s) may not support this at all, or not well, and it might not be possible or easy to construct on the server depending on what you are using. Further, if the POST *request* had specified an Accept header, you could return a response of that content type and include links to the alternative representations in it. Shaunak On Sat, Jan 15, 2011 at 7:25 AM, Erik Mogensen <erik@...> wrote: > > > On Sat, Jan 15, 2011 at 9:00 AM, Zhi-Qiang Lei <zhiqiang.lei@...> wrote: > >> >> >> So are they links to the new resources, rather than the contents of the new resources? >> > > A 201 response should contain links to new resources, but if it contains > excerpts of the resources then that should be OK too. I would think the > links are more important, though. > -- > -mogsie- > > > -- "Now the hardness of this world slowly grinds your dreams away / Makin' a fool's joke out of the promises we make" --- Bruce Springsteen, "Blood Brothers"
If more than one resource was created, what should the Location header be? Thanks. On Jan 15, 2011, at 11:25 PM, Erik Mogensen wrote: > On Sat, Jan 15, 2011 at 9:00 AM, Zhi-Qiang Lei <zhiqiang.lei@...> wrote: > > So are they links to the new resources, rather than the contents of the new resources? > > > A 201 response should contain links to new resources, but if it contains excerpts of the resources then that should be OK too. I would think the links are more important, though. > -- > -mogsie- Best regards, Zhi-Qiang Lei zhiqiang.lei@...
Dear All,
In my application, there is a kind of ticket resource (URI: /tickets/{ticket-id}), and each ticket behaves like a state machine (status: pending -> approved or pending -> denied). Could you tell me what a RESTful way to approve a ticket would be? There is no APPROVE method in HTTP. Thanks in advance.
Best regards,
Zhi-Qiang Lei
zhiqiang.lei@...
On Jan 17, 2011, at 2:53 PM, Zhi-Qiang Lei wrote:
> Dear All,
>
> In my application, there is a kind of ticket resource (URI: /tickets/{ticket-id}), and each ticket behaves like a state machine (status: pending -> approved or pending -> denied). Could you tell me what a RESTful way to approve a ticket would be? There is no APPROVE method in HTTP. Thanks in advance.
>
You need to define hypermedia semantics (a media type or link relation) that tell the user agent what request to initiate to achieve the desired result (approval).
One way to do this would be through a dedicated ITSM-tailored media type [1]. Such a media type could define status values for a status property of resources (e.g. tickets). You could do something like this:
GET /tickets/56
200 Ok
Content-Type: application/itsm
<ticket>
<type>Incident</type>
<status url="./status">pending</status>
....
</ticket>
You could then
PUT /tickets/56/status
Content-Type: text/plain
approved
Another possibility would be through a Link relation that tells you where the status property is:
GET /tickets/56
200 Ok
Link: </tickets/56/status/>;rel=status
Content-Type: application/itsm
<ticket>...</ticket>
You could then do
PUT /tickets/56/status
Content-Type: text/plain
approved
HTH,
Jan
[1] Would be a nice REST showcase, IMHO, because ITSM as a domain suits REST pretty well, I think.
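Jan's Link-relation variant can be driven from the client side roughly as follows. This is a sketch only: the Link-header parsing is deliberately simplistic, the transport is stubbed out, and application/itsm is Jan's hypothetical media type, not a registered one.

```python
import re

def status_target(link_header):
    """Pull the target of the rel=status link out of a Link header
    such as '</tickets/56/status/>;rel=status'."""
    for part in link_header.split(","):
        match = re.match(r'\s*<([^>]+)>\s*;\s*rel=status\s*$', part)
        if match:
            return match.group(1)
    return None

def approve(put, link_header):
    """Approve a ticket by PUTting the new status value to the URL the
    server advertised, instead of a URL the client constructed itself."""
    return put(status_target(link_header), "approved", "text/plain")

# Usage with a stubbed transport standing in for a real HTTP client:
sent = []
def fake_put(url, body, content_type):
    sent.append((url, body, content_type))
    return 200

approve(fake_put, "</tickets/56/status/>;rel=status")
```

The point of the sketch is that the client never hard-codes /tickets/56/status; it only knows the rel=status semantics and follows whatever URL the server hands it.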
> Best regards,
> Zhi-Qiang Lei
> zhiqiang.lei@...
>
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
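Jan's status sub-resource idea maps naturally onto a small state machine. As a rough sketch (in Python, with hypothetical names like `Ticket` and `put_status`; a real service would sit behind an HTTP framework), the handler for PUT /tickets/{id}/status only has to validate the transition:

```python
# The ticket state machine from the thread: pending -> approved | denied.
ALLOWED = {"pending": {"approved", "denied"}}

class Ticket:
    def __init__(self, ticket_id):
        self.id = ticket_id
        self.status = "pending"

def put_status(ticket, body):
    """Handle PUT /tickets/{id}/status with a text/plain body."""
    new_status = body.strip()
    if new_status not in ALLOWED.get(ticket.status, set()):
        return 409  # Conflict: transition not allowed from the current state
    ticket.status = new_status
    return 200

t = Ticket(56)
put_status(t, "approved")  # allowed transition: pending -> approved
```

A nice side effect of validating against the transition table is that an illegal request (e.g. trying to move an approved ticket back to pending) gets an honest 409 rather than silently succeeding.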
Zhi-Qiang Lei:
Sounds like you need to change the state of the ticket.
You can do this using PUT to replace the current ticket state w/ the
new ticket state.
PUT /ticket/123 ("direct resource")
...
... ticket state ...
Possible advantages: clients are dealing with the existing ticket and
intermediaries will recognize the updated state of the ticket very
easily.
Possible disadvantages: if the ticket state is large, you must pass
quite a bit of data to simply "approve" the ticket.
You can do this using POST to add a new ticket-status which will
update the ticket state
POST /ticket/123/status ("sub-resource")
OR
POST /ticket/status?ticket=123 ("controller resource")
...
... "approved" ...
Possible advantages: clients pass only the data that is changed and
you get an easy "audit trail" of all state changes that can be
retrieved, replayed, etc.
Possible disadvantages: this is an indirect way to affect the ticket
resource that intermediaries will not easily understand.
These are the first two possibilities that come to mind. There may be others.
mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010
http://rest-fest.googlecode.com
On Mon, Jan 17, 2011 at 08:53, Zhi-Qiang Lei <zhiqiang.lei@...> wrote:
> Dear All,
>
> In my application, there are a kind of ticket resources (URI: /tickets/{ticket-id}), and the ticket behaves like a state machine (status: pending -> approved or pending -> deny). Could you tell me what is a RESTful way to approve the tickets? Because there is no APPROVE manner in HTTP. Thanks in advance.
>
> Best regards,
> Zhi-Qiang Lei
> zhiqiang.lei@gmail.com
>
>
>
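A hedged sketch of mca's two options, using an in-memory store in Python; `put_ticket` and `post_status` are illustrative names, not a framework API. Option 1 replaces the whole representation; option 2 appends just the change and yields an audit trail for free:

```python
# In-memory store standing in for the ticket resources.
tickets = {123: {"id": 123, "title": "Printer jam", "status": "pending"}}
status_log = []  # audit trail of POSTed status changes (option 2's payoff)

def put_ticket(ticket_id, representation):
    """Option 1: PUT /ticket/{id} replaces the entire ticket state."""
    tickets[ticket_id] = representation
    return 200

def post_status(ticket_id, new_status):
    """Option 2: POST /ticket/{id}/status records just the change."""
    status_log.append((ticket_id, new_status))
    tickets[ticket_id]["status"] = new_status
    return 201  # a new status entry was created

post_status(123, "approved")  # client sends only the changed data
```

With option 1 the client must send the full (possibly large) ticket; with option 2 the request is tiny but the effect on /ticket/123 is indirect, exactly the trade-off described above.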
On Mon, Jan 17, 2011 at 2:53 PM, Zhi-Qiang Lei <zhiqiang.lei@...>wrote:
> In my application, there are a kind of ticket resources (URI:
> /tickets/{ticket-id}), and the ticket behaves like a state machine (status:
> pending -> approved or pending -> deny). Could you tell me what is a RESTful
> way to approve the tickets? Because there is no APPROVE manner in HTTP.
> Thanks in advance.
>
I guess this will be heavily debated, but in a strictly RESTful interface,
the client should discover the available options (state transitions). So if
your media type is HTML, your initial GET request might be
GET /tickets/1234
200 OK
<form method="post">
<input type="hidden" name="approve" value="true">
Click this button to approve this ticket <input type="submit">
</form>
(not the prettiest UI but it gets the point across.)
Using the above HTML page, the client gives the user enough information
about what happens when he clicks the button, and clicking the button will
result in an HTTP POST to /ticket/1234 with "approve=true" in the body. The
response could be a redirect back to /ticket/1234, which would show the
updated resource state.
The client just transitioned the state of the ticket without knowing what
the ticket's states are.
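For illustration, a client could drive this transition from the HTML alone, roughly like the following Python sketch (standard library only; the actual POST call is elided):

```python
from html.parser import HTMLParser

class FormParser(HTMLParser):
    """Collect the form method and hidden fields from the response."""
    def __init__(self):
        super().__init__()
        self.method = None
        self.fields = {}
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.method = (attrs.get("method") or "get").upper()
        elif tag == "input" and attrs.get("type") == "hidden":
            self.fields[attrs["name"]] = attrs.get("value", "")

page = '''<form method="post">
<input type="hidden" name="approve" value="true">
Click this button to approve this ticket <input type="submit">
</form>'''

parser = FormParser()
parser.feed(page)
body = "&".join(f"{k}={v}" for k, v in parser.fields.items())
# parser.method and body now describe the request to send back to /tickets/1234
```

The point is that all the client hard-codes is HTML's form semantics; the available transition and its parameters come entirely from the representation.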
If your server has more specific media types than HTML, e.g.
application/x-ticket or whatever, then the definition of
application/x-ticket would typically provide similar documentation
indicating that "the presence of <link rel=approve href=.../> indicates that
the client should POST something to that href to approve the ticket." This
would require the knowledge of application/x-ticket and its processing
requirements.
Atom does this, e.g. for <link rel="edit"> clients are expected to GET and
PUT to modify items. <app:collection> defines that the URI allows POST,
which creates a new entry in the referenced collection.
HTML defines <form action=xxx method=post> to instruct clients to perform
HTTP POST.
The primary example of this would probably be the Sun Cloud API, where the
media types instruct to POST specific JSON structures to discovered URIs in
order to perform quite high level operations (like turning on a machine). <
http://kenai.com/projects/suncloudapis/pages/Home>
--
-mogsie-
Thanks. Your PUT example inspires me a lot. I always thought that a representation had to be key-value pairs.
On Jan 17, 2011, at 10:40 PM, Jan Algermissen wrote:
Best regards,
Zhi-Qiang Lei
zhiqiang.lei@...
Dear members, Right now I'm involved in research to find the best practices for designing and implementing RESTful applications using the Restlet framework. For the Hypermedia As The Engine Of Application State (HATEOAS) or hypermedia constraint, I've found that "maybe" the best way to model it in the application design is to use the Behavioral State Machine Diagram in UML. However, I only arrived at this conclusion by reading about the Behavioral State Machine Diagram (hence the quotes around the word *maybe*). I have never encountered any expert opinion about it. I tried to search using Google, but couldn't find any clear explanation of this. Does anyone know whether my conclusion is right or not? Could anyone refer me to an academic reading on this matter, please? Thanks a lot. Regards, Reza Lesmana.
Hi Reza, I think you're on the right track. I came to the same "maybe" a year ago and used finite-state machines (not UML state machine diagrams) to try to model simple RESTful systems (including the HATEOAS constraint). Perhaps reading these blog posts will help (number 1 is mine): 1. (scroll down to the second part of the post) http://ivanzuzak.info/2010/04/03/why-understanding-rest-is-hard-and-what-we-should-do-about-it-systematization-models-and-terminology-for-rest.html 2. http://www.stucharlton.com/blog/archives/2010/03/building-a-restful-hypermedia.html I think there are several links to related academic papers and blog posts in the first part of my post. Contact me via email if you'd like to chat about the model or need more references to related work. I'd love to hear what you're working on. Best, Ivan
Hi, regarding: "a resource R is a temporally varying membership function MR(t), which for time t maps to a set of entities, or values, which are equivalent. The values in the set may be resource representations and/or resource identifiers." - http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2_1_1 I can think of multiple examples in HTTP where t maps to a set of entities (server driven conneg?), and where it maps to a single value (307 + Location?) ... but I can't think of an example where t maps to a set of values (maybe agent conneg w/ 301?) and more specifically, I can't think of an example where t maps to a set which consists of entities and values. Can anybody provide (HTTP) examples for those I'm missing? Best, Nathan
Hi Ivan, Thanks for your response. I've read your blog post before, and it's the one that opened my mind about the REST concept. It's so inspiring. In fact, I'm surprised that you replied to my email. Your blog post is the one that reminded me about the UML behavioral state machine diagram. However, since you used "finite-state machines" and didn't explicitly say "UML state machine diagram", I'm not confident enough to put it as a reference. Yesterday, after spending more time looking for a good academic reference, I found Nora Koch's thesis ( http://www.pst.informatik.uni-muenchen.de/~kochn/PhDThesisNoraKoch.pdf ), which clearly states that possible state transitions can be depicted using a UML state diagram (page 16). Since it was written in 2000, I concluded that the UML version used in the thesis is UML 1.1 or 1.3. Since then, the UML state diagram has been developed into the state machine diagram in UML 2.3, which includes the behavioral state machine diagram. Being confident enough in that conclusion, I use Nora Koch's thesis as a reference. I'll contact you if I need to chat about the REST topic, and I'd love to chat about what I've been working on too. Thanks a lot. Regards, Reza Lesmana
Nathan wrote: > > I can think of multiple examples in HTTP where t maps to a set of > entities (server driven conneg?)... > Uhhh, it's R that maps to a set, over time (t). Plus, I think Roy's wording just means value=entity, not that R maps to a value *or* an entity, as different things. > > and where it maps to a single value (307 + Location?) ... but I can't > think of an example where t maps to a set of values (maybe agent > conneg w/ 301?) and more specifically, I can't think of an example > where t maps to a set which consists of entities and values. > Given resource R, which at time (t) maps to a static file: If you are using HTTP compression, R doesn't actually map to a static file, it maps to a set of entities -- the static file and its compressed variant. Sometimes the server is overwhelmed, so for a time R will map to a 503 response entity -- just another temporal member of the set. HTH. -Eric
On Wed, Jan 19, 2011 at 5:05 AM, Eric J. Bowman <eric@...> wrote: > Given resource R, which at time (t) maps to a static file: If you are > using HTTP compression, R doesn't actually map to a static file, it maps > to a set of entities -- the static file and its compressed variant. > If the server can respond with two representations of the same resource, and the representations are semantically equivalent, wouldn't this also fit? Like an HTML representation and a DocBook representation, or a text/csv and an Excel file with the same data. > Sometimes the server is overwhelmed, so for a time R will map to a 503 > response entity -- just another temporal member of the set. HTH. > > I think that's stretching "equivalent" a bit far :-) A 401 wouldn't be equivalent, nor would a 302, IMHO. -- -mogsie-
Erik Mogensen wrote: > > > Given resource R, which at time (t) maps to a static file: If you > > are > > > using HTTP compression, R doesn't actually map to a static file, it > maps > > to a set of entities -- the static file and its compressed variant. > > > > If the server can respond with two representations of the same > resource, and the representations are semantically equivalent, > wouldn't this also fit? Like a HTML representation and a DocBook > representation, or a text/csv and an Excel file with the same data. > Yes. Nathan asked for examples; I gave one, you gave another, here's a third: http://upload.wikimedia.org/wikipedia/commons/d/dc/MrT.jpg Heheh... > > > Sometimes the server is overwhelmed, so for a time R will map to a > > 503 response entity -- just another temporal member of the set. HTH. > > I think that's stretching "equivalent" a bit far :-) A 401 wouldn't be > equivalent, nor would a 302, IMHO. > Good point, I'll have to think about that some more. -Eric
Hi Reza, see my comments inline below On Wed, Jan 19, 2011 at 04:19, Reza Lesmana <lesmana.reza@...> wrote: > Hi Ivan, > Thanks for your response. I've read your blog post before, and it's the one > that opened my mind about the REST concept. It's so inspiring. > In fact, I'm surprise that you reply to my email. > Your blog post is the one that makes me remember about the UML behavioral > state machine diagram. Thanks, that's the effect I was going for when I wrote the post. > However, since you used "finite-state machines" and not explicitly said it > as "UML > state machine diagram", I'm not confident enough to put it as a reference. Although there definitely are differences between the two formal models (FSM and UML state diagram), such as hierarchy, there is also a big overlap and both are commonly used with the same goal - to describe the behavior of complex systems. I'm not sure which references you are talking about and where you'd like to put them, but I don't see the differences between models important enough to not list something since the object/purpose of modeling is the same, and the models are similar. > Yesterday, after trying to spend more time to find good academic reference, > I found Nora Koch's thesis > ( http://www.pst.informatik.uni-muenchen.de/~kochn/PhDThesisNoraKoch.pdf ) > which clearly stated that possible state transitions can be depicted using > UML state diagram (page 16). > Since it's written in year 2000, I concluded that the UML version that is > used in the thesis is UML 1.1 or 1.3. > And since then, the UML state diagram has been developed into state machine > diagram in UML 2.3, > which include behavioral state machine diagram. > And, being confident enough to use that conclusion, I use Nora Koch's thesis > as a reference. Nice, I haven't seen this thesis before, so thanks! 
As you've probably noticed, there aren't many papers which explicitly model REST or REST constraints, but you'll find that a lot more papers model various aspects of hypermedia (including UML state diagrams). However, I'm not an expert in UML, so I'm not sure what the difference between UML state diagrams and UML state machine diagrams is, and I can't judge your focus on the latter vs. the former. Best of luck with your research! Ivan
Hi, In my system, a client can POST a request that contains a batch of data (yeah!). I'd like to be lenient in my processing: if I find both invalid data (i.e., not respecting the expected syntax) *and* some valid data, I want to process as much of the valid data as possible (because it is valuable for my system), instead of rejecting the request upfront and not using the valid data. Still, it seems desirable to inform my client that his request was partially malformed. Now, how would you do that in the context of HTTP? Returning a 400 was my first thought. However, per specification it means: "The request could not be understood by the server due to malformed syntax. [...]", and I wonder if I'm in that situation, given that the server understood and acted on some part of the request. Do you know a better way of handling this? Bonus question: as I realize this is more an HTTP question than a REST one, I'd also be grateful for pointers to other places where this kind of thing can be discussed judiciously. Philippe Mougin http://pmougin.wordpress.com
"Philippe Mougin" wrote: > > Now, how would you do that in the context of HTTP? Returning a 400 > was my first thought. > If the server accepted the request, then no error occurred, so respond 2xx. Only respond 4xx if the server can't process the request at all. Your question is appropriate to REST -- accepting and processing the request, but responding 4xx, wouldn't be self-descriptive messaging any more than it is to send "page not found" using 200 OK. -Eric
Hello everyone, One of my colleagues has been working on implementing client-side HTTP caching for an open source project of ours. He's had a hard time finding comprehensive documentation, so he put together https://github.com/kaiwren/wrest/blob/caching/Caching.markdown by working through the HTTP RFC as well as the Firefox source. I'd appreciate it if those of you who know how this should work (especially in the context of REST) could take a look and let us know if there is anything we've missed. A large portion of that document has already been implemented; we expect to release these features in about a week as part of Wrest 1.1. Thanks, Sidu. http://c42.in
For inspiration check out my Open Source Java HTTP caching library: http://httpcache4j.codehaus.org the source is also available here: https://github.com/hamnis/httpcache4j -- Erlend
Also, my colleagues and I implemented a client-side cache for Apache HttpComponents HttpClient, which was just included in the 4.1 GA release of HttpClient: http://www.apache.org/dist/httpcomponents/httpclient/RELEASE_NOTES.txt Jon
Any reason why a forward proxy cache would not work? For instance, you could set up Squid or Traffic Server as a forward proxy. Subbu
httpcache4j illustrates the wrong way to do HTTP caching. Clients should not have to say things like HTTPCache cache = new HTTPCache( new MemoryCacheStorage(), HTTPClientResponseResolver.createMultithreadedInstance() ); response = cache.doCachedRequest(request); in their code. They would make regular HTTP requests (including appropriate headers), and a caching proxy can provide a cached response if available or forward the request to upstream servers if no cached response is found. In other words, the client would simply do response = request.doGet(); // pseudo code The point of the uniform interface is to let some intermediary along the network do the job. Subbu
On Tue, Jan 25, 2011 at 8:56 PM, Subbu Allamaraju <subbu@...> wrote: > > > httpcache4j illustrates the wrong way to do HTTP caching. Clients should > not have to say things like > > HTTPCache cache = new HTTPCache( > new MemoryCacheStorage(), > HTTPClientResponseResolver.createMultithreadedInstance() > ); > response = cache.doCachedRequest(request); > > in their code. They would make regular HTTP requests (including appropriate > headers), and a caching proxy can provide a cached response if available or > forward the request to upstream servers if no cached response is found. In > other words, the client would simply do > > response = request.doGet(); // pseudo code > > The point of the uniform interface is to let some intermediary along the > network do the job. > Why not cache on the client side? In mobile applications where the network calls are expensive it makes sense to cache locally. -- David blog: http://www.traceback.org twitter: http://twitter.com/dstanek
Subbu Allamaraju wrote: > httpcache4j illustrates the wrong way to do HTTP caching. Clients should not have to say things like > > HTTPCache cache = new HTTPCache( > new MemoryCacheStorage(), > HTTPClientResponseResolver.createMultithreadedInstance() > ); > response = cache.doCachedRequest(request); > > in their code. They would make regular HTTP requests (including appropriate headers), and a caching proxy can provide a cached response if available or forward the request to upstream servers if no cached response is found. In other words, the client would simply do > > response = request.doGet(); // pseudo code > > The point of the uniform interface is to let some intermediary along the network do the job. Just want to check I'm reading you correctly, that client side caching should not be implemented, and one should instead lean on intermediaries along the network?
On Jan 25, 2011, at 6:40 PM, Nathan wrote: > Just want to check I'm reading you correctly, that client side caching should not be implemented, and one should instead lean on intermediaries along the network? It is the latter. There are of course exceptions where the client - as an agent like a browser - does its own caching. On a broader note, getting caching right at scale is a non-trivial problem, and it is better to let a proxy cache deal with it. Subbu
Sure, a browser is another example. There is nothing to forbid a client from doing its own caching, but in general, it is better to delegate this to a proxy cache.
Unsurprisingly, I respectfully disagree that this is the wrong way. A client cache becomes just another intermediary along the way in the uniform interface. Why should a non-trivial REST application be treated differently than a browser? In an application that serves some hundred or more requests per minute, a browser/client cache means significantly less network traffic. It assumes, of course, that the client cache works the same as any other intermediary, which in httpcache4j's case it does. -- Erlend On Wed, Jan 26, 2011 at 8:01 AM, Subbu Allamaraju <subbu@...> wrote: > Sure, a browser is another example. There is nothing to forbid a client > from doing its own caching, but in general, it is better to delegate this to > a proxy cache.
On Wed, Jan 26, 2011 at 8:01 AM, Subbu Allamaraju <subbu@...> wrote: > > > Sure, a browser is another example. There is nothing to forbid a client > from doing its own caching, but in general, it is better to delegate this to > a proxy cache.. > > HTTPCache4j is most useful in a client, but we've since used it in server-to-server communications too. We generally use HTTPCache4j to do all of our HTTP communication, using it more as a HTTP client rather than a specific HTTP cache; the cache is just there. And as long as the server doesn't provide any caching directives, HTTPCache4j doesn't provide any value. The real value is reaped when we (e.g. in production) need to add caching. Adding a real intermediary is not always feasible, and it's nice to know that all of our clients honour caching directives correctly. All we need to do to increase performance is to drop in a header or two in the server's responses, and just like that, client perceived performance increases many orders of magnitude. The only gripe I have with HTTPCache4j is the names of methods and so on. "doCachedRequest" ought to be called simply "request"... ;-) -- -mogsie-
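The "client cache as just another intermediary" idea can be sketched as a transparent wrapper: callers still just call `get(url)`, and the wrapper consults `Cache-Control: max-age` before touching the network. This is a minimal Python illustration (the hypothetical `fetch` stands in for a real HTTP library call; a real cache must also handle `Vary`, validation, the `private` directive, and so on):

```python
import re
import time

class CachingClient:
    """Transparent client-side cache: same interface, caching inside."""
    def __init__(self, fetch):
        self.fetch = fetch  # fetch(url) -> (headers, body); hypothetical
        self.store = {}     # url -> (expires_at, headers, body)

    def get(self, url):
        now = time.monotonic()
        hit = self.store.get(url)
        if hit and now < hit[0]:
            return hit[1], hit[2]  # still fresh: answered locally, no network
        headers, body = self.fetch(url)
        m = re.search(r"max-age=(\d+)", headers.get("Cache-Control", ""))
        if m:
            self.store[url] = (now + int(m.group(1)), headers, body)
        return headers, body

calls = []
def fake_fetch(url):
    calls.append(url)
    return {"Cache-Control": "max-age=60"}, b"payload"

client = CachingClient(fake_fetch)
client.get("/tickets/1")
client.get("/tickets/1")  # served from the local cache
```

As in the thread: the requesting code never mentions caching, and server-supplied directives alone decide whether the second request hits the wire.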
Thank you, Nathan. I'll look into the new draft RFC. Erlend and Jonathan, thanks for the links to your respective projects - we'll take a look at those too.

I haven't looked yet, but is there an existing suite of functional tests that we can use to determine compliance? We're building one as we go, but if something comprehensive already exists, it would be of great help.

Thanks,
Sidu.
http://c42.in

On Tue, Jan 25, 2011 at 5:20 PM, Erlend Hamnaberg <ngarthl@...> wrote:
> For inspiration check out my Open Source Java HTTP caching library:
> http://httpcache4j.codehaus.org
>
> the source is also available here:
> https://github.com/hamnis/httpcache4j
>
> --
>
> Erlend
>
> On Tue, Jan 25, 2011 at 10:06 AM, Sidu Ponnappa <lorddaemon@...> wrote:
>
>> Hello everyone,
>>
>> One of my colleagues has been working on implementing client side HTTP caching on an open source project of ours. He's had a hard time finding comprehensive documentation, so he put together https://github.com/kaiwren/wrest/blob/caching/Caching.markdown by working through the HTTP RFC as well as Firefox source.
>>
>> I'd appreciate it if those of you that know how this should work (especially in the context of REST) could take a look and let us know if there is anything that we've missed. A large portion of that document has already been implemented; we expect to release these features in about a week as a part of Wrest 1.1.
>>
>> Thanks,
>> Sidu.
>> http://c42.in
I think the crux here is that a caching component can/should be modeled as an intermediary, even if it is client-side, i.e. implemented as an intermediary handling requests locally before they hit the wire. There should be no need for the requesting code to be concerned with caching, because it's unnecessary - you don't make 'caching' XHR requests, because normal XHR requests can receive locally cached responses from the browser cache, which is effectively acting as an intermediary. This is exactly what layering and caching are for.

That said, I imagine using an 'off-the-shelf' forward _shared_ proxy locally may present some problems in terms of handling responses with the private directive. Are there options for Squid and/or Traffic Server to turn on a 'private' mode?

Cheers,
Mike

On Wed, Jan 26, 2011 at 7:01 AM, Subbu Allamaraju <subbu@...> wrote:
> Sure, a browser is another example. There is nothing to forbid a client from doing its own caching, but in general, it is better to delegate this to a proxy cache..
>
> On Jan 25, 2011, at 6:03 PM, David Stanek wrote:
>
> > On Tue, Jan 25, 2011 at 8:56 PM, Subbu Allamaraju <subbu@...> wrote:
> >
> > httpcache4j illustrates the wrong way to do HTTP caching. Clients should not have to say things like
> >
> > HTTPCache cache = new HTTPCache(
> >     new MemoryCacheStorage(),
> >     HTTPClientResponseResolver.createMultithreadedInstance()
> > );
> > response = cache.doCachedRequest(request);
> >
> > in their code. They would make regular HTTP requests (including appropriate headers), and a caching proxy can provide a cached response if available or forward the request to upstream servers if no cached response is found. In other words, the client would simply do
> >
> > response = request.doGet(); // pseudo code
> >
> > The point of the uniform interface is to let some intermediary along the network do the job.
> >
> >
> > Why not cache on the client side?
> > In mobile applications where the network calls are expensive it makes sense to cache locally.
> >
> > --
> > David
> > blog: http://www.traceback.org
> > twitter: http://twitter.com/dstanek
Hi, yes, the caching module has a suite of unit tests to test compliance with the HTTP/1.1 spec. These are separated out into requirements (MUST/MUST NOT) and recommendations (SHOULD/SHOULD NOT). Here are links to them in the source repository:

https://svn.apache.org/repos/asf/httpcomponents/httpclient/tags/4.1/httpclient-cache/src/test/java/org/apache/http/impl/client/cache/TestProtocolRequirements.java

and

https://svn.apache.org/repos/asf/httpcomponents/httpclient/tags/4.1/httpclient-cache/src/test/java/org/apache/http/impl/client/cache/TestProtocolRecommendations.java

The caching module is implemented as a decorator around a regular HttpClient — meaning you instantiate the cache when you instantiate the client but can then "ignore" caching when you use it. I.e. you do:

HttpClient regularBackend = new DefaultHttpClient(…);
HttpClient theClientToUse = new CachingHttpClient(regularBackend, …);

Jon

........
Jon Moore
Comcast Interactive Media

From: Sidu Ponnappa <lorddaemon@...>
Date: Wed, 26 Jan 2011 14:01:03 +0530
To: <rest-discuss@yahoogroups.com>
Subject: Re: [rest-discuss] Feedback on implementing client side HTTP caching

Thank you, Nathan. I'll look into the new draft RFC. Erlend and Jonathan, thanks for the links to your respective projects - we'll take a look at those too. I haven't looked yet, but is there a suite of functional tests that exists that we can use to determine compliance? We're building one as we go, but if there's something that already exists that is comprehensive, it would be of great help.

Thanks,
Sidu.
http://c42.in

On Tue, Jan 25, 2011 at 5:20 PM, Erlend Hamnaberg <ngarthl@gmail.com> wrote:

For inspiration check out my Open Source Java HTTP caching library: http://httpcache4j.codehaus.org

the source is also available here: https://github.com/hamnis/httpcache4j

--

Erlend

On Tue, Jan 25, 2011 at 10:06 AM, Sidu Ponnappa <lorddaemon@...> wrote:

Hello everyone,

One of my colleagues has been working on implementing client side HTTP caching on an open source project of ours. He's had a hard time finding comprehensive documentation, so he put together https://github.com/kaiwren/wrest/blob/caching/Caching.markdown by working through the HTTP RFC as well as Firefox source.

I'd appreciate it if those of you that know how this should work (especially in the context of REST) could take a look and let us know if there is anything that we've missed. A large portion of that document has already been implemented; we expect to release these features in about a week as a part of Wrest 1.1.

Thanks,
Sidu.
http://c42.in
I think there's clearly a place for client-side caching — the moral equivalent of a browser cache, but for a general application. Even if all the calls are intra-datacenter, I might be better off having some cache entries stored in my application's address space for performance reasons. This can be coupled with a local forward proxy cache to implement layered caching (much as we have L1/L2 caches in CPUs).

I think Subbu's point here wasn't that client-side caching was problematic, but that the part of the program issuing the requests ought not to have to behave differently, in terms of programming interface, due to the presence or absence of a client-side cache. The client-side cache ought to be something I can "drop in" or "wire in" without making real code changes — this takes advantage of the fact that HTTP caching is largely meant to be semantically transparent.

Jon

........
Jon Moore
Comcast Interactive Media

From: David Stanek <dstanek@...>
Date: Tue, 25 Jan 2011 21:03:16 -0500
To: Subbu Allamaraju <subbu@...>
Cc: Erlend Hamnaberg <ngarthl@...>, Sidu Ponnappa <lorddaemon@gmail.com>, <rest-discuss@yahoogroups.com>
Subject: Re: [rest-discuss] Feedback on implementing client side HTTP caching

On Tue, Jan 25, 2011 at 8:56 PM, Subbu Allamaraju <subbu@...> wrote:

httpcache4j illustrates the wrong way to do HTTP caching. Clients should not have to say things like

HTTPCache cache = new HTTPCache(
    new MemoryCacheStorage(),
    HTTPClientResponseResolver.createMultithreadedInstance()
);
response = cache.doCachedRequest(request);

in their code. They would make regular HTTP requests (including appropriate headers), and a caching proxy can provide a cached response if available or forward the request to upstream servers if no cached response is found.
In other words, the client would simply do

response = request.doGet(); // pseudo code

The point of the uniform interface is to let some intermediary along the network do the job.

Why not cache on the client side? In mobile applications where the network calls are expensive it makes sense to cache locally.

--
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
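Jon's "drop in without real code changes" point can be sketched with a plain decorator. The following is a hypothetical illustration (it is not the real httpcache4j or HttpClient API, and it ignores caching directives entirely): the caching layer implements the same interface the requesting code already programs against, so wiring it in changes nothing at the call sites.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical interface standing in for "an HTTP client"; real libraries
// (HttpClient, httpcache4j) expose richer APIs, but the shape is the same.
interface HttpFetcher {
    String get(String uri);
}

class NetworkFetcher implements HttpFetcher {
    int networkCalls = 0; // exposed only so the example can count round trips

    public String get(String uri) {
        networkCalls++;
        return "response-for:" + uri; // stand-in for a real network request
    }
}

// The cache is a decorator: same interface, so callers cannot tell it apart
// from the plain fetcher. (A real cache would honor Cache-Control, etc.)
class CachingFetcher implements HttpFetcher {
    private final HttpFetcher backend;
    private final Map<String, String> cache = new HashMap<>();

    CachingFetcher(HttpFetcher backend) { this.backend = backend; }

    public String get(String uri) {
        return cache.computeIfAbsent(uri, backend::get);
    }
}

public class DropInCache {
    // Returns how many network calls two identical GETs actually caused.
    public static int demo() {
        NetworkFetcher network = new NetworkFetcher();
        HttpFetcher client = new CachingFetcher(network); // the only wiring change
        client.get("http://example.com/a");
        client.get("http://example.com/a"); // second call served from the cache
        return network.networkCalls;
    }

    public static void main(String[] args) {
        System.out.println("network calls: " + demo());
    }
}
```

Removing the `new CachingFetcher(...)` wrapper again touches only the one line where the client is constructed, which is exactly the semantic transparency being discussed.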
FWIW, adding client-side caching support is relatively simple on the Windows platform. Windows exposes base-level client-side caching services to the .NET runtime. Enabling it for any HTTP calls takes one line of code (on the initialized HttpWebRequest object):

request.CachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.Default);

This was pointed out rather nicely in REST In Practice[1].

In my view, HTTP client applications SHOULD understand and honor caching directives. It MAY be possible to off-load this work to a proxy. Whether that proxy is internal or external will be a function of the programming environment, framework, etc. available to the developer.

[1] http://my.safaribooksonline.com/book/web-development/web-services/9781449383312/scaling-out/implementing_caching_in_net

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010 http://rest-fest.googlecode.com

On Wed, Jan 26, 2011 at 02:01, Subbu Allamaraju <subbu@...> wrote:
> Sure, a browser is another example. There is nothing to forbid a client from doing its own caching, but in general, it is better to delegate this to a proxy cache..
>
> On Jan 25, 2011, at 6:03 PM, David Stanek wrote:
>
>> On Tue, Jan 25, 2011 at 8:56 PM, Subbu Allamaraju <subbu@...> wrote:
>>
>> httpcache4j illustrates the wrong way to do HTTP caching. Clients should not have to say things like
>>
>> HTTPCache cache = new HTTPCache(
>>     new MemoryCacheStorage(),
>>     HTTPClientResponseResolver.createMultithreadedInstance()
>> );
>> response = cache.doCachedRequest(request);
>>
>> in their code. They would make regular HTTP requests (including appropriate headers), and a caching proxy can provide a cached response if available or forward the request to upstream servers if no cached response is found.
In other words, the client would simply do
>>
>> response = request.doGet(); // pseudo code
>>
>> The point of the uniform interface is to let some intermediary along the network do the job.
>>
>> Why not cache on the client side? In mobile applications where the network calls are expensive it makes sense to cache locally.
>>
>> --
>> David
>> blog: http://www.traceback.org
>> twitter: http://twitter.com/dstanek
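Understanding and honoring caching directives, as suggested above, ultimately comes down to a freshness check before reusing a stored response. Here is a minimal, hypothetical sketch of the core rule (a stored response is fresh while its current age is below max-age); it deliberately ignores Expires, s-maxage, heuristic freshness, and the full Age-header arithmetic of the HTTP/1.1 spec.

```java
// Simplified freshness logic, loosely following RFC 2616 section 13.2.
public class Freshness {

    // Extracts the max-age value (in seconds) from a Cache-Control header,
    // or returns -1 if the directive is absent or malformed.
    static long maxAgeSeconds(String cacheControl) {
        if (cacheControl == null) return -1;
        for (String directive : cacheControl.split(",")) {
            String d = directive.trim();
            if (d.startsWith("max-age=")) {
                try {
                    return Long.parseLong(d.substring("max-age=".length()));
                } catch (NumberFormatException e) {
                    return -1;
                }
            }
        }
        return -1;
    }

    // A stored response may be reused without revalidation while its
    // current age is below its freshness lifetime.
    static boolean isFresh(String cacheControl, long currentAgeSeconds) {
        long maxAge = maxAgeSeconds(cacheControl);
        return maxAge >= 0 && currentAgeSeconds < maxAge;
    }
}
```

A client-side cache (in-process or a local proxy) applies exactly this test; when it fails, the cache revalidates with a conditional request instead of serving the stored copy.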
Subbu Allamaraju wrote:
> On Jan 25, 2011, at 6:40 PM, Nathan wrote:
>
>> Subbu Allamaraju wrote:
>>> httpcache4j illustrates the wrong way to do HTTP caching. Clients should not have to say things like
>>>
>>> HTTPCache cache = new HTTPCache(
>>>     new MemoryCacheStorage(),
>>>     HTTPClientResponseResolver.createMultithreadedInstance()
>>> );
>>> response = cache.doCachedRequest(request);
>>>
>>> in their code. They would make regular HTTP requests (including appropriate headers), and a caching proxy can provide a cached response if available or forward the request to upstream servers if no cached response is found. In other words, the client would simply do
>>>
>>> response = request.doGet(); // pseudo code
>>>
>>> The point of the uniform interface is to let some intermediary along the network do the job.
>>
>> Just want to check I'm reading you correctly, that client side caching should not be implemented, and one should instead lean on intermediaries along the network?
>
> It is the latter. There are of course exceptions where the client - as an agent like a browser - does its own caching.

In that case, sorry, but I have to disagree. The concept of client+cache is pretty fundamental to the REST style, and mentioned repeatedly throughout the dissertation; even a cursory glance at the text and figures should make this more than clear. Similarly, the HTTP specification(s) spend a significant amount of time discussing client / private caching, and catering for it (see p4-conditional and p6-cache). Hence agents like browsers do their own caching.

It's very good practice to cache at the edges, every edge, including client and server, and of course at intermediaries along the way. I personally could not bring myself to suggest to anybody that creating a client+cache is a bad idea. Quite the opposite, in fact: it's a wonderful idea.
That said, it is of course wise to take the design principles of the REST style and apply them to the design of your components; both client and cache connectors should be hidden by an agent, exposing a nice interface to communicate - as Mike Kelly noted earlier with regards to XHR.

I have to say that I can't find anything wrong with httpcache4j; it's a good example of a client cache. One would be wise to wrap it up in another layer / interface which makes the cache and client connectors invisible to "the end user" - httpcache4j is one of those connectors that you take and wrap up inside your own agent, even if that agent is only intended to expose a nice programmable interface for application developers to use (again, like XHR).

Best,

Nathan
On Jan 26, 2011, at 8:30 AM, Nathan wrote:
> In that case, sorry, but I have to disagree.

Fair enough :)

When considering operability and scalability, or when the client is running on a web server (like a front-end talking to some other service), I would not opt for client-side (in particular, in-process) caching.

Subbu
On Jan 26, 2011, at 1:38 AM, Mike Kelly wrote: > Saying that, I imagine using an 'off-the-shelf' forward _shared_ proxy locally may present some problems in terms of handling responses with the private directive. Are there options for squid and/or traffic to turn on a 'private' mode? I don't think either of them cache private responses by default. There may be plugins or mods to support that. Subbu
Subbu Allamaraju wrote:
> On Jan 26, 2011, at 8:30 AM, Nathan wrote:
>
>> In that case, sorry, but I have to disagree.
>
> Fair enough :)
>
> When considering operability and scalability, or when the client is running on a web server (like a front-end talking to some other service) I would not opt for client side (in particular in process) caching.

It depends very much on the use case for me. If I'm just using HTTP for RPC (interacting with some realtime service like a payment service, leaning heavily on POST) then there's little point (there's nothing to cache!) - if, however, you're interacting with the web at large (anything that involves GETting stuff from URIs) then caching will definitely be beneficial.

Quite possibly this mail chain, like many REST-related ones, is also suffering from massive overuse of the term 'client', using it to refer to an HTTP Client, a Client+Cache, an Agent, a Component, an Application and so forth. That could have some bearing on the messages being conveyed.

Best,

Nathan
--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote:
>
> "Philippe Mougin" wrote:
> >
> > Now, how would you do that in the context of HTTP? Returning a 400 was my first thought.
> >
>
> If the server accepted the request, then no error occurred, so respond 2xx. Only respond 4xx if the server can't process the request at all.
>
> Your question is appropriate to REST -- accepting and processing the request, but responding 4xx, wouldn't be self-descriptive messaging any more than it is to send "page not found" using 200 OK.

Hi Eric, thanks for helping (as usual!)

This is a sensible approach, and I'm applying it now. As an added benefit, it leads one to conceptualize and document the application in a slightly different but more correct way, redefining what counts as a bad or an acceptable request for this interaction.

In the response, it also seems sensible to include information about what was processed and what was not, and why, i.e. what made the server unable to accept some of the data in the request (this was suggested off list).

Best,
Philippe
On Jan 26, 2011, at 8:55 AM, Nathan wrote: > Subbu Allamaraju wrote: >> On Jan 26, 2011, at 8:30 AM, Nathan wrote: >>> In that case, sorry, but I have to disagree. >> Fair enough :) >> When considering operability and scalability, or when the client is running on a web server (like a front-end talking to some other service) I would not opt for client side (in particular in process) caching. Minor correction - I meant to say "in-process" and not "client-side". > > Depends very much on the use case for me, if I'm just using HTTP for RPC usage (interacting with some realtime service like a payment service, leaning heavily on POST) then there's little point (if there's nothing to cache!) - if however, you're interacting with the web at large (anything that involves GETting stuff from URIs) then caching will definitely be beneficial. Yes - client side proxy caching is fairly common. > > Quite possibly this mail chain, like many REST related ones, is also suffering from massive overuse of the term 'client', using it to refer to an HTTP Client, a Client+Cache, an Agent, a Component, an Application and so forth. That could have some bearing on the messages being conveyed. > > Best, > > Nathan
"Philippe Mougin" wrote: > > "Eric J. Bowman" wrote: > > > > "Philippe Mougin" wrote: > > > > > > Now, how would you do that in the context of HTTP? Returning a 400 > > > was my first thought. > > > > > > > If the server accepted the request, then no error occurred, so > > respond 2xx. Only respond 4xx if the server can't process the > > request at all. > > > > Your question is appropriate to REST -- accepting and processing the > > request, but responding 4xx, wouldn't be self-descriptive messaging > > any more than it is to send "page not found" using 200 OK. > > Hi Eric, thanks for helping (as usual!) > You're welcome. > > This is a sensible approach, that I'm applying now. > As an added benefit it somewhat leads to conceptualize and document > the application in a slightly different but more correct way, > redefining what is considered as a bad or acceptable request for this > interaction. > I've run into this myself, but it's a bit ticklish to solve. Assuming a system which accepts <em> but not <b> receives a request containing a <b>, it's straightforward to respond 409 with an entity which reflects the changes the server requires (<b> replaced with <em>) such that the user-agent can re-submit the request correctly (even automatically). Using an error response alerts the user-agent to the fact that a problem occurred; it's up to the user-agent how to handle user notification -- to approve the re-submit, or just alert that it happened, or do nothing, configurable or not. But, what to do if we want to accept the request, changing <b> to <em> without user approval? The 202 response seems appropriate at first, but it would be mis-used in this case, since it indicates that the process which changed <b> to <em> is pending. Although 202 with instructions to approve the pending changes, before actually changing the resource state, would be OK. 
So there are a couple of ways to keep the user-agent in the loop (so to speak) depending on whether you want that logic on the client side or the server side. But, if the system changes <b> to <em> and the user-agent is kept out of the loop, then it's just a 200/201 response. If interested, the user can compare the result with the request and see what changes were made, but HTTP makes no special provisions for this case as it isn't a messaging-between-connectors concern -- it's what servers do, all the time (consider a blog where you can't say ****). IOW, there is no HTTP 2xx code for "acceptable error". > > In the response, it seems also sensible to include information about > what was processed and what was not, and why, i.e., what made the > server unable to accept some of the data in the request. > Yes, that's the purpose of 202 and 409, but this is only sensible when there's some sort of process requiring the user-agent's involvement in confirming the change. If you're trying to save that round-trip, then any information of that sort in a 200 response doesn't make any sense, because the transaction has already completed. HTH. -Eric
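The two strategies above (keep the user-agent in the loop with a 409 plus a corrected entity, or accept silently with 200) can be sketched as follows. This is a hypothetical illustration reusing the <b>-to-<em> example from the thread; the names and the strict/lenient switch are invented for the example, not taken from any framework.

```java
// Sketch of a server deciding between "reject with corrections" (409)
// and "accept with silent correction" (200) for a disallowed element.
public class LenientServer {

    // The example constraint from the thread: <em> is accepted, <b> is not.
    static String normalize(String body) {
        return body.replace("<b>", "<em>").replace("</b>", "</em>");
    }

    // strict: respond 409, with the corrected entity in the response body,
    //         so the user-agent can re-submit (possibly automatically).
    // lenient: store the corrected content and respond 200/201.
    static int handle(String body, boolean strict) {
        String corrected = normalize(body);
        boolean changed = !corrected.equals(body);
        if (changed && strict) {
            return 409; // response entity would carry `corrected`
        }
        return 200; // resource state updated with `corrected`
    }
}
```

In the strict branch the user-agent stays responsible for approving the change; in the lenient branch the correction is, as Eric notes, just "what servers do" and gets no special status code.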
Hello,
(it seems that my email from my mail client didn't find its way to the list - so this is another try)
I asked this question already on a platform that mainly deals with the other side of this topic (Linked Data/Semantic Web, see [1]). However, I thought it might also be worth asking a REST community to help clarify this issue.
I know there are also already some discussions on this mailing list that are related somewhere, somehow to this topic (see [2,3,4,5] at least, especially in the recent past). Nevertheless, I haven't found any information resource that satisfactorily (from my POV) covers this topic.
So, here we go. Here are my findings and thoughts about the relation of Linked Data/Semantic Web to REST (cf. [1]):
How are
* the principles of Linked Data as a data publishing guide (independent of Semantic Web technology) and
* the Semantic Web as a common, standardized technology stack for machine-processable knowledge representation and management in the Web [6]
related to
* the principles of REST as an architectural style for distributed hypermedia systems?
Let's take the constraints of the REST architectural style as figured out [7]:
* Resource Identification
* Uniform Interface
* Self-Describing Messages
* Hypermedia Driving Application State
* Stateless Interactions (see [8] for a good explanation of what stateless means in this context)
Resource Identification is clearly addressed in points 1 (URIs) and 2 (HTTP URIs) of the Linked Data principles as defined by timbl[9]. However, the explicit suggestion to use HTTP URIs is against the REST feature of a uniform generic interface between components ("A REST API should not be dependent on any single communication protocol", see [10]). Identification is separated from interaction.
Is the common, layered Semantic Web technology stack an implementation of a Uniform Interface re. REST principles? Or is it only HTTP as communication protocol? And what does "The same small set of operations applies to everything" then mean? Do I have to enable the processing of every operation on every (information) resource? Or does this mean that I only have to provide uniform behaviour when processing operations on (information) resources, i.e. if a specific operation is not possible or allowed on a specific resource, then the component has to communicate this in a uniform way?
A check regarding how the small set of operations of the Uniform Interface has to be supported (using HTTP as the implementation example):
HTTP operations are generic: they are allowed or not, per resource, but they are always valid. (see [11])
This is in accord with my last statement.
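That reading can be made concrete with a small dispatch sketch (hypothetical code, not from any framework): method semantics are never redefined per resource; a method is either allowed, not allowed on this resource (405, with an Allow header in a real response), or not implemented by the server at all (501).

```java
import java.util.Set;

// "HTTP operations are generic: they are allowed or not, per resource,
// but they are always valid" - unknown and disallowed methods fail
// uniformly, never with resource-specific semantics.
public class UniformInterface {
    static final Set<String> HTTP_METHODS =
            Set.of("GET", "HEAD", "POST", "PUT", "DELETE", "OPTIONS", "TRACE");

    // allowedOnResource: the subset of methods this resource supports.
    static int dispatch(String method, Set<String> allowedOnResource) {
        if (!HTTP_METHODS.contains(method)) return 501; // not implemented by the server
        if (!allowedOnResource.contains(method)) return 405; // Method Not Allowed
        return 200; // perform the standard semantics of the method
    }

    // Demo resource that supports only GET and PUT.
    static int dispatchDemo(String method) {
        return dispatch(method, Set.of("GET", "PUT"));
    }
}
```

The uniformity lies in the failure modes: a client never has to learn per-resource meanings for GET or DELETE, only whether they are available.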
As the common, layered Semantic Web technology stack uses HTTP as communication protocol, it also uniformly defines/provides the small set of operations of the Uniform Interface. However, the media types define processing models ("Every media type defines a default processing model.", see [12]). Thereby, layered encoding is possible (see [13]), e.g. "application/rdf+turtle": the RDF Model as knowledge representation structure (data model) and Turtle as syntax (other knowledge representation languages, i.e. RDF Schema, are provided "in-band", via namespace references). Furthermore,
The media type identifies a specification that defines how a representation is to be processed. (see [14])
Side note: I know there is some progress in providing media type specifications as resources with a URI. However, as far as I know, their resource URIs lack a good machine-processable, dereferenceable specification description, i.e. there is no machine-processable HTML specification that enables a machine agent to know that "the anchor elements with an href attribute create a hypertext link that, when selected, invokes a retrieval request (GET)" (this issue is derived from a community statement and is not really verified; however, I currently would agree with it ;) ; please correct me if this assertion is wrong). All in all, an agent must be able to automatically learn the processing model of a previously unknown media type, if wanted (analogous to the HTTP Upgrade header field). I know that there is also some progress (discussion) in the TAG community re. a better introduction of new media types.
To summarize, the important aspect is that the media type specifications, and knowledge representation language specifications in general, have to define the processing model of specific link types (e.g. href in HTML) also in a machine-processable way (is this currently really the case? - I would say no!). This is addressed by the constraints "Self-Describing Messages" and "Hypermedia Driving Application State" (a.k.a. HATEOAS).
That means I would (currently) conclude that only the methods of the HTTP protocol are an implementation of a set of operations of a Uniform Interface, and that the Semantic Web knowledge representation languages are related to the other two constraints.
Self-Describing Messages are enforced for machine processing by using the common knowledge representation languages of the Semantic Web (i.e. RDF Model, RDF Schema, OWL, RIF) as a basis, and all knowledge representation languages (incl. further Semantic Web ontologies) are referenced in this 'message'. This is somehow generalized in the third Linked Data principle as defined by timbl[9] ("provide useful information, using the standards").
The fourth Linked Data principle as defined by timbl[9] ("Include links to other URIs") is somehow related to the Hypermedia Driving Application State constraint of the REST principles. This principle can again be strengthened for better machine processing by using the common knowledge representation languages of the Semantic Web as a basis. However, I'm a bit unclear how the links drive my application state, although I guess that the application state changes when navigating to a resource by dereferencing a link (HTTP URI).
This is explained in the introduction section of Principled Design of the Modern Web Architecture[13]:
The name "Representational State Transfer" is intended to evoke an image of how a well-designed Web application behaves: a network of Web pages forms a virtual state machine allowing a user to progress through the application by selecting a link or submitting a short data-entry form, with each action resulting in a transition to the next state of the application by transferring a representation of that state to the user.
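That quoted passage can be reduced to a toy sketch (the representations and link structure below are invented for illustration): the client hard-codes no URIs beyond the entry point; each transition comes from a link found in the current representation.

```java
import java.util.Map;

// Toy hypermedia client: application state advances only by following
// links ("next" relations) discovered in the current representation.
public class HypermediaWalk {

    // Hypothetical "web": each resource's representation maps a link
    // relation to a target URI. An empty map means no outgoing links.
    static final Map<String, Map<String, String>> WEB = Map.of(
            "/start", Map.of("next", "/step2"),
            "/step2", Map.of("next", "/done"),
            "/done", Map.of());

    // Dereference "next" links until none remain; each step is a state
    // transition driven by the representation, not by client knowledge.
    static String follow(String entryPoint) {
        String uri = entryPoint;
        String next;
        while ((next = WEB.get(uri).get("next")) != null) {
            uri = next;
        }
        return uri;
    }
}
```

Dereferencing each link is exactly the "transition to the next state of the application" the quote describes; swap the link structure on the server and the client's path changes without any client modification.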
Stateless Interaction is not really covered by the Linked Data principles as defined by timbl[9], is it? Although, when realizing "state as a resource" (cf. [15]), I can again use the common knowledge representation languages of the Semantic Web as a basis for describing states, and use HTTP URIs to make these resources accessible, too.
Would you agree with (parts of) my interpretation?
Finally, are the principles of Linked Data really intended to be read-only? I thought read and write would fit the principles of REST better, wouldn't they?
Sources where this topic is also discussed:
* Linked Data for RESTafarians [16]
* Linked Data and REST architectural style [2]
* RESTful Design Patterns, httpRange-14 & Linked Data [17]
* Principled Design of the Modern Web Architecture [13]
Thanks a lot for all your efforts in participating in this discussion and clarifying this issue.
Cheers,
Bob
[1] http://www.semanticoverflow.com/questions/2763/the-relation-of-linked-data-semantic-web-to-rest
[2] http://tech.groups.yahoo.com/group/rest-discuss/message/12181
[3] http://tech.groups.yahoo.com/group/rest-discuss/message/17057
[4] http://tech.groups.yahoo.com/group/rest-discuss/message/16403
[5] http://tech.groups.yahoo.com/group/rest-discuss/message/16971
[6] http://smiy.wordpress.com/2011/01/10/the-common-layered-semantic-web-technology-stack/
[7] http://dret.net/netdret/docs/soa-rest-www2009/rest#%2811%29
[8] https://www.blogger.com/comment.g?blogID=6432484966567158349&postID=3673658851645058486
[9] http://www.w3.org/DesignIssues/LinkedData.html
[10] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
[11] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven#comment-732
[12] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven#comment-730
[13] http://www.ics.uci.edu/~taylor/documents/2002-REST-TOIT.pdf
[14] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven#comment-754
[15] http://dret.net/netdret/docs/soa-rest-www2009/rest#%2843%29
[16] http://webofdata.wordpress.com/2009/10/09/linked-data-for-restafarians/
[17] http://efoundations.typepad.com/efoundations/2009/07/restful-design-patterns-httprange14.html
On Mon, Jan 31, 2011 at 8:00 AM, Reza Lesmana <lesmana.reza@...> wrote:
> You might understand a little bit more if I say that application state change, from the perspective of a user who navigates through the application, means almost the same as running the business process that the user needs to achieve the goal. I quote from Jim Webber, et al., REST In Practice:
>
> Consumers in a hypermedia system cause state transitions by visiting and manipulating resource state. Interestingly, the application state changes that result from a consumer driving a hypermedia system resemble the execution of a business process. This suggests that our services can advertise workflows using hypermedia.
>
> But, be careful in choosing the media type for embedding hypermedia in it. According to Mike Amundsen, XML cannot be used as a media type for hypermedia, since XML doesn't have native hyper-linking semantics (http://www.amundsen.com/hypermedia/). Atom does, so it's good for embedding hypermedia.
>
>> However, I'm a bit unclear how the links drive my application state. Although, I guess that the application state would change when navigating to a resource by dereferencing a link (HTTP URI).
>> This is explained in the introduction section of Principled Design of the Modern Web Architecture[13]:
"Bob Ferris" wrote:
>
> Resource Identification is clearly address in point 1 (URIs) and 2
> (HTTP URIs) of the Linked Data principles as defined by timbl[9].
> However, the explicit suggestion of the use of HTTP URIs is against
> the REST feature of a uniform generic interface between components
> ("A REST API should not be dependent on any single communication
> protocol", see [10]). Identification is separated from interaction.
>
HTTP URIs aren't dependent on a single communication protocol; both
WebSockets and Waka are expected to re-use HTTP URIs. I assure you
REST doesn't literally say, "don't use HTTP URIs." It's a finer point.
-Eric
"Bob Ferris" wrote:
>
> Is the common, layered Semantic Web technology stack a implementation
> of a Uniform Interface re. REST principles?
>
No, they are unrelated. REST is primarily concerned with the semantics
of the messaging between network connectors. All REST cares about
regarding the data type is that it be standardized and capable of
progressive rendering (processable as a stream) of choices for transitioning to
other application states. SemWeb is all about the semantics of the
payload, as interpreted by the components, which has no bearing on
messaging between connectors (IOW, how to tell a machine how a
particular state transition relates to its goal).
>
> Or is it only HTTP as communication protocol?
>
Simply stated, you can GET or PUT your RDF documents with FTP, too.
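The separation Eric describes, between identification (the URI) and interaction (the protocol), can be sketched by dispatching on the URI scheme alone; the `handlers` mapping below is purely illustrative, not any real API:

```python
from urllib.parse import urlparse

# Illustrative only: the URI is the identifier; the scheme merely selects
# which protocol a client uses to interact with the identified resource.
handlers = {
    "http": "retrieve with HTTP GET / store with HTTP PUT",
    "ftp":  "retrieve with FTP RETR / store with FTP STOR",
}

def interaction_for(uri):
    # the name stays the same; only the interaction mechanism varies
    return handlers.get(urlparse(uri).scheme, "no handler for this scheme")
```

The same document URI could thus be fetched over either protocol without the identification changing.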
>
> And what does "The same small set of operations applies to
> everything" then mean? Do I have to enable processing of every
> operation on every (information) resource?
>
No. What's uniform isn't the set of methods allowed by all resources;
it's the semantics of whatever methods a given resource allows. You
don't use GET to delete stuff, or POST to retrieve stuff, or FOO to
denote nonstandardized semantics.
>
> HTTP operations are generic: they are allowed or not, per
> resource, but they are always valid. (see [11])
>
> This is in accord with my last statement.
>
Any HTTP resource has a DELETE method which removes the resource,
whether it's allowed (for the requesting user) or not, and whether the
method is implemented (501 response) or not; it never means anything
other than "remove the resource".
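The distinction between a method being allowed and being implemented can be sketched as a tiny dispatch table; the in-memory store and the `allowed` default are hypothetical, while the status codes follow HTTP:

```python
# Sketch of uniform-interface dispatch (hypothetical in-memory store).
# 501 = the server does not implement the method at all;
# 405 = the method exists, but this resource does not allow it;
# DELETE, when it runs, only ever means "remove the resource".
resources = {"/doc/1": "hello"}

def handle(method, path, allowed=("GET", "DELETE")):
    if method not in ("GET", "PUT", "POST", "DELETE"):
        return 501  # Not Implemented: server has no such method
    if method not in allowed:
        return 405  # Method Not Allowed: policy for this resource/user
    if method == "GET":
        return 200 if path in resources else 404
    if method == "DELETE":
        resources.pop(path, None)  # uniform semantics: remove the resource
        return 204
    return 405
```

A nonstandard method like FOO gets 501; a standard but disallowed one gets 405; the semantics of the ones that run are fixed.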
>
> As the common, layered Semantic Web technology stack uses HTTP as
> communication protocol, it uniformly defines/provides also the small
> set of operations of the Uniform Interface.
>
I don't understand. HTML and Atom define HTTP method operations
(beyond the default retrieval operation), SemWeb doesn't.
>
> Side note: I know, there is some progress in providing media type
> specification as resources with a URI.
>
No, there isn't. Yes, it's been raised repeatedly, and shot down
repeatedly, here and elsewhere. There is an effort underway to
identify and resolve problems with media types and their registration:
http://tools.ietf.org/html/draft-masinter-mime-web-info
I don't recall if it's in the document or the surrounding debate, but
this notion of allowing URIs as tokens in Content-Type has been
explicitly rejected many times now as poor architectural design. Most
recently, this reality was summed up earlier today in seven words:
http://lists.w3.org/Archives/Public/www-tag/2011Jan/0058.html
Registries are oversight mechanisms. It's a non sequitur to suggest
replacing registries with URIs, without first explaining why oversight
isn't required. Otherwise the result is registered URIs, so the
"problem" isn't actually solved, only compounded by more bits in the
header and reduced visibility.
>
> All in all, an agent must be able to automatically learn the
> processing model of a previously unknown media type, if wanted
> (analogous to the HTTP Upgrade header field).
>
I have no idea what you're talking about, what purpose this serves,
what need it fills, or what it has to do with architecture. Nor do I
see why the subject of machines learning media type processing models
keeps coming up, particularly since the level of AI sophistication
required for a machine, with no advance knowledge of either, to
somehow learn how to render HTML or SVG or anything else is decades
away, IBM's Jeopardy-playing machine notwithstanding.
Media types are meant to explain the processing model of a data type to
humans, who then write code implementing that processing model. Media
types are not targeted towards machine readability, an architectural
requirement I can't begin to fathom when mentioned as a deficiency.
The scope of the effort to fix the registry does not include machine-
readable media types.
>
> This is addressed by the constraints "Self-Describing Messages" and
> "Hypermedia Driving Application State" (a.k.a. HATEOAS).
>
The constraint is "self-descriptive messaging" and has nothing to do
with the self-describing messages (payloads) of SemWeb. All a
component needs to be able to do, when it comes to Content-Type, is
recognize the token to determine if it has the corresponding codec.
The component takes that token to be self-descriptive of the sender's
intended processing model, thus hopefully rendering HTML as plaintext
if the representation was labelled as text/plain.
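The component behaviour Eric describes, dispatching on the Content-Type token alone, can be sketched as a codec lookup; `strip_tags` and the `codecs` table are hypothetical stand-ins for real renderers:

```python
import re

def strip_tags(body):
    # hypothetical stand-in for a real HTML renderer
    return re.sub(r"<[^>]+>", "", body)

# The component only inspects the token; it never sniffs the payload.
codecs = {
    "text/plain": lambda body: body,  # plaintext: render bytes as-is
    "text/html":  strip_tags,         # HTML: hand off to the HTML codec
}

def render(content_type, body):
    codec = codecs.get(content_type)
    return codec(body) if codec else None  # no codec: cannot process
```

So a representation labelled text/plain is rendered as plaintext even when it happens to contain HTML markup: the sender's token, not the payload, is taken as self-descriptive of the intended processing model.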
The notions that REST has anything to do with machines generating that
codec by inferring something from a URI in Content-Type, or that the
payload itself must be self-describing (self-describing is a SemWeb
term having to do with extensible document semantics; self-descriptive
is a REST term having to do with registered header semantics), are
pervasive myths whose origins baffle me.
>
> That means, I would (currently) conclude that only the methods of the
> HTTP protocol are an implementation of a set of operations of a
> Uniform Interface and Semantic Web knowledge representation languages
> are related to the other two constraints.
>
You're drawing the wrong conclusion. REST's hypertext constraint
requires that instructions for how to proceed to other application
states be included in the payload, that's all. SemWeb operates at a
layer above this, concerned with making those hypertext instructions
machine-readable (self-describing).
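The hypertext constraint as stated here, instructions for reaching other application states carried in the payload itself, can be sketched with a hypothetical representation and a client that selects a transition by link relation rather than by hardcoded URI:

```python
# Hypothetical hypermedia representation: the links list enumerates the
# transitions to other application states available from this state.
representation = {
    "order": {"status": "unpaid"},
    "links": [
        {"rel": "payment", "href": "/orders/42/payment"},
        {"rel": "cancel",  "href": "/orders/42"},
    ],
}

def next_href(rep, rel):
    # the client chooses a transition by its relation name;
    # the server remains free to change the URIs at any time
    for link in rep["links"]:
        if link["rel"] == rel:
            return link["href"]
    return None
```

SemWeb would then layer on top of this, making the meaning of each relation machine-readable.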
It's possible to make a RESTful SemWeb application, but not required;
just as it's possible to add SemWeb to a REST system, or not -- the
architectural style of a system is not a product of using SemWeb
technologies. SemWeb is component architecture, REST is connector
architecture.
>
> Self-Describing Messages are enforced for machine processing by using
> as basis the common knowledge representation languages of the
> Semantic Web (i.e. RDF Model, RDF Schema, OWL, RIF) and all knowledge
> representation languages (incl. further Semantic Web ontologies) are
> referenced in this 'message'. This is somehow generalized in the
> third Linked Data principle as defined by timbl[9] ("provide useful
> information, using the standards").
>
I can't stress enough, not to confuse self-describing messages with
self-descriptive messaging. Tim's principles apply for the wide-open
semantics of describing anything, as a payload. Roy's principles apply
to the limited set of uniformly-understood semantics of messaging (i.e.
HTTP headers) which may be extended across network boundaries.
-Eric
One anecdotal thing I've found in my reading of Roy's REST dissertation and some of the Semantic Web documentation concerns the nature of resources. Compare:

"""
The key abstraction of information in REST is a resource. Any information that can be named can be a resource: a document or image, a temporal service (e.g. "today's weather in Los Angeles"), a collection of other resources, a non-virtual object (e.g. a person), and so on. In other words, any concept that might be the target of an author's hypertext reference must fit within the definition of a resource. A resource is a conceptual mapping to a set of entities, not the entity that corresponds to the mapping at any particular point in time.
""" [1]

with

"""
It is important to understand that using URIs, it is possible to identify both a thing (which may exist outside of the Web) and a Web document describing the thing. For example the person Alice is described on her homepage. Bob may not like the look of the homepage, but fancy the person Alice. So two URIs are needed, one for Alice, one for the homepage or a RDF document describing Alice. The question is where to draw the line between the case where either is possible and the case where only descriptions are available.
""" [2]

I guess the closing of httpRange-14 by the TAG was a resolution to this difference [3]. But I still find that the REST style, where URIs unambiguously identify Resources and not Representations, fits my brain better.

W3C's Cool URIs for the Semantic Web splits the world of resources into Documents [4] and Real World Objects [5]. This sounds ok-ish in theory, but it really hasn't worked out all that well in the practice of web publishing [6]. The key problem for me is that when semweb folks talk about "documents" I can't help but hear "representations".
For example, when I'm publishing something on the web with a web framework like RubyOnRails or whatever, I have a model of something, say a User, a URL route like /user/:id, and some controller code that goes and fetches that model instance and delivers up some HTML for the user using a template. Every time I do something like this, it just isn't feasible for me to think: hmmm, is this URL identifying a real world object? Is the database record for the User an Information Resource? Or is my database record about a Real World Thing or a Document? Should I really have two URL paths here, one for the Document about the User and one for the User themself? Should I use a # in that URI, or use the 303 redirect to indicate it is the Real World Object? YAGNI.

I've found it easier to look at the World Wide Web through REST colored glasses, where my URLs identify Resources and my Server delivers up Representations of them. And yet, RDF remains a nice data model (with a few decent Representations) for describing web graphs, and it has rdf:type for explicitly documenting the nature of the Resource.

//Ed

[1] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2_1_1
[2] http://www.w3.org/TR/cooluris/#distinguishing
[3] http://www.w3.org/DesignIssues/HTTP-URI2
[4] http://www.w3.org/TR/cooluris/#oldweb
[5] http://www.w3.org/TR/webarch/#id-resources
[6] http://www.w3.org/TR/cooluris/#semweb
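The hash-URI / 303 machinery being pushed back on here looks roughly like this in practice. This is a minimal sketch of the httpRange-14 convention; the paths are invented, and the handler shape (status, headers, body) is hypothetical:

```python
# Sketch of the 303 pattern: the URI naming the person answers with a
# "303 See Other" redirect to the document describing her. Paths are made up.
def respond(path):
    if path == "/id/alice":        # names the person herself (a real-world thing)
        return 303, {"Location": "/doc/alice"}, None
    if path == "/doc/alice":       # names the document describing her
        return 200, {"Content-Type": "text/html"}, "<p>About Alice</p>"
    return 404, {}, None
```

Every model in an application would need this doubled routing, which is exactly the publishing overhead the YAGNI complaint above is about.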
Hi Bob,

Generally they are orthogonal, in the same way that a logic statement is orthogonal to a web server.

REST is an architectural style and set of constraints one considers and applies when webizing a technology and the components related to the use of that technology.

Linked Data is webized information: each datum of information is a logical statement expressed as a typed link. A typed link is a statement which expresses the relation between two things; each thing can be a literal (a string, a number, denoting itself), an unnamed thing that merely exists (a blank node), or a globally named thing (a named node, a logical constant). Each statement can be considered true or false, and when you combine all the statements which you consider true about a thing, that set of statements forms a description of the thing (which can also be considered your current belief state: what you believe to be true about the thing(s) you're considering). As Linked Data is webized, it uses URIs as names for things and relations, and encourages the exposing of statements about those things or relations when you look up (dereference) the corresponding URIs.

Thus one would apply the style and constraints of REST not to Linked Data itself, but to the components: the publishing of Linked Data, the retrieval of Linked Data over a network, and the construction of agents which interact with the web of linked data.

Conversely, since REST is a set of constraints optimized for the "good old fashioned web of documents", one can review the needs and usage of Linked Data to come up with additional constraints, to identify mismatches, or to create an entirely new architectural style and set of constraints which one would then apply to the web specifications (URI, HTTP etc.) in order to optimize them. Such examples may include:

- the tight coupling of URIs to protocols in the common case; for example, one may need PUT, POST, DELETE to operate through HTTP+TLS whilst GET is through basic HTTP.
- REST and web optimization for large-grain hypermedia transfer, whilst linked data deployment typically needs fine-grained data transfer / huge-grain data-set transfer.
- analysis and inclusion of composite application state and single-client multi-server interactions.
- the missing reverse links.
- pipelined requests and async message transfer (allowing multiple requests to be sent at once and responses sent back in an optimized fashion).

Essentially, REST is over a decade old, and whilst very good, it comprises constraints and a style optimized for common use cases which are no longer as common. Linked Data, client-side "ajax" applications, cloud storage and cloud computing, version control over HTTP and many other now-common uses are simply uncatered for, and the REST style is not optimized for these cases.

The only other relation between REST and Linked Data is that they both use HTTP URIs to identify resources. The web specification and REST definitions of the term "resource" are so abstract that this often leads to confusion and inconsistent naming of "resources", hence httpRange-14 and many other issues.

Best,
Nathan
Hi Bob,

Glad you've brought this up, and I reckon you've pretty well scoped the domain.

Generally I'd say that Linked Data/Semantic Web technologies do fit well with the REST model. The separation between resources and their representations allows resources that correspond to non-document things (concepts, real-world objects etc.) to have HTTP-friendly representations. (Having said that, httpRange-14 is a bit of a rathole, but the TAG's resolution with 303s etc. seems Good Enough.)

So this part I suggest could be called "transparent": there aren't really any interop obstacles on the horizon from the perspectives of REST, HTTP or Semantic Web/Linked Data. But your following comments do raise some very interesting issues.

> [...] Hypermedia Driven Application State of the REST principles. This principle can again be powered for better machine processing by using the common knowledge representation languages of the Semantic Web as basis.

Ok, agreed. By generalizing from documents to /anything/, the resources and their relationships (via their representations) are potentially a lot more open to direct machine processing.

> However, I'm a bit unclear how the links drive my application state. Although, I guess that the application state would change when navigating to a resource by dereferencing a link (HTTP URI). This is explained in the introduction section of Principled Design of the Modern Web Architecture[13]:
>
> The name "Representational State Transfer" is intended to evoke an image of how a well-designed Web application behaves: a network of Web pages forms a virtual state machine allowing a user to progress through the application by selecting a link or submitting a short data-entry form, with each action resulting in a transition to the next state of the application by transferring a representation of that state to the user.

Right, that's my understanding too. With most current applications the state that's needed is fairly trivial.
The primary Web application, the browser, usually only deals with a handful of resources at a time. Even with things like proxies, aggregators, spider-based agents etc., the interactions are individually very simple; it's just that there are a lot more of them going on concurrently.

On the other hand I suspect SW/LD applications are likely to be a lot more demanding in terms of complexity of state. A representation of a resource (plus HTTP headers) can contain a lot more resource-related machine-usable information than an opaque blob or even an HTML document with hrefs.

The water around Web applications has been muddied somewhat by systems that rely on server-side state and coupling to the client: "I can't do that without cookies", and RPC still pops up here and there, however disguised. (Aside: I'd be interested in pointers to material that walks the reader from this mindset to something more RESTful.)

But with SW/LD apps we kind of have a chance to start from year zero and do things *right*. The links you provided and the Linked Data Patterns book [1] are good steps in this direction (as hopefully is the Linked Data book [2] too).

> Stateless Interaction is not really covered by the Linked Data principles as defined by timbl[9], or? Although, when realizing "state as a resource" (cf. [15]), I can use again the common knowledge representation languages of the Semantic Web as basis for describing states and using HTTP URIs to make these resources also accessible.

Yes - although the "transparency" here is not so obvious.

> Would you agree with (parts of) my interpretation?

Generally, yes.

> Finally, are the principles of Linked Data really only intended to be read-only? I thought read and write would better fit the principles of REST, or?

A very good point. Read, write and the other operations covered by HTTP methods should ideally be a core part of LD/SW systems from the get-go (otherwise things will be a bit boring!).
It has taken the Web quite a while to get more read/write despite the capability being there from the start. CMSs and blogging tools, and things like Twitter and Facebook, do help extend the medium from broadcast to peer (few publishers/many consumers -> many publishers/many consumers). Even within these useful systems there are various Web anti-patterns, most notably the closed data silo as opposed to linked open data (LOD, [3]). (Heh, speaking of which I might well repost this mail on my blog, or at least tweet it...)

Hopefully, thanks to the explosion in publication of RDFa - 3.6% of webpages now contain RDFa [4] - we will soon be seeing a lot more diverse use made of the technologies from folks that aren't coming straight from the traditional (incomprehensible ;) SW/LD communities.

There may well also be some more useful infrastructure subsystem bits (akin to those described by Fielding in his thesis) enabled by RESTful HTTP + semweb tech. For example, an RDF store (server- or client-side) is essentially a cache of a little chunk of the Web of Data. A SPARQL endpoint offers an efficient mode of access to that cache. Going one step further, a generic intelligent linked data proxy could be composed this way.

The current generation of mainstream RDF/SW/LD applications seems to be mostly limited to either catalog-like systems (e.g. the drug DBs and the BBC) or the augmentation of search (e.g. Yahoo! SearchMonkey and Google Rich Snippets). Useful though these may be, they aren't exactly inspiring.

Going off-topic a little, I reckon the parts needed for a really useful, compelling and *interesting* Web of data include:

* improved linkage between data sets, to facilitate easier discovery
* refactoring of traditional data-based activities to use (just-in-time) linked data
* new user interface paradigms to work with linked data
* a lot more imagination!

I reckon there's easily enough momentum to make progress down this track inevitable - the question is what we need to do to lubricate the wheels and move a bit faster.

Cheers,
Danny.

[1] http://patterns.dataincubator.org/book/
[2] http://tomheath.com/blog/2011/01/the-linked-data-book-draft-table-of-contents/
[3] http://esw.w3.org/SweoIG/TaskForces/CommunityProjects/LinkingOpenData
[4] http://tripletalk.wordpress.com/2011/01/25/rdfa-deployment-across-the-web/
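Danny's observation that an RDF store is essentially a cache of a chunk of the Web of Data, with a SPARQL endpoint giving efficient access to it, can be sketched as a set of triples plus a basic graph-pattern match; the sample triples and prefix names are invented for illustration:

```python
# A toy RDF store: a set of (subject, predicate, object) triples.
# The data is invented for illustration.
triples = {
    ("ex:alice", "rdf:type",   "foaf:Person"),
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:bob",   "rdf:type",   "foaf:Person"),
}

def match(s=None, p=None, o=None):
    # None acts as a wildcard, like a variable in a SPARQL triple pattern
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}
```

A SPARQL engine is, at heart, this pattern match plus joins over shared variables, run against exactly such a cached chunk of the Web of Data.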
Bob, there's already quite a few of the semantics crowd working explicitly in this area. Reto Krummenacher, who I worked with in Innsbruck, and I had a tutorial on 'Linked Open Services <http://www.linkedopenservices.org/blog>' at ISWC last year. There's work at Karlsruhe (where I moved) on 'Linked Data Services', looking closely into the interlinkage of service results as well as 'query planning' over RDF-based retrieval operations. Also there's work (from my former colleagues) at the OU, who originally coined the term 'Linked Services'; they're particularly concerned with exposing service descriptions as Linked Data (indeed you'll see their iServe tool as one of the bubbles in the latest Linking Open Data Cloud).

As you observe, most of the best practice around Linked Data is simply about retrieval. Even with updates in SPARQL 1.1 and its bindings (and some other read/write approaches to Linked Data), there's very often a naive notion of pushing complete RDF descriptions of resources / named graphs, and little accommodation of the computation and side effects resulting from interactions, which one expects with services. In Linked Open Services we try to bring more of REST together with Linked Data, and have many (RDF-exchanging) services that create new addressable REST resources, i.e. that can be, say, DELETEd but also POSTed to individually.

I'd say two important questions with RDF-based communications are: (as you said) how to use RDF in the equivalent/supplementary role to HATEOAS (to make the future state and interactions 'machine processable' in an RDF/inference-driven way); and, additionally, how to describe expected messages in a Linked Data compatible way. On the latter question, the Linked Open Services and Linked Data Services work proved almost exactly coincident in both picking SPARQL graph patterns.
Very much like the static data argument for flexibility (extensible graph constraints versus fixed schemas, which are difficult to compatibly extend and difficult to reuse without planning), we find that this is a very intuitive way to describe what the service would like you to submit, and what you can expect to receive back (in RDF form).

I've CCed the Google Group we'd all started together, but not yet advertised, hoping that some of the other guys will jump in here.

Barry
On Mon, Jan 31, 2011 at 6:08 AM, Nathan <nathan@...> wrote:

> Essentially, REST is over a decade old, and whilst very good, it comprises constraints and a style which was optimized for common use cases which are no longer as common; Linked Data, client-side "ajax" applications, cloud storage and cloud computing, version control over HTTP and many other now common uses are simply uncatered for, and the REST style is not optimized for these cases.

REST's Code-on-Demand constraint [1] caters for client-side AJAX just fine. And cloud storage services like Amazon S3 are nice examples of RESTful service. Likewise we've seen some nice work in the version control space [2] that fits well with REST's Hypermedia as the Engine of Application State constraint [3]. One could venture to say that the success of these technologies on the Web has largely been the result of the thought that has gone into REST.

//Ed

[1] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1_7
[2] http://tools.ietf.org/html/rfc5829
[3] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_3_3
On Mon, Jan 31, 2011 at 7:14 AM, Danny <danny.ayers@...> wrote:

> But with SW/LD apps we kind of have a chance to start from year zero and do things *right*. The links you provided and the Linked Data Patterns book [1] are good steps in this direction (as hopefully is the Linked Data book [2] too).

While it can be exciting to think about starting over, the reality is that the Web is deployed and working at scales that would've been hard to imagine in the early days of the WWW. The work done on REST to describe the set of constraints that have allowed this growth to occur counts as doing things "right" in my book.

That isn't to say REST is finished, or that the RDF data model doesn't have something to offer. But it would be nice to see more conscious alignment with REST from the Linked Data crowd, in particular around the issue of what URLs identify, and (for me) unhelpful terminology like Document and Information Resource.

//Ed
Ed Summers wrote:

> On Mon, Jan 31, 2011 at 6:08 AM, Nathan <nathan@...> wrote:
>> Essentially, REST is over a decade old, and whilst very good, it comprises constraints and a style which was optimized for common use cases which are no longer as common; Linked Data, client-side "ajax" applications, cloud storage and cloud computing, version control over HTTP and many other now common uses are simply uncatered for, and the REST style is not optimized for these cases.
>
> REST's Code-on-Demand constraint [1] caters for client-side AJAX just fine. And cloud storage services like Amazon S3 are nice examples of RESTful service. Likewise we've seen some nice work in the version control space [2] that fits well with REST's Hypermedia as the Engine of Application State constraint. One could venture to say that the success of these technologies on the Web has largely been the result of the thought that has gone into REST.

Indeed, I didn't intend to discredit the importance of REST, but rather to position it as something to build on: a subset of what's needed for full web architecture and network / component / application considerations.

There is much scope wrt the CoD constraint, and much usage that is still undocumented / not thoroughly understood. Likewise there's much discussion of applying REST to the cloud storage APIs; for instance, S3 is more RPC over HTTP than REST, and REST is what you would apply to the various cloud APIs in order to standardize them / make them fully interoperable in a RESTful way. As for version control, I was thinking more along the lines of git/hg/svn over HTTP; however, RFC 5829 is a good example.

Out of interest, have you read the dissonance chapters (2 & 3) or CREST [1]?

[1] http://www.erenkrantz.com/CREST/

Best,
Nathan
On Mon, Jan 31, 2011 at 8:48 AM, Nathan <nathan@...> wrote:

> Likewise much discussion of applying REST to the cloud storage APIs, for instance S3 is more RPC over HTTP than REST,

I'm not a REST expert by any stretch of the imagination (just a web developer), but S3 [1] has always seemed particularly RESTful to me. I don't particularly want to get enmeshed in a debate about whether S3 is RESTful or not, when we were actually talking about the Semantic Web (Linked Data) and REST.

> Out of interest have you read the dissonance chapters (2&3) or CREST [1]?

I haven't had a chance to yet. It's been on the to-read pile for a while; thanks for the reminder to pop it off the stack.

//Ed

[1] http://docs.amazonwebservices.com/AmazonS3/2006-03-01/API/
> That isn't to say REST is finished, or that the RDF data model doesn't
> have something to offer. But it would be nice to see more conscious
> alignment with REST from the Linked Data crowd, in particular around
> the issue of what URLs identify, and (for me) unhelpful terminology
> like Document and Information Resource.

cf. my attempt at an explanation of the relations between resource, information resource and document [1]. It is based on my personal definitions (at least), which may also reflect a shared understanding (that is, I don't want to claim that I "own" the definitions, nor that I invented them; I think it's exactly the other way around, I just ordered the thoughts for myself ;) ).

Cheers,
Bob

PS: My thinking in [1] should (from my POV) align with Roy T. Fielding's statement about resources: "Any information that can be named can be a resource: a document or image, a temporal service (e.g. "today's weather in Los Angeles"), a collection of other resources, a non-virtual object (e.g. a person), and so on. In other words, any concept that might be the target of an author's hypertext reference must fit within the definition of a resource." [2] This may contradict somewhat the resource-representation view, where a representation is (strongly constrained) defined as "a sequence of bytes, plus representation metadata to describe those bytes" [2], in combination with the "strong" relation between requested resources and the retrieved representations of exactly these resources (maybe the mother of all problems in the Semantic Web world). I would say the 'temporally varying membership function' isn't the resource itself, because this function is created by its context, i.e. a name. For example, I would still be a human being, or at least something, even if I didn't have a name.

[1] http://infoserviceonto.smiy.org/2010/11/25/on-resources-information-resources-and-documents/
[2] http://www.ics.uci.edu/~taylor/documents/2002-REST-TOIT.pdf
Hello everyone,

We got the release with caching support out the door on Friday last week. You can install the gem using `(sudo) gem in wrest` - if you have the time, please do try it out; we'd appreciate any feedback you may have.

Links:
Source: https://github.com/kaiwren/wrest
Docs: http://rdoc.info/github/kaiwren/wrest/master/frames#Caching
CI server: http://ci.c42.in

Thanks in advance for your time.

Best,
Sidu.
http://c42.in
http://about.me/ponnappa
Am 31.01.2011 14:48, schrieb Nathan:
> make them fully interoperable in a RESTful way. As for version control
> I was thinking more along the lines of git/hg/svn over http, however
> rfc5829 is a good example.

Re. version descriptions and usage, I can also imagine the utilization of Semantic Web ontologies (cf. the discussion on semanticoverflow.com [1]), rather than a central spec with a restricted set of defined link relation types for version descriptions.

Generally, please correct me if I'm wrong here, but I don't really understand all the hassle with the link relation types. As I already stated in a comment on dret's blog [2]:

"For me a link is in its nature a binary relation, which can be typed (and if it's not typed, then we have at least one type ;) ). When looking behind the scenes of the RDF Model, one will also find that binary relation, identified by the predicate position of an RDF triple. Due to its metamodel these are called properties, and they are typed. So every property assigns a specific relation type. I don't see any need for a central link relation type registry, since one can simply resolve the property URI and should ideally get back a description of the processing model of that link type, understandable (processable) by both humans and machines. Furthermore, I thought decentralization is one of the main success criteria of the Web. So why work against this constraint? All in all, why not simply apply the RDF Model to this issue, e.g. as recently propagated with RDFa? I think the knowledge representation structure of the RDF Model is quite ideal for it."

Okay, one can argue that today's link relation type / property definitions don't include a description of the processing model à la HTML's "anchor elements with an href attribute create a hypertext link that, when selected, invokes a retrieval request (GET) on the URI corresponding to the CDATA-encoded href attribute".
However, please correct me if I'm wrong here too: the processing model description quoted above isn't available in a machine-processable form either. But I can imagine such descriptions for this link type as well.

Cheers,
Bob

[1] http://www.semanticoverflow.com/questions/2815/how-do-i-know-model-the-applied-version-of-an-ontology-specification
[2] http://dret.typepad.com/dretblog/2010/11/web-linking.html?cid=6a00d8341f066253ef0147e218c59e970b#comment-6a00d8341f066253ef0147e218c59e970b
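Bob's suggestion, treating a link as a typed binary relation whose type is a dereferenceable URI rather than a token in a central registry, can be sketched in a few lines. All URIs and the lookup table below are made up for illustration; a real agent would issue an HTTP GET on the relation-type URI instead of consulting a local dict.

```python
# A link is a typed binary relation: (source, relation-type, target).
# The relation type is itself a URI; dereferencing it should yield a
# description of the link type. Everything here is illustrative.

Link = tuple  # (source, rel, target)

# Stand-in for the Web: dereferencing a relation-type URI yields its
# (ideally human- and machine-readable) description.
DESCRIPTIONS = {
    "http://example.org/rel/mother": "Links a person to their mother.",
}

def describe(rel_uri: str) -> str:
    # In a real system this would be an HTTP GET on rel_uri.
    return DESCRIPTIONS.get(rel_uri, "(no description retrievable)")

link: Link = (
    "http://example.org/person/123",
    "http://example.org/rel/mother",
    "http://example.org/person/456",
)
```

The point of the sketch is only that no registry lookup is needed: the name of the relation type carries its own address for further description.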
Am 31.01.2011 04:08, schrieb Eric J. Bowman:
> I don't understand. HTML and Atom define HTTP method operations
> (beyond the default retrieval operation), SemWeb doesn't.

HTML and Atom define the processing model for the operations, if by the 'small set of operations' for HTTP we mean the ones defined in [1] (checking the reference, I see that the term used there is 'method'; however, I have often seen Roy T. Fielding use the term 'operation', so sorry for any discrepancy).

> Media types are meant to explain the processing model of a data type to
> humans, who then write code implementing that processing model. Media
> types are not targeted towards machine readability, an architectural
> requirement I can't begin to fathom when mentioned as a deficiency.
> The scope of the effort to fix the registry does not include
> machine-readable media types.

Generally, I think the TAG mailing list is currently a better place for the ongoing "URIs for media types" discussion. Just one small note: I don't think that Roy T. Fielding's REST definition anywhere excludes machines from understanding media type specifications, which from my POV can also be realized via an RDF-Model-based description. Remember also that the Web is a "universal hybrid information space", where 'hybrid' here means the inclusion of both human and machine agents.

> The constraint is "self-descriptive messaging" and has nothing to do
> with the self-describing messages (payloads) of SemWeb. All a
> component needs to be able to do, when it comes to Content-Type, is
> recognize the token to determine if it has the corresponding codec.
> The component takes that token to be self-descriptive of the sender's
> intended processing model, thus hopefully rendering HTML as plaintext
> if the representation was labelled as text/plain.

I guess grasping the difference between 'self-descriptive' and 'self-describing' is a hard task.
I can imagine utilizing Semantic Web knowledge representation languages, e.g. RDF, for both, for instance in the HTTP Link header field. Especially since Roy T. Fielding has suggested the usage of the RDF Model for processing model descriptions that go beyond those of the media type specifications (see [2]).

Cheers,
Bob

[1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html
[2] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven#comment-754
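For what it's worth, the "recognize the token, pick the codec" reading of self-descriptive messaging that Eric describes can be sketched in a few lines. The codec table and mode labels below are illustrative, not any real component:

```python
# Sketch of self-descriptive messaging: the component matches the
# Content-Type token against the codecs it has. The token, not an
# inspection of the payload, decides the processing model.

def parse_media_type(header: str) -> str:
    # Strip parameters, e.g. "text/plain; charset=utf-8" -> "text/plain"
    return header.split(";")[0].strip().lower()

CODECS = {
    "text/html": lambda body: ("hypertext", body),
    "text/plain": lambda body: ("plaintext", body),
}

def process(content_type: str, body: str):
    codec = CODECS.get(parse_media_type(content_type))
    if codec is None:
        return ("unhandled", body)  # no codec for this token
    return codec(body)
```

With this dispatch, an HTML payload labelled `text/plain` is handled as plain text, exactly the behaviour Eric's example calls for.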
> Ok, agreed. By generalizing from documents to /anything/, the resources and their relationships (via their representations), are potentially a lot more open to direct machine processing.
>
I thought this was already done via the redefinition from UDI to URI.
> There may well also be some more useful infrastructure subsystem bits (akin to those described by Fielding in his thesis) enabled by RESTful HTTP + semweb tech. For example, an RDF store (server- or client-side) is essentially a cache of a little chunk of the Web of Data. A SPARQL endpoint offers an efficient mode of access to that cache. Going one step further, a generic intelligent linked data proxy could be composed this way.
>
Thanks for bringing up the "triple stores are caches" idea and its generalization, the "generic intelligent linked data proxy" ("triple stores are intermediaries"; triple stores here in the sense of a whole system à la Virtuoso). To avoid Linked Data falling into the same "data silo" pitfall (cf. [1]), we should be interested in closing the information flow life cycle by bringing information changes back to their origin information services. There is/was a "pushback" initiative on the ESW wiki [2], which invested some thought in this direction. However, this initiative seems to have fallen asleep somehow. One conclusion I drew, also from the findings of this initiative, is the problem of tackling the heterogeneity of the origin non-Semantic-Web information services. Descriptions of such information services may help here (see [3] for a starting point), because the integration of information services on the Web in general is a crucial mission.
> Going off-topic a little, I reckon the parts needed for a really useful, compelling and *interesting* Web of data include:
>
> * improved linkage between data sets, to facilitate easier discovery
> * refactoring of traditional data-based activities to use (just-in-time) linked data
> * new user interface paradigms to work with linked data
> * a lot more imagination!
+1 ;)
Cheers,
Bob
[1]
http://dret.typepad.com/dretblog/2009/08/distribution-and-the-semantic-web.html
[2] http://esw.w3.org/PushBackDataToLegacySources
[3]
http://infoserviceonto.smiy.org/2010/11/25/on-resources-information-resources-and-documents/
Bob Ferris wrote:
> Am 31.01.2011 14:48, schrieb Nathan:
>> make them fully interoperable in a RESTful way. As for version control
>> I was thinking more along the lines of git/hg/svn over http, however
>> rfc5829 is a good example.
>
> Re. version descriptions and usage, I can also imagine the utilization
> of Semantic Web ontologies (cf. the discussion on semanticoverflow.com
> [1]), rather than a central spec with a restricted set of defined link
> relation types for version descriptions.
>
> Generally, please correct me if I'm wrong here, but I don't

just to clarify, I was referring to version control, diff, patch, partial representations, resource syncing and updating etc - not simply saying "this is a newer version of that".

> really understand all the hassle with the link relation types.

it's getting the correct balance between standardization via registration and extensibility - some media types prefer to have constrained, well defined relations, as each relation has specific meaning to a hypermedia client which recognizes that type. It's easier to tell a client to understand the token "edit" rather than 43 different URIs and all the equivalent types which may have the same meaning (random numbers, hopefully you follow).

Best,
Nathan
Am 31.01.2011 16:50, schrieb Nathan:
> it's getting the correct balance between standardization via
> registration and extensibility - some media types prefer to have
> constrained, well defined relations, as each relation has specific
> meaning to a hypermedia client which recognizes that type. It's easier
> to tell a client to understand the token "edit" rather than 43
> different URIs and all the equivalent types which may have the same
> meaning (random numbers, hopefully you follow).

I don't think that extensibility via URI-referenced descriptions contradicts media type or link relation type specifications that are provided via a standard registry. They rather complement one another, don't they? Just consider the possibility of machine-processable IETF RFC media type / link relation type specifications. I don't think that such developments should happen in the near future. However, we should keep our minds open for such extension options, shouldn't we?

Cheers,
Bob
Bob Ferris wrote:
> Am 31.01.2011 16:50, schrieb Nathan:
>> it's getting the correct balance between standardization via
>> registration and extensibility - some media types prefer to have
>> constrained, well defined relations, as each relation has specific
>> meaning to a hypermedia client which recognizes that type. It's easier
>> to tell a client to understand the token "edit" rather than 43
>> different URIs and all the equivalent types which may have the same
>> meaning (random numbers, hopefully you follow).
>
> I don't think that extensibility via URI-referenced descriptions
> contradicts media type or link relation type specifications that are
> provided via a standard registry. They rather complement one another,
> don't they?

yes, as you know I suggested just that to the TAG today (you replied :p)

> Just consider the possibility of machine-processable IETF RFC media
> type / link relation type specifications.

that's a/the goal!

> I don't think that such developments should happen in the near future.

why not?

> However, we should keep our minds open for such extension options,
> shouldn't we?

definitely.
Am 31.01.2011 17:17, schrieb Nathan: >> I don't think that such developments should happen in the near future. > > why not? s/should/would cf. slow motion standardisation processes. Cheers, Bob
"Danny" wrote:
> There may well also be some more useful infrastructure subsystem bits
> (akin to those described by Fielding in his thesis) enabled by
> RESTful HTTP + semweb tech. For example, an RDF store (server- or
> client-side) is essentially a cache of a little chunk of the Web of
> Data. A SPARQL endpoint offers an efficient mode of access to that
> cache. Going one step further, a generic intelligent linked data
> proxy could be composed this way.

There's nothing RESTful about SPARQL endpoints, see:

http://dret.net/netdret/docs/wilde-wewst2009-restful-sparql.pdf

-Eric
Am 31.01.2011 23:49, schrieb Eric J. Bowman:
> "Danny" wrote:
>> There may well also be some more useful infrastructure subsystem bits
>> (akin to those described by Fielding in his thesis) enabled by
>> RESTful HTTP + semweb tech. For example, an RDF store (server- or
>> client-side) is essentially a cache of a little chunk of the Web of
>> Data. A SPARQL endpoint offers an efficient mode of access to that
>> cache. Going one step further, a generic intelligent linked data
>> proxy could be composed this way.
>
> There's nothing RESTful about SPARQL endpoints, see:
>
> http://dret.net/netdret/docs/wilde-wewst2009-restful-sparql.pdf

Yes, the triple store needs a proxy à la the inbuilt support for a read-only Linked Data deployment in Virtuoso [1] or the Linked Data API [2], which is currently a read-only enclosure for a SPARQL endpoint.

What about a SPARQL endpoint that supports HTTP POST for every method handling? So the SPARQL endpoint resource would be treated as the "parent" resource of all resource that are stored in the triple store. Whether the SPARQL 1.1 Uniform HTTP Protocol [3] is fully REST compatible is arguable. I would say no.

Cheers,
Bob

[1] http://virtuoso.openlinksw.com/whats-new/
[2] http://code.google.com/p/linked-data-api/wiki/API_Rationale
[3] http://www.w3.org/TR/sparql11-http-rdf-update/
Bob Ferris wrote: > > What about a SPARQL endpoint that supports HTTP POST for every method > handling? So the SPARQL endpoint resource would be treated as the > "parent" resource of all resource that are stored in the triple > store. > REST has no concept of parent/child resources, particularly inferred from URIs. Such relationships are established explicitly, through linking. The problem with using POST for query submission, is that it violates the Identification of Resources constraint -- query results are resources of interest, so they need URIs. -Eric
Am 01.02.2011 01:12, schrieb Eric J. Bowman: > Bob Ferris wrote: >> >> What about a SPARQL endpoint that supports HTTP POST for every method >> handling? So the SPARQL endpoint resource would be treated as the >> "parent" resource of all resource that are stored in the triple >> store. >> > > REST has no concept of parent/child resources, particularly inferred > from URIs. Such relationships are established explicitly, through > linking. The problem with using POST for query submission, is that it > violates the Identification of Resources constraint -- query results > are resources of interest, so they need URIs. To speak in RFC2616 conform terms with the 'parent/child' relation I mean 'subordinate' (cf. "The posted entity is subordinate to that URI in the same way that a file is subordinate to a directory containing it" [1]). Redirecting to a cachable result should be allowed, or? Btw, what about the "RESTful" interface of 4store SPARQL endpoints[2]? Would a redirect from original resources (e.g. http://example.com/data) to SPARQL endpoint compatible resources (e.g. http://localhost:8000/data/http://example.com/data) help on PUT and DELETE operations to make the interface RESTful? Cheers, Bob [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.5 [2] http://4store.org/trac/wiki/SparqlServer
Bob Ferris wrote:
> Am 01.02.2011 01:12, schrieb Eric J. Bowman:
>> Bob Ferris wrote:
>>> What about a SPARQL endpoint that supports HTTP POST for every method
>>> handling? So the SPARQL endpoint resource would be treated as the
>>> "parent" resource of all resource that are stored in the triple
>>> store.
>>>
>> REST has no concept of parent/child resources, particularly inferred
>> from URIs. Such relationships are established explicitly, through
>> linking. The problem with using POST for query submission, is that it
>> violates the Identification of Resources constraint -- query results
>> are resources of interest, so they need URIs.
>
> To speak in RFC2616 conform terms with the 'parent/child' relation I
HTTP != REST
> mean 'subordinate' (cf. "The posted entity is subordinate to that URI in
> the same way that a file is subordinate to a directory containing it"
that's been cleared up by:
http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-12#section-7.5
> [1]). Redirecting to a cachable result should be allowed, or?
Yes, ironically that's what the "303 See Other" status was originally
intended for ;)
"This method exists primarily to allow the output of a
POST-activated script to redirect the user agent to a selected
resource"
Further, it's worth noting that you can simply 200 OK in reply to a
POST with a Content-Location containing a URI (different to the
effective request uri) and that signifies that the representation
included in the response is a representation of some other resource,
identified by the URI in the Content-Location.
Best,
Nathan
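A rough sketch of the two reply patterns Nathan describes, assuming a server that mints a stable URI per distinct query so repeat queries resolve to the same resource. The URI scheme and function names are made up for illustration:

```python
# Two ways a server can answer a POSTed query while still giving the
# query result a URI of its own. Names and URI scheme are illustrative.
import hashlib

def result_uri(query: str) -> str:
    # A stable URI per distinct query, so repeat submissions map to
    # the same result resource (and can be cached under that URI).
    return "/query/" + hashlib.sha1(query.encode()).hexdigest()[:16]

def reply_with_303(query: str):
    # "303 See Other": redirect the user agent to the result resource,
    # which it then dereferences with GET.
    return 303, {"Location": result_uri(query)}, b""

def reply_with_content_location(query: str, result: bytes):
    # "200 OK" + Content-Location: the enclosed representation belongs
    # to the resource named in Content-Location, not the request URI.
    return 200, {"Content-Location": result_uri(query)}, result
```

Either way, the result ends up identified by its own URI, which is the property Eric's Identification of Resources objection is about.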
Why is that a problem for query submissions using POST, in general? What's the problem with posted queries that get a response like this:

201 Created
...
Content-Location /query/x1lka2xjl2a8ksx
...

<query result>

Any subsequent repeat queries could be served a 303.

Afaik, the main reason SPARQL queries are POST and not GET is because of the practical limitations on the length of a URL.

Cheers,
Mike

On Tue, Feb 1, 2011 at 12:12 AM, Eric J. Bowman <eric@...> wrote:
> Bob Ferris wrote:
>>
>> What about a SPARQL endpoint that supports HTTP POST for every method
>> handling? So the SPARQL endpoint resource would be treated as the
>> "parent" resource of all resource that are stored in the triple
>> store.
>>
>
> REST has no concept of parent/child resources, particularly inferred
> from URIs. Such relationships are established explicitly, through
> linking. The problem with using POST for query submission, is that it
> violates the Identification of Resources constraint -- query results
> are resources of interest, so they need URIs.
>
> -Eric
Hi all, This doesn't really address the original question on this thread, but I thought I'd add a couple of reflections as someone who has come from a background in developing semweb standards [1,2] and applications [3] but is now working much more in a restful repositories/web services/applications space [4]. I had a great time going to the dublin core conferences from 2005-2009 and getting to know the DC community. They're into managing metadata, and mostly tend to view metadata in terms of metadata records, being discrete packages of information. At the time I waved my RDF stick around a lot (e.g. [5]) encouraging everyone to stop thinking in terms of records, and start thinking in terms of the data as a graph. I.e., melt down the conceptual boundary of the metadata record, and let the metadata inside a record go free and join up with all of the other metadata it links to (which you can do relatively easily if your metadata is expressed as RDF and published as linked data). You can then use SPARQL to ask and answer questions spanning data merged from any number of records from any number of sources. Eventually, all the records in the world will have melted away to form one big happy boundary-less open-ended web (meta)data graph, which can answer any question you can think of. Job done. Then in 2010 I moved to a malaria research group who have made fantastic headway in establishing research data-sharing networks (these are networks of people, not computers) like MalariaGEN [6]. The point of sharing research data is, in our case, being able to do new kinds of science which can only happen when lots of data from many studies are pooled. I've since been involved in building web-based data repository systems that enable the capture, storage, curation and sharing of data and metadata relating to various kinds of malaria research. 
These are not completely open data-sharing networks - defining a data-sharing agreement between researchers is a key factor, which limits who can see and do what with what metadata, and protects the interests of the researchers and study participants. Because these repository systems tend to be at the hub of a set of other systems for analysing, curating, visualising, ..., the data, some of which are web-based, some of which are not, I've tried to build RESTful web services in from the beginning, so we have the best possible chance of being able to integrate these systems, and so the core repository services stay as lean and simple as possible. To cut an already long story short, I've found (ironically) that it's essential to draw boundaries around data and metadata, i.e., to think more in terms of records (or representations). The reasons are quite simple. Where data is being entered, updated, deleted, actively managed, ..., you need to be able to express access control policies (because not all data are public), and you need to be able to say who has responsibility for doing what on what data (because people need to be able to collaborate and coordinate their work). I've found it effective in these cases to design resource interfaces where resource boundaries are chosen to support these concerns, and not necessarily around a logical data model (e.g., you don't end up with one resource for each logical entity, rather you end up with resources whose representations span some part of the data graph, with additional link relations/hypermedia control to drive client applications). AtomPub then gives a pretty good foundation for designing representations and implementing a restful service, and XForms a pretty solid vehicle for implementing client applications. 
Now, if you told me to break down those boundaries, merge all the data, and let someone query it (say, with SPARQL), right now I'd run a mile, because I can't see how you could do that and still respect access controls. Maybe someone has done some clever work building access control policies into a SPARQL query engine, that would be interesting. Anyway, I just thought it was interesting that sometimes you want to melt down all the boundaries and see your data as one big open-ended graph, and sometimes you want to see your data as (er, information?) resources with concrete representations and well defined boundaries. I guess this is probably stating the obvious for most people on this list, but hopefully an interesting aside. Cheers, Alistair [1] http://www.w3.org/TR/swbp-vocab-pub/ [2] http://www.w3.org/TR/skos-reference/ [3] http://alimanfoo.wordpress.com/2011/01/17/using-sparql-for-biological-data-integration-reflections-on-openflydata-org-and-the-flyweb-project/ [4] http://alimanfoo.wordpress.com/2010/01/08/cggh-and-data-sharing-networks-background/ [5] http://alimanfoo.wordpress.com/2008/05/13/presentation-at-the-library-of-congress-simple-knowledge-organization-system-skos-in-the-context-of-semantic-web-deployment/ [6] http://www.malariagen.net/ On Mon, Jan 31, 2011 at 11:08:07AM +0000, Nathan wrote: > Hi Bob, > > Generally they are orthogonal, in the same way that a logic statement > is orthogonal to a web server. > > REST is an architectural style and set of constraints one considers > and applies when webizing a technology and the components related to > the use of that technology. 
> > Linked Data is webized information, each datum of information is a > logical statement expressed as a typed link, a typed link is a > statement which expresses the relation between two things, each thing > can be a literal (a string, a number, denoting itself), an unnamed > thing that merely exists (a blank node), or a globally named thing (a > named node, a logical constant). Each statement can be considered true > or false, and when you combine all the statements which you consider > true about a thing, that set of statements forms a description of the > thing (which can also be considered your current belief state, what > you believe to be true about the thing(s) you're considering). As > Linked Data is webized, it uses URIs as names for things and > relations, and encourages the exposing of statements about those > things or relations when you lookup (dereference) the corresponding URIs. > > Thus one would apply the style and constraints of REST not to Linked > Data, but to the components, the publishing of Linked Data, the > retrieval of Linked Data over a network, and to the construction of > Agents which interact with the web of linked data. > > Conversely, since REST is a set of constraints optimized for the "good > old fashioned web of documents", one can review the needs and usage of > Linked Data to come up with additional constraints, to identify > mismatches, or to create an entirely new architectural style and set > of constraints which one would then apply to the web specifications > (URI, HTTP etc) in order to optimize it. > > Such examples may include: > > - the tight coupling of URIs to protocols in the common case, for > example one may need PUT, POST, DELETE to operate through HTTP+TLS > whilst GET is through basic HTTP. > > - REST and web optimization for large-grain hypermedia transfer > whilst linked data deployment typically needs fine-grained data > transfer / huge-grain data-set transfer. 
> > - Analysis and inclusion of composite application state and single > client multi-server interactions. > > - The missing reverse-links. > > - Pipelined requests and async message transfer (allowing for > multiple requests to be sent at once and sent back in an optimized > fashion). > > Essentially, REST is over a decade old, and whilst very good, it > comprises constraints and a style which was optimized for common use > cases which are no longer as common, Linked Data, Client-side "ajax" > applications, cloud storage and cloud computing, version control over > HTTP and many other now common uses are simply uncatered for, and the > REST style is not optimized for these cases. > > The only other relation between REST and Linked Data, is that they > both use HTTP URIs to identify resources, the web specification and > REST definitions of the term "resource" is so abstractly defined that > this often leads to confusion and inconsistent naming of "resources", > hence httpRange-14 and many other issues. > > Best, > > Nathan > > > ------------------------------------ > > Yahoo! Groups Links > > > -- Alistair Miles Head of Epidemiological Informatics Centre for Genomics and Global Health <http://cggh.org> The Wellcome Trust Centre for Human Genetics Roosevelt Drive Oxford OX3 7BN United Kingdom Web: http://purl.org/net/aliman Email: alimanfoo@... Tel: +44 (0)1865 287669
Mike Kelly wrote: > > Why is that a problem for query submissions using POST, in general? > Because you aren't submitting anything, you're dereferencing something. It is always a REST mismatch to dereference with any method other than GET. Queries don't create anything; think of how many people execute the same searches on Google -- are they each creating a new search result? Or is that result a resource they're all dereferencing via the same URI? The method must match the semantics of the operation. -Eric
Mike Kelly wrote: > > What's the problem with posted queries that get a response like this: > > 201 Created > ... > Content-Location /query/x1lka2xjl2a8ksx > .... > <query result> > > Any subsequent repeat queries could be served a 303 > Sounds idempotent to me, so why the non-idempotent method? Nothing was added to the graph being queried, so what exactly was "created"? > > Afaik, the main reason SPARQL queries are POST and not GET is because > of the practical limitations on the length of a URL. > Did you read the PDF I linked to? That isn't one of the reasons discussed, and I don't buy it in general -- of all the problems identified with the media type registry, problems with the submission form due to the length of URIs it generates isn't on the list. Also, there's no reason for SPARQL to generate such long URIs -- a RESTful interface wouldn't give URIs any longer than those commonly handled, without any problems, by Google. -Eric
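To illustrate the GET-based alternative Eric has in mind: a query can simply be URL-encoded into the request URI, so that every query result is a resource with its own dereferenceable (and cacheable) URI. The endpoint address below is hypothetical:

```python
# Encode a query into a GET request URI. The endpoint address is made
# up; the point is that the query string names the result resource.
from urllib.parse import urlencode, parse_qs, urlparse

ENDPOINT = "http://example.org/sparql"
query = "SELECT ?s WHERE { ?s ?p ?o } LIMIT 10"

uri = ENDPOINT + "?" + urlencode({"query": query})

# Any client or cache dereferencing this URI with GET asks for the
# same resource; the encoding round-trips losslessly.
roundtrip = parse_qs(urlparse(uri).query)["query"][0]
```

URIs built this way stay well within the lengths that are handled in practice (e.g. by search engines), which is Eric's point about the length objection.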
Am 01.02.2011 21:55, schrieb Eric J. Bowman: > Mike Kelly wrote: >> >> What's the problem with posted queries that get a response like this: >> >> 201 Created >> ... >> Content-Location /query/x1lka2xjl2a8ksx >> .... >> <query result> >> >> Any subsequent repeat queries could be served a 303 >> > > Sounds idempotent to me, so why the non-idempotent method? Nothing was > added to the graph being queried, so what exactly was "created"? SPARQL provides a CONSTRUCT method, where one is able to create new RDF triples. The created triples can then be serialized in a representation that would be delivered via the URI of the 303 redirection response. Another case would be a status report of the query execution. Cheers, Bob
Bob Ferris wrote: > > SPARQL provides a CONSTRUCT method, where one is able to create new > RDF triples. The created triples can then be serialized in a > representation that would be delivered via the URI of the 303 > redirection response. Another case would be a status report of the > query execution. > Sure, but none of this is implemented RESTfully, which is not to say that a REST API for SPARQL isn't possible. What one wouldn't do is mix idempotent and non-idempotent operations into the same API, which is the current state of affairs. -Eric
> > What one wouldn't do is mix idempotent and non-idempotent operations > into the same API... > I meant hypertext control, not API. -Eric
Hi Ed,
On Mon, Jan 31, 2011 at 05:30:12AM -0500, Ed Summers wrote:
> The key problem for me is that when semweb folks talk about
> "documents" I can't help but hear "representations". For example, when
> I'm publishing something on the web with a web framework like
> RubyOnRails or whatever, I have a model of something, say a User, and
> a URL route like /user/:id, and some controller code that goes and
> fetches that model instance and delivers up some HTML for the user
> using a template. Every time I do something like this it just isn't
> feasible for me to think, hmmm is this URL identifying a real world
> object? Is the database record for the User an Information Resource?
> Or is my database record about a Real World Thing or a Document?
> Should I really have two URL paths here, one for the Document about
> the User and one for the User themself? Should I use a # in that URI,
> or use the 303 redirect to indicate it is the Real World Object?
I know exactly what you mean! In the applications and web services I've
been involved in recently, I haven't ever needed to stop and ask the $64m
metaphysical question, "is this an information resource or not?"
All you end up caring about is, can my user or client get the data they need,
and can they follow their nose from there to do whatever they need to do next
(get more data/modify some data/process some data/...)?
One reason why I've liked using content-based extensions to Atom recently
is that it gives you a fairly intuitive convention for separating metadata
from data. E.g., when you GET /person/123 you receive...
<entry xmlns="http://www.w3.org/2005/Atom">
<id>http://example.org/fooapp/service/person/123</id>
<title>Jane Bloggs</title>
<published>2010-10-14T18:29:48+01:00</published>
<updated>2010-10-15T19:39:01+01:00</updated>
<link rel="edit" href="http://example.org/fooapp/service/person/123"/>
<author>
<email>jane.bloggs@...</email>
</author>
<content type="application/x.exampleapp+xml">
<user xmlns="">
<givenname>Jane</givenname>
<familyname>Bloggs</familyname>
<!-- etc. -->
</user>
</content>
</entry>
...and it's fairly intuitive that things like published, updated and author
in the "head" of the representation are *about* the data, and things like
givenname, familyname, etc., *are* the data (in the same way that it's
intuitive that stuff in the <head> of an HTML representation is *about*
the content, and stuff in the <body> element *is* the content to be rendered).
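This head/body split can be sketched in a few lines. The following is illustrative only: the XML is an abbreviated copy of the example above, and the function name is made up:

```python
# Sketch: split an Atom entry into "metadata" (the Atom head, *about* the
# data) and "data" (the payload inside <content>). Illustrative names only.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

ENTRY = """<entry xmlns="http://www.w3.org/2005/Atom">
  <id>http://example.org/fooapp/service/person/123</id>
  <title>Jane Bloggs</title>
  <updated>2010-10-15T19:39:01+01:00</updated>
  <content type="application/x.exampleapp+xml">
    <user xmlns="">
      <givenname>Jane</givenname>
      <familyname>Bloggs</familyname>
    </user>
  </content>
</entry>"""

def split_entry(xml_text):
    root = ET.fromstring(xml_text)
    content = root.find(ATOM + "content")
    # Everything except <content> is *about* the data...
    metadata = {child.tag.replace(ATOM, ""): (child.text or "").strip()
                for child in root if child is not content}
    # ...while the children of <content> *are* the data.
    payload = {child.tag: (child.text or "").strip()
               for elem in content for child in elem}
    return metadata, payload
```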
Now, say your clients want to be able to navigate to relatives of Jane. You
might do something like...
<entry xmlns="http://www.w3.org/2005/Atom">
<id>http://example.org/fooapp/service/person/123</id>
<title>Jane Bloggs</title>
...
<link rel="http://example.org/fooapp/rel/mother" href="http://example.org/fooapp/service/person/456"/>
<link rel="http://example.org/fooapp/rel/father" href="http://example.org/fooapp/service/person/789"/>
...
</entry>
...or you might do...
<entry xmlns="http://www.w3.org/2005/Atom">
<id>http://example.org/fooapp/service/person/123</id>
<title>Jane Bloggs</title>
...
<content type="application/x.exampleapp+xml">
<user xmlns="">
...
<mother href="http://example.org/fooapp/service/person/456"/>
<father href="http://example.org/fooapp/service/person/789"/>
...
</user>
</content>
</entry>
...and it's an interesting side question which is better, but the point is
that, from a hypermedia constraint point of view, I don't think it really
matters, because either way clients will have what they need to be
able to make appropriate state transitions, and it certainly doesn't matter
whether http://example.org/fooapp/service/person/456 identifies an information
resource or not. In fact, if you then told me I had to do 303s from that URL
to some other, as a web service engineer I'd get frustrated, because that
would start introducing extra round trips which would slow everything down.
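Either variant leaves the client able to discover the relation. A minimal sketch, with the URIs and element names assumed from the examples above:

```python
# Sketch: discover the "mother" relation from either representation variant
# above: a typed atom:link in the entry head, or an element with @href in
# the content payload. URIs and names are taken from the examples.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
REL_MOTHER = "http://example.org/fooapp/rel/mother"

VARIANT_LINK = """<entry xmlns="http://www.w3.org/2005/Atom">
  <link rel="http://example.org/fooapp/rel/mother"
        href="http://example.org/fooapp/service/person/456"/>
</entry>"""

VARIANT_CONTENT = """<entry xmlns="http://www.w3.org/2005/Atom">
  <content type="application/x.exampleapp+xml">
    <user xmlns="">
      <mother href="http://example.org/fooapp/service/person/456"/>
    </user>
  </content>
</entry>"""

def find_mother(xml_text):
    root = ET.fromstring(xml_text)
    # Variant 1: a typed atom:link in the "head" of the entry.
    for link in root.findall(ATOM + "link"):
        if link.get("rel") == REL_MOTHER:
            return link.get("href")
    # Variant 2: a domain element inside the content payload.
    el = root.find(ATOM + "content/user/mother")
    return el.get("href") if el is not None else None
```

Either way the client ends up with the same URI to dereference next, which is the hypermedia-constraint point being made.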
Don't get me wrong, I totally sympathise with people who say "don't conflate
yourself with your homepage", but I guess I'm saying that I can see how a
linked data engineer and a restful web service engineer would both look at
each other's systems and go, "you did what?"
Cheers,
Alistair
>
> YAGNI
>
> I've found it easier to look at the World Wide Web through REST
> colored glasses where my URLs identify Resources, and my Server
> delivers up Representations of them. And yet, RDF remains a nice data
> model (with a few decent Representations) for describing web graphs,
> and it has rdf:type for explicitly documenting the nature of the
> Resource.
>
> //Ed
>
> [1] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2_1_1
> [2] http://www.w3.org/TR/cooluris/#distinguishing
> [3] http://www.w3.org/DesignIssues/HTTP-URI2
> [4] http://www.w3.org/TR/cooluris/#oldweb
> [5] http://www.w3.org/TR/webarch/#id-resources
> [6] http://www.w3.org/TR/cooluris/#semweb
--
Alistair Miles
Head of Epidemiological Informatics
Centre for Genomics and Global Health <http://cggh.org>
The Wellcome Trust Centre for Human Genetics
Roosevelt Drive
Oxford
OX3 7BN
United Kingdom
Web: http://purl.org/net/aliman
Email: alimanfoo@...
Tel: +44 (0)1865 287669
Hi Alistair,

Am 02.02.2011 09:32, schrieb Alistair Miles:
> Don't get me wrong, I totally sympathise with people who say "don't conflate
> yourself with your homepage", but I guess I'm saying that I can see how a
> linked data engineer and a restful web service engineer would both look at
> each other's systems and go, "you did what?"

That's exactly what I have had in mind all along. On the one hand, services that don't use the common, layered Semantic Web technology stack get it to work without the extra round trips. On the other hand, they are also able to express nearly the same content. I can imagine modelling the example above with Semantic Web ontologies, so that everything is based on the RDF model (I can write out that example, if you like). I also wouldn't have a problem with serving a representation of a description of a resource, because I treat an information resource as an abstract description (cf. [1]). The representation delivered to the client then contains this information resource in realized form, i.e. a semantic graph in the RDF model, serialized as RDFa. 
Furthermore, I can also clearly separate data that describes the requested resource (via the resource URI) from data that describes the concrete representation (via the RDFa document URI) and, if I want, data that describes the concrete description (via the RDF model realization URI). The only thing we should perhaps accept, as we might already do, is a kind of loose coupling between the resource one requests and the response (representation) one gets delivered. I feel more comfortable with "200 OK ... GET: an entity corresponding to the requested resource is sent in the response" (as defined in [2]) than with "200 OK ... GET: a representation of the target resource is sent in the response" (as defined in [3]), where the 'target resource' must be unchangeably identified by the given resource URI (see [4]; for other methods the Content-Location header can be used and loose coupling is accepted; cf. also the related discussion of that topic [5]). So I can only repeat myself in saying that a client more or less wants only something processable when requesting a resource URI. In addition, I have the feeling that Roy T. Fielding neglected the 'description part' a bit when formulating the REST principles. However, the 'description part' is omnipresent. For example, I can map a concrete natural-language text description to a concrete hypertext representation, e.g. HTML; the hypertext representation then contains the natural-language description. Likewise, I can map a concrete natural-language description to a concrete formal semantic-graph description. Both concrete descriptions then share an abstract description (which I would call an 'information resource', cf. [1]) that can have multiple concrete realizations. All in all, there's not much difference between the goals, is there? 
Cheers, Bob [1] http://infoserviceonto.smiy.org/2010/11/25/on-resources-information-resources-and-documents/ [2] http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1 [3] http://greenbytes.de/tech/webdav/draft-ietf-httpbis-p2-semantics-12.html#status.200 [4] http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-12#section-6 [5] http://lists.w3.org/Archives/Public/www-tag/2011Jan/0002.html
Am 01.02.2011 23:01, schrieb Eric J. Bowman: > Bob Ferris wrote: >> >> SPARQL provides a CONSTRUCT method, where one is able to create new >> RDF triples. The created triples can then be serialized in a >> representation that would be delivered via the URI of the 303 >> redirection response. Another case would be a status report of the >> query execution. >> > > Sure, but none of this is implemented RESTfully, which is not to say > that a REST API for SPARQL isn't possible. That's what Mike (and maybe some others here) and I tried to describe, isn't it? So I don't understand why the whole system couldn't be RESTful when I use HTTP POST for communicating with a SPARQL endpoint, which would delegate me to (GET-able) resources, whether they are serializations of SPARQL CONSTRUCTs or status reports of query executions (cf. [1]). Cheers, Bob [1] http://roy.gbiv.com/untangled/2009/it-is-okay-to-use-post
Bob Ferris wrote: > > [1] http://roy.gbiv.com/untangled/2009/it-is-okay-to-use-post > Please don't confuse a post about how POST isn't unRESTful with saying that it's ever even remotely OK to use POST as a retrieval method. This is REST 101, and once again I've stumbled across something I never imagined could possibly be controversial when I posted it. -Eric
Hi Alistair, thanks a lot for the practice report. Am 01.02.2011 17:11, schrieb Alistair Miles: > To cut an already long story short, I've found (ironically) that it's essential > to draw boundaries around data and metadata, i.e., to think more in terms > of records (or representations). The reasons are quite simple. Where data is > being entered, updated, deleted, actively managed, ..., you need to be able > to express access control policies (because not all data are public), and > you need to be able to say who has responsibility for doing what on what data > (because people need to be able to collaborate and coordinate their work). Web Access Control[1] is still an issue that has to be tackled. However, there is some progress towards establishing it at the data level, e.g. WebID[2] or the more coarse-grained CORS[3]. > > Now, if you told me to break down those boundaries, merge all the data, and > let someone query it (say, with SPARQL), right now I'd run a mile, because > I can't see how you could do that and still respect access controls. Maybe > someone has done some clever work building access control policies into a > SPARQL query engine, that would be interesting. I guess nobody expects you to "merge all the data", and no ordinary human agent is interested in having all the data; e.g., I only want additional stuff that is related. SPARQL 1.1 introduces the SERVICE operator to enable federated queries across SPARQL services (which can also be described, see [4]). Of course, we need to get such systems to run efficiently. I'm also able to combine Web Access Control with federated queries; see, e.g., an example from Virtuoso [5] for protected SPARQL endpoint access. Again, all under the overall and universal pre-condition of acceptable performance. I guess the development of hardware that can process such tasks efficiently is promising, and rather faster than in the past, isn't it? 
> Anyway, I just thought it was interesting that sometimes you want to melt > down all the boundaries and see your data as one big open-ended graph, and > sometimes you want to see your data as (er, information?) resources with > concrete representations and well defined boundaries. Good statement! Although what I see and what I can get are two different things, aren't they? I guess a user would be more excited about "open doors" than "closed rooms". The starting point might still be "give me an app for that"; however, this app should be one with (literally) "open doors". Whether the user is delegated seamlessly (!) to another app (e.g. switching from Facebook to YouTube), or the task, e.g. an exploration, continues inside the same app (e.g. playing back the YouTube video inside Facebook), shouldn't really matter (somehow). What counts is the overall user experience, right? I tend to say that today such "app delegation" isn't always that seamless. Cheers, Bob [1] http://esw.w3.org/WebAccessControl [2] http://www.w3.org/2005/Incubator/webid/charter [3] http://www.w3.org/TR/sparql11-query/ [4] http://www.w3.org/TR/sparql11-service-description/ [5] http://ods.openlinksw.com/wiki/main/Main/VirtSPARQLSSL
[oops, I meant to send this to the list, that yahoo interface is confusing!]
> "Danny" wrote:
>>
>> While YMMV, all the SPARQL endpoints I've played with have been
>> entirely consistent with REST.
>>
>
> Really? I've never seen even one.
>
>>
>> For example, if I go to:
>>
>> http://api.talis.com/stores/bbc-backstage/services/sparql
>>
>> and enter the query :
>>
>> select ?s where { ?s ?p ?o }
>> limit 10
>>
>> then click the "Search" button, I get a bunch of results in SPARQL
>> results format.
>>
>
> Granted, that's what happens, but this is not REST.
When the URI associated with a SPARQL query is dereferenced with HTTP,
a representation of the identified resource is transferred from server
to client agent. What's not RESTful about that?
>> Endpoints can be certainly conceptualised as a kind of RPC target,
>> that's what the setup provided at most endpoint URIs looks like: a
>> form with a box for entering the query and a button to run it on a
>> remote system. But this is misleading. The form box is really just an
>> aid to composing URIs.
>>
>
> No, it isn't. When hypertext is the engine of application state, the
> form provides instructions for how to build URIs. Foreknowledge of a
> query language is required to make use of SPARQL endpoints, this is not
> hypertext driving application state.
In this specific case it's just a difference in the user interface.
Here's a URI:
http://api.talis.com/stores/bbc-backstage/services/sparql?query=select+distinct+%3Fs+%3Fp+%3Fo+where+{+%0D%0A%3Fs+%3Fp+%3Fo%0D%0A}+%0D%0Alimit+500%0D%0A
Your agent doesn't need any knowledge of SPARQL to use that any more
than it needs to know the BBC's URI construction system to use:
http://www.bbc.co.uk/radio4/features/classic-chandler/
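For completeness, the long query URI above is nothing more than standard form urlencoding of a ?query= parameter. A small sketch (the endpoint URI is taken from the post; everything else is illustrative):

```python
# Sketch: the query URI is plain form urlencoding; nothing SPARQL-specific
# happens at the HTTP level, and decoding recovers the original text.
from urllib.parse import urlencode, parse_qs, urlsplit

ENDPOINT = "http://api.talis.com/stores/bbc-backstage/services/sparql"
query = "select distinct ?s ?p ?o where {\n?s ?p ?o\n}\nlimit 500"

uri = ENDPOINT + "?" + urlencode({"query": query})

# The agent can dereference `uri` with a plain GET; decoding round-trips:
decoded = parse_qs(urlsplit(uri).query)["query"][0]
```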
But quibbling about this isn't very interesting. One interesting
aspect is that the hypertext part of linked data is qualitatively
different than that which we are familiar with in HTML.
While :
<a href="http://danny.ayers.name/">My Home Page</a>
and
#me foaf:homepage <http://danny.ayers.name/> .
say essentially the same thing, the latter (shorthand for
<http://danny.ayers.name/index.rdf#me>
<http://xmlns.com/foaf/0.1/homepage> <http://danny.ayers.name> .)
offers two extra opportunities for manipulating application state.
SPARQL resource format is also hyperlink-rich (in fact CONSTRUCT
queries have RDF results), and I think we've hardly started scratching
the surface of the potential.
Cheers,
Danny.
--
http://danny.ayers.name
Danny Ayers wrote:
>
> For example, if I go to:
>
> http://api.talis.com/stores/bbc-backstage/services/sparql
>
> and enter the query :
>
> select ?s where { ?s ?p ?o }
> limit 10
>
> then click the "Search" button, I get a bunch of results in SPARQL
> results format.
>
When I go to that page, I see not even a clue about the nature of the
interface, other than that I'll need the out-of-band knowledge of some
query language to use it. Where are the instructions for how to
transition to the next application state, given *any* goal? This is
indeed an RPC endpoint, not a hypertext API.
The corollary is to run your weblog by providing a textbox which takes a
SQL query, instead of encapsulating SQL within a hypertext interface
(i.e. running WordPress). This is precisely what Roy is talking about,
in his final bullet point, here:
http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
Also, from the comments to that post:
"When I say hypertext, I mean the simultaneous presentation of
information and controls such that the information becomes the
affordance through which the user (or automaton) obtains choices and
selects actions... Machines can follow links when they understand the
data format and relationship types... It is the same basic issue as
with human communication: we will always need a common vocabulary to
make sense of it. Exposing that vocabulary in the representations makes
it easy to learn and be adopted by others."
The data format is HTML, which says nothing about SPARQL, and there is
no link relation. So the vocabulary isn't exposed in hypertext at all.
The interaction is not based on the information presented in the
hypertext, therefore it is being driven by out-of-band information.
Google's search API (though not entirely RESTful) accepts keywords,
with a syntax defined here:
http://www.google.com/advanced_search
It should be obvious that there's a very fundamental difference between
Google's homepage, and the advanced_search page -- the former relies on
out-of-band information to add '&num=10', the latter makes it a RESTful
hypertext control; SPARQL endpoints don't even encode number of results
as a name/value pair, instead making it part of one opaque search
phrase (limit+10 tacked on at the end) and needlessly complicating the
issue of input validation on both the client and the server sides.
Taking Google's advanced_search interface a little further, RDFa could
be used to describe the "results per page" control, and type it as an
integer. A more advanced forms language could express the range that
the server will accept. This allows client-side input validation.
Google allows any value; what would make more sense would be to take
their form control literally -- limiting results-per-page to a set
number of values improves cacheability.
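The contrast can be sketched as follows. ALLOWED_NUM is a hypothetical set of server-declared options, standing in for what a form control (or an RDFa-described control) would advertise:

```python
# Sketch: treat "results per page" as a declared form control with an
# enumerated range, validated client-side before any request is sent,
# rather than an opaque "limit 10" buried inside one query string.
ALLOWED_NUM = (10, 20, 30, 50, 100)  # hypothetical server-declared options

def build_search_params(keywords, num=10):
    # Client-side validation against the advertised control; a fixed set
    # of values also improves cacheability of the resulting URIs.
    if num not in ALLOWED_NUM:
        raise ValueError("num=%r not offered by the form control" % (num,))
    return {"q": " ".join(keywords), "num": str(num)}
```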
>
> then click the "Search" button, I get a bunch of results in SPARQL
> results format.
>
No, it returns a representation as application/xml, which means I need
to sniff in order to determine that it's a SPARQL result. To meet the
self-descriptive messaging constraint of REST, the results would
properly be sent as application/sparql-results+xml, but making that
change alone won't make the API RESTful. As the results from an actual
hypertext API, it makes a fine media type, although I'd personally tack
on an XML PI to call some XSLT to transform it into HTML, assuming my
hypertext interface was also HTML.
>
> and enter the query :
>
> select ?s where { ?s ?p ?o }
> limit 10
>
How do I know what to enter, when instead of entering keywords for a
search, I have to enter a query formatted in a manner not afforded
through hypertext controls? A REST API would have one hypertext
control for select=, providing me with the options the server has
implemented. Instead of making users guess at what namespaces are
supported, a REST API would provide that list as a hypertext control.
The server tells the user-agent the parameters of the API, such that
the user-agent only needs to fill in the search terms (keywords, not
instructions, particularly not instructions which amount to tunneling a
custom method like CONSTRUCT over POST).
*That's* what I mean by providing instructions for how to execute a
state transition, not urlencoding an opaque query language and letting
the server sort it out. The goal of a REST API is not to encode query
languages as URIs this way, it's to abstract away such implementation
details behind a generic interface. No (reasonable) CMS based on SQL
presents SQL queries as URIs or in hypertext, that implementation
detail is abstracted away behind the interface, which is exactly how
SPARQL can be made RESTful (as opposed to providing non-hypertext-API
endpoints). The server converts the request into a SPARQL query for a
back-end system in REST, as opposed to exposing a SPARQL endpoint -- no
different from how SQL is handled in REST APIs.
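A sketch of that abstraction, with made-up parameter names and a made-up triple pattern; the point is only that the SPARQL is assembled server-side, behind the generic interface:

```python
# Sketch: the client fills in simple name/value parameters exposed by a
# hypertext form; the *server* assembles the SPARQL, so no query syntax
# leaks into URIs or representations. Pattern and names are illustrative.
def params_to_sparql(params):
    keywords = params.get("keywords", "")
    limit = int(params.get("limit", "10"))
    # Back-end implementation detail, hidden behind the generic interface:
    return (
        "SELECT ?s WHERE {\n"
        "  ?s ?p ?o .\n"
        '  FILTER regex(str(?o), "%s", "i")\n'
        "} LIMIT %d" % (keywords, limit)
    )
```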
There is one way I can think of to use SPARQL queries in a REST app,
which is to POST or PUT a representation as application/sparql-query to
some URI. Dereferencing that URI executes the query as a stored
procedure, returning application/sparql-results+xml by default, but
can also return the original query with Accept: application/sparql-
query. I've used the eXist DB this way, creating cells containing
XQuery, which is a nice way to create a Web app from an XML store.
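That stored-procedure pattern might look like this in outline; the class, the status codes chosen, and the execute() stub are all illustrative, not a real implementation:

```python
# Sketch: PUT a representation as application/sparql-query to a URI, then
# GET that URI to execute it like a stored procedure. The execute()
# callable stands in for a real query engine.
class StoredQueryStore:
    def __init__(self, execute):
        self._queries = {}
        self._execute = execute  # callable: sparql text -> result document

    def put(self, uri, body, content_type):
        if content_type != "application/sparql-query":
            return 415  # Unsupported Media Type
        created = uri not in self._queries
        self._queries[uri] = body
        return 201 if created else 204

    def get(self, uri, accept="application/sparql-results+xml"):
        query = self._queries[uri]
        if accept == "application/sparql-query":
            return accept, query        # the original query text
        return ("application/sparql-results+xml",
                self._execute(query))   # run it as a stored procedure
```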
-Eric
Danny Ayers wrote:
>
> [oops, I meant to send this to the list, that yahoo interface is
> confusing!]
>
Note that you're also quoting me, not Bob... Let's take this to a new
thread, shall we?
-Eric
On 2 February 2011 15:40, Eric J. Bowman <eric@...> wrote: > Danny Ayers wrote: >> >> [oops, I meant to send this to the list, that yahoo interface is >> confusing!] >> > > Note that you're also quoting me, not Bob... Oops again, sorry, it was never like this with the telegraph... > Let's take this to a new > thread, shall we? Although I'd like to hear your opinion on how SPARQL endpoints aren't RESTful, I haven't really got anything to add on that aspect. -- http://danny.ayers.name
Hi Eric,
thanks a lot for clarifying the SPARQL-to-REST relation. So I can
conclude that a SPARQL endpoint/SPARQL query interface, at least à la
Google's advanced_search, can be RESTful. I didn't think that it needs
a separate (query and/or) result media type, since one is able to
serialize such results also into RDF representation formats, e.g. RDFa.
The thing I always had in mind was, of course, a more advanced query
interface than a simple text box (so, sorry that this obviously caused
misinterpretations). Even an interface like Google's advanced_search
isn't quite comfortable, is it? I can rather imagine a kind of faceted
browsing interface for formulating a query, where the end user doesn't
really get in touch with the statements behind it. This depends of
course on the specific application domain, but generally one often
needs contexts like time or place. So selecting an appropriate time
interval on a timeline interface, or selecting a place/area on a world
map interface, might be a better option, mightn't it?
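Such a faceted interface could translate widget selections into SPARQL behind the scenes. A sketch with made-up property names (assuming the ex: and xsd: prefixes are declared elsewhere):

```python
# Sketch: the user picks a time interval and a map bounding box through UI
# widgets; the application turns those selections into SPARQL FILTERs, so
# the end user never touches the query language. All names are made up.
def facets_to_sparql(start, end, bbox):
    min_lat, min_lon, max_lat, max_lon = bbox
    return (
        "SELECT ?event WHERE {\n"
        "  ?event ex:date ?d ; ex:lat ?lat ; ex:lon ?lon .\n"
        '  FILTER (?d >= "%s"^^xsd:date && ?d <= "%s"^^xsd:date)\n'
        "  FILTER (?lat >= %s && ?lat <= %s && ?lon >= %s && ?lon <= %s)\n"
        "}" % (start, end, min_lat, max_lat, min_lon, max_lon)
    )
```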
Cheers,
Bob
Am 02.02.2011 15:40, schrieb Eric J. Bowman:
> Danny Ayers wrote:
>>
>> For example, if I go to:
>>
>> http://api.talis.com/stores/bbc-backstage/services/sparql
>>
>> and enter the query :
>>
>> select ?s where { ?s ?p ?o }
>> limit 10
>>
>> then click the "Search" button, I get a bunch of results in SPARQL
>> results format.
>>
>
> When I go to that page, I see not even a clue about the nature of the
> interface, other than that I'll need the out-of-band knowledge of some
> query language to use it. Where are the instructions for how to
> transition to the next application state, given *any* goal? This is
> indeed an RPC endpoint, not a hypertext API.
>
> The corollary is to run your weblog by providing a textbox which takes a
> SQL query, instead of encapsulating SQL within a hypertext interface
> (i.e. running WordPress). This is precisely what Roy is talking about,
> in his final bullet point, here:
>
> http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
>
> Also, from the comments to that post:
>
> "When I say hypertext, I mean the simultaneous presentation of
> information and controls such that the information becomes the
> affordance through which the user (or automaton) obtains choices and
> selects actions... Machines can follow links when they understand the
> data format and relationship types... It is the same basic issue as
> with human communication: we will always need a common vocabulary to
> make sense of it. Exposing that vocabulary in the representations makes
> it easy to learn and be adopted by others."
>
> The data format is HTML, which says nothing about SPARQL, and there is
> no link relation. So the vocabulary isn't exposed in hypertext at all.
> The interaction is not based on the information presented in the
> hypertext, therefore it is being driven by out-of-band information.
> Google's search API (though not entirely RESTful) accepts keywords,
> with a syntax defined here:
>
> http://www.google.com/advanced_search
>
> It should be obvious that there's a very fundamental difference between
> Google's homepage, and the advanced_search page -- the former relies on
> out-of-band information to add '&num=10', the latter makes it a RESTful
> hypertext control; SPARQL endpoints don't even encode number of results
> as a name/value pair, instead making it part of one opaque search
> phrase (limit+10 tacked on at the end) and needlessly complicating the
> issue of input validation on both the client and the server sides.
>
> Taking Google's advanced_search interface a little further, RDFa could
> be used to describe the "results per page" control, and type it as an
> integer. A more advanced forms language could express the range that
> the server will accept. This allows client-side input validation.
> Google allows any value; what would make more sense would be to take
> their form control literally -- limiting results-per-page to a set
> number of values improves cacheability.
>
>>
>> then click the "Search" button, I get a bunch of results in SPARQL
>> results format.
>>
>
> No, it returns a representation as application/xml, which means I need
> to sniff in order to determine that it's a SPARQL result. To meet the
> self-descriptive messaging constraint of REST, the results would
> properly be sent as application/sparql-results+xml, but making that
> change alone won't make the API RESTful. As the results from an actual
> hypertext API, it makes a fine media type, although I'd personally tack
> on an XML PI to call some XSLT to transform it into HTML, assuming my
> hypertext interface was also HTML.
>
>>
>> and enter the query :
>>
>> select ?s where { ?s ?p ?o }
>> limit 10
>>
>
> How do I know what to enter, when instead of entering keywords for a
> search, I have to enter a query formatted in a manner not afforded
> through hypertext controls? A REST API would have one hypertext
> control for select=, providing me with the options the server has
> implemented. Instead of making users guess at what namespaces are
> supported, a REST API would provide that list as a hypertext control.
> The server tells the user-agent the parameters of the API, such that
> the user-agent only needs to fill in the search terms (keywords, not
> instructions, particularly not instructions which amount to tunneling a
> custom method like CONSTRUCT over POST).
>
> *That's* what I mean by providing instructions for how to execute a
> state transition, not urlencoding an opaque query language and letting
> the server sort it out. The goal of a REST API is not to encode query
> languages as URIs this way, it's to abstract away such implementation
> details behind a generic interface. No (reasonable) CMS based on SQL
> presents SQL queries as URIs or in hypertext, that implementation
> detail is abstracted away behind the interface, which is exactly how
> SPARQL can be made RESTful (as opposed to providing non-hypertext-API
> endpoints). The server converts the request into a SPARQL query for a
> back-end system in REST, as opposed to exposing a SPARQL endpoint -- no
> different from how SQL is handled in REST APIs.
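[Ed. note: one way to read the abstraction argument above is that the server, not the client, owns the query language: validated form fields come in, a SPARQL string is built behind the interface. A hypothetical sketch — the field names, escaping, and query shape are invented, not a production implementation.]

```python
# Hypothetical sketch: form data in, SPARQL out -- the query syntax never
# appears in any URI or representation the client sees.
def build_query(keyword: str, limit: int) -> str:
    if not (1 <= limit <= 100):
        raise ValueError("limit outside the range advertised by the form")
    safe = keyword.replace('"', '\\"')  # naive escaping, illustration only
    return (
        "SELECT ?s WHERE { ?s ?p ?o . "
        f'FILTER(CONTAINS(STR(?o), "{safe}")) }} LIMIT {limit}'
    )
```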
>
> There is one way I can think of to use SPARQL queries in a REST app,
> which is to POST or PUT a representation as application/sparql-query to
> some URI. Dereferencing that URI executes the query as a stored
> procedure, returning application/sparql-results+xml by default, but
> can also return the original query with Accept: application/sparql-
> query. I've used the eXist DB this way, creating cells containing
> XQuery, which is a nice way to create a Web app from an XML store.
>
> -Eric
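[Ed. note: the stored-procedure pattern Eric describes can be sketched with the requests a client would construct. The URI is invented, and the messages are only built here, not sent.]

```python
from urllib.request import Request

# Sketch of the stored-query pattern described above; the URI is hypothetical.
QUERY_URI = "http://example.org/queries/all-subjects"
SPARQL = "SELECT ?s WHERE { ?s ?p ?o } LIMIT 10"

# PUT creates or replaces the stored query as application/sparql-query.
put = Request(QUERY_URI, data=SPARQL.encode(), method="PUT",
              headers={"Content-Type": "application/sparql-query"})

# Dereferencing the URI executes the query; negotiating for the query
# media type returns the original query text instead of the results.
get_results = Request(QUERY_URI,
                      headers={"Accept": "application/sparql-results+xml"})
get_query = Request(QUERY_URI,
                    headers={"Accept": "application/sparql-query"})
```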
Bob Ferris wrote:
>
> thanks a lot for clarifying the SPARQL-to-REST relation.
>
You're welcome. As per usual, nobody has to read my long-winded explanations, but it does help me to write them...
>
> The thing I had and have always in mind was, of course, a more
> advanced query interface than a simple text box (so, sorry that this
> obviously causes misinterpretations).
>
The reason this causes misinterpretation is that the nature of the hypertext controls makes all the difference in the world as to whether or not an API is RESTful. I've not seen an example of a SPARQL endpoint that isn't just a textarea, so I assume that's what's meant by SPARQL endpoint. "RESTful SPARQL API" is a non sequitur to me, because if I were to implement SPARQL, none of its syntax would leak into the URIs or the representations (except to return application/sparql-results+xml if negotiated for) -- I'd have an RDF-aware "RESTful search API".
>
> So I can conclude that a SPARQL endpoint/SPARQL query interface, at
> least à la the advanced_search of Google, can be RESTful.
>
Right, the problem isn't creating an interface which *accepts* SPARQL syntax; the problem is creating an interface *for* SPARQL syntax. The drawback is that it takes some more work to realize the concept of cross-site queries than just knowing the SPARQL endpoint address for each site. A hypertext control for the number of results to return might be marked up differently on each site. RDFa allows those controls to describe themselves using a common vocabulary (which doesn't yet exist) for gathering search data (including locations and dates). Manipulating the controls depends on the user-agent's understanding of that vocabulary, plus whatever forms language is used. Note that when I mention RDFa, I'm talking about a layer above REST that's completely optional. My goal is to have the same API for humans and machines, and I believe RDFa allows one representation to service both types of user.
Anyway, RESTful search APIs (regardless of the technologies used to implement) with RDFa seem a more logical way forward to me than SPARQL endpoints (which I have a gut feeling will lead to "SPARQL injection" attacks, too much "surface area" for me).
>
> I didn't think that needs a separate (query and/or) result media
> type, since one is able to serialize such results also into
> representation formats of RDF e.g., RDFa.
>
You're right. The SPARQL media types may come in handy in some cases, while being irrelevant in others, but all achieve the common goal of returning the same list of links for the same query. Meaning there's more than one format for representing the same resource, which is why we have conneg; and that SPARQL media types aren't a prerequisite for a RESTful API which happens to use SPARQL on the backend.
>
> Even an interface like that of the advanced_search of Google isn't
> quite comfortable, or?
>
I chose Google as an example, to compare and contrast the homepage interface with the advanced interface. It could be more user-friendly, sure, but the point is that I've learned how to formulate queries without that interface, by using that interface -- it's a self-documenting API. I couldn't have learned Google search syntax from the homepage. SPARQL endpoints, as they currently exist, don't inform me how to formulate queries by using that interface (I'm expected to already know).
>
> I rather can imagine a kind of faceted browsing interface to
> formulate a query, where the end user didn't really get in touch with
> the statements behind.
>
Right; abstracting away the implementation details behind the interface is kinda the point. Or, "cool URIs don't change" (although URI design is only orthogonally related to REST). Searching a collection of cat photos for cats who look like Hitler is a goal.
If the implementation is a SPARQL endpoint which simply urlencodes the query, what happens to that URI when the system upgrades from SPARQL to (hypothetical) GLITR? Whereas abstracting away the specifics of SPARQL allows the backend to be changed, to construct a GLITR query from the same request instead of a SPARQL query -- without changing the hypertext, even, assuming a detailed interface (as opposed to 'enter SPARQL query here') and (optionally) a common search-form vocabulary. Maybe GLITR has more options, but the data I'm looking for needs to be collected regardless of search language, so the API for collecting that data shouldn't need to be changed -- aka "REST APIs don't need versioning." Design for longevity -- any implementation detail can be swapped out without breaking the system, provided it's been properly decoupled. Coupling your URIs to your back-end query syntax locks you into that choice (unless you figure out some hairy redirection algorithms). Implementation details, like SPARQL, should not impact your URI allocation scheme.
>
> This depends of course on the specific application domain, but
> generally one often needs such contexts like time or place. So
> selecting an appropriate time interval on a timeline interface, or
> selecting a place/area on a world map interface might be a better
> opportunity, or?
>
Yes. Assume the collection of cat photos includes birth/death dates. XForms processors include a nifty pop-up calendar date-picker for any field that's XSD-typed as a date. By manipulating the form, I discover the URI which returns "all living Hitler cats" based on choosing today's date and entering "hitler" as a keyword, etc. Or, I can just enter a date manually -- the nature of the control isn't important, only the nature of the data it collects. This self-documenting API has now given me all the information required to create a dynamic resource on an unrelated domain (serendipitous re-use), i.e.
a dynamic "all living Hitler cats" Web page which uses Code-on-Demand to get the current date from the user-agent, and uses that date to build the query URI. I can also learn, by driving the form, how to highlight applicable cats on their birthdays. Why should I have to re-code that page every couple of years when the service changes technology and breaks its old URIs? While "cool URIs don't change" isn't a constraint, following REST does tend to get you mostly there by encapsulating whatever back-end technologies are used, instead of wearing them on the ol' sleeve.

Automating a client to search multiple collections of cat photos is a problem; what I'm saying is that the solution needs to be approached from the perspective of the hypertext constraint (being of the Web), rather than the perspective of a common URI allocation scheme based on a query language (fighting the Web). The problem of mapping hypertext controls into query languages shouldn't involve the URI as a solution.

-Eric
Eric J. Bowman wrote:
> There is one way I can think of to use SPARQL queries in a REST app,
> which is to POST or PUT a representation as application/sparql-query to
> some URI. Dereferencing that URI executes the query as a stored
> procedure, returning application/sparql-results+xml by default, but
> can also return the original query with Accept: application/sparql-query.

Indeed :) One little question, though: what happens when somebody GETs the URI?

For example, given such a scenario I'd quite like to send people back some HTML, with a form in it, that allowed them to run test SPARQL queries and get back the "raw results", say by putting the query in a form element and submitting the form. Sound feasible / RESTful? If so, POST/PUT or GET?

PS: a little confused after reading the above "one way I can think of to use SPARQL queries in a REST app, which is to POST or.." and the mail you sent immediately before it saying "Please don't confuse a post about how POST isn't unRESTful, as saying that it's ever even remotely OK to use POST as a retrieval method." - I'm probably missing something obvious here, or perhaps a subtlety in interpretation.

Best,

Nathan
On 2 February 2011 15:40, Eric J. Bowman <eric@...> wrote:
>
> Danny Ayers wrote:
> >
> > For example, if I go to:
> >
> > http://api.talis.com/stores/bbc-backstage/services/sparql
> >
> [snip]
>
> When I go to that page, I see not even a clue about the nature of the
> interface, other than that I'll need the out-of-band knowledge of some
> query language to use it. Where are the instructions for how to
> transition to the next application state, given *any* goal? This is
> indeed an RPC endpoint, not a hypertext API.
>
The query box form is just one way of approaching a SPARQL endpoint - essentially just a debugging tool - and absolutely not typical of the kind of interfaces to be found in systems that use SPARQL. As I said before, it's misleading. I personally consider the behaviour of the query box as being RESTful, but arguing over that particular aspect is really missing the whole point of the endpoint.

I've left it a bit late to go through your points one by one, I'll re-read tomorrow. But for now I'll leave you with this to look at:

http://reference.data.gov.uk/doc/department/dft

The top half of the page should tick at least some of your boxes regarding hypertext. But the work is done by a SPARQL endpoint, as you can see if you scroll to the bottom of the page. Ok, there's a thin presentation layer on top, but basically it's just mapping nice-looking URIs to their ugly SPARQL counterparts, and formatting the ugly results so they look ok in an HTML browser. If you copy & paste the query into the form at:

http://services.data.gov.uk/reference/sparql

you can see the ugly versions. The pretty/ugly URIs and the pretty/ugly formats are effectively isomorphic, the difference being that the pretty versions are tailored for a regular HTML browser with a human sat in front of it, with the aid of a bit of JSON/Javascript.
HTML is a hypertext format by virtue of a user agent (usually a regular Web browser) being able to interpret the links in it as a means to the transfer of state via representations. The same goes for any other format - and the browser isn't the only kind of agent.

btw, the link to the endpoint on the Department of Transport page is broken, so I got that URI by looking at the source. Alas, there I discovered the form in the page uses POST for the query, which is absolutely inexcusable, especially given that a GET here yields the same results. I'll be having a word with someone about that! (The Linked Data API is still being drafted, see http://code.google.com/p/linked-data-api/ ).

Cheers,
Danny.

--
http://danny.ayers.name
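[Ed. note: Danny's GET-vs-POST complaint can be illustrated: a query is a safe retrieval, so it belongs in the URI of a GET, where it can be cached and bookmarked. A sketch — the endpoint is the one mentioned above, and the `query` parameter name follows the SPARQL protocol convention, but treat the details as illustrative.]

```python
from urllib.parse import urlencode

# Sketch: put the query in the URI and GET it -- cacheable and bookmarkable,
# unlike tunneling the same retrieval through POST.
ENDPOINT = "http://services.data.gov.uk/reference/sparql"

def query_url(sparql: str) -> str:
    return ENDPOINT + "?" + urlencode({"query": sparql})
```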
I would say that the proper criticism here isn't that SPARQL isn't RESTful, nor that it should be, but instead that the potentially expensive queries SPARQL enables are simply not suitable across trust boundaries. See (including the comments); http://www.markbaker.ca/blog/2006/08/sparql-useful-but-not-a-game-changer/ Mark.
PS. I discovered the form > in the page uses POST for the query, which is absolutely inexcusable, > especially given that a GET here yields the same results. I'll be having a > word with someone about that! (The Linked Data API is still being drafted, > see http://code.google.com/p/linked-data-api/ ). Issue reported: http://code.google.com/p/linked-data-api/issues/detail?id=10 -- http://danny.ayers.name
On Wed, Feb 02, 2011 at 09:31:39PM -0500, Mark Baker wrote:
> I would say that the proper criticism here isn't that SPARQL isn't
> RESTful, nor that it should be, but instead that the potentially
> expensive queries SPARQL enables are simply not suitable across trust
> boundaries. See (including the comments);

I think this is a key issue too, and it's something we were aware of when I did the work on openflydata.org in 2009. We explored some ideas around restricting the query language features exposed by an endpoint, to prevent some of the more obvious denial-of-service type vulnerabilities, which was part of the reason why we ended up rolling our own SPARQL protocol implementation [1]. There's a bit more discussion in the paper at [2]. Of course, even with restricted language features, you can still write hard queries if you know how, so this isn't a perfect strategy.

What we really wanted was to be able to place a hard limit on the amount of resources any one query could consume. A simple way to do this might be to kill queries that took longer than X seconds, a la SimpleDB [3]. My colleague Graham Klyne and I had a chat with Andy Seaborne about doing this with Jena TDB, which wasn't possible at the time, and Andy seemed to think it was doable (?) but there were details around how TDB executed queries and also how TDB's various optimisers worked that I didn't fully understand, and we never had the time to follow this through. I haven't done any recent work on SPARQL, so it may be that there are query engines out there that support this kind of thing out of the box.

So I guess I'm saying, you may be right, but I wouldn't discount the viability of open SPARQL endpoints just yet; I think the jury's still out.
Cheers Alistair [1] http://code.google.com/p/sparqlite/ [2] http://dx.doi.org/10.1016/j.jbi.2010.04.004 [3] http://docs.amazonwebservices.com/AmazonSimpleDB/latest/DeveloperGuide/index.html?SDBLimits.html -- Alistair Miles Head of Epidemiological Informatics Centre for Genomics and Global Health <http://cggh.org> The Wellcome Trust Centre for Human Genetics Roosevelt Drive Oxford OX3 7BN United Kingdom Web: http://purl.org/net/aliman Email: alimanfoo@... Tel: +44 (0)1865 287669
On 3 February 2011 03:31, Mark Baker <distobj@...> wrote:
>
> I would say that the proper criticism here isn't that SPARQL isn't
> RESTful, nor that it should be, but instead that the potentially
> expensive queries SPARQL enables are simply not suitable across trust
> boundaries. See (including the comments);
>
> http://www.markbaker.ca/blog/2006/08/sparql-useful-but-not-a-game-changer/

On "SPARQL is likely not going to enable new kinds of applications" - it's hard to disagree. However it does enable new ways of constructing applications, and that, while maybe not a game changer in the big picture, is a good step forward. Even if it was straight RPC, being able to access the data directly is very useful (I don't believe it is RPC; the query URIs are effectively just a systematic convention for mapping URIs to the information space).

Examples of new ways of constructing applications are presented in Leigh Dodds' screencasts at:

http://www.talis.com/platform/demos/

To save sitting through them (though they are watchable :) the BBC Data screencast there describes a simple app to browse data relating to reviews and reviewers, built using the following approach:

* public data from the BBC site is harvested and placed in an online triplestore (note *not* screen-scraped, they publish machine-friendly RDF with each page in their Music section)
* certain preset queries are created for patterns of interest in the new application
* those queries are given a browser/user-friendly facade

Having said all that, the trust boundary question is a big one. I don't think the problem is necessarily to do with expensive queries - for the majority of applications the shape of the required data and suitable sources will be known in advance; open-ended querying (/crawling) isn't necessary.
The appropriate data can be harvested (via site crawling or through running CONSTRUCT queries if a SPARQL endpoint is available), filtered (probably again using CONSTRUCT) and cached in a secondary store which will act as a cache. A slightly harder part is the issue of combining data from multiple diverse sources whilst retaining adequate provenance information to support 'trust management'. Most current setups are geared towards fairly large named graphs, rather than the little bitty ones you'd need for fine-grained processing. But quite a lot of work has already been done on this general area (there was a W3C incubator group, report: http://www.w3.org/2005/Incubator/prov/XGR-prov-20101214/). A much harder part is what you get when you throw access control into the mix. While some of the individual technologies seem to have a lot of potential (notably FOAF+SSL WebID : http://esw.w3.org/WebID), I don't think there's a compelling story yet on how they would work with multiple diverse data sources, which may each have their own authentication/authorization requirements. Cheers, Danny. -- http://danny.ayers.name
Mark Baker wrote:
> I would say that the proper criticism here isn't that SPARQL isn't
> RESTful, nor that it should be, but instead that the potentially
> expensive queries SPARQL enables are simply not suitable across trust
> boundaries. See (including the comments);
>
> http://www.markbaker.ca/blog/2006/08/sparql-useful-but-not-a-game-changer/

This supposes that a SPARQL query engine is always positioned on the server side - it's positioned at the edges of the network: sometimes on the client (backed by an HTTP cache and conditional GETs), sometimes it's a shared client accessible by HTTP GETs (w/ optional caching) and often accessible by "normal" URIs (no query string or the like), and sometimes "on the server".

This supposes that it's used for private data over a public interface - that's orthogonal; when people have private data they can secure access by any kind of auth* / use HTTP+TLS. Similarly, often the data /is/ open public data - there's a term.. "linked *open* data".

Now, I'm not saying "SPARQL" is perfect, but it's completely orthogonal to REST - how you implement, position and expose SPARQL is not, though. But don't think for a second that "sparql is always on the server side, never cached, always unsecured and always uses post", because that's completely wrong.

Best,

Nathan
Danny Ayers wrote: > A much harder part is what you get when you throw access control into the > mix. While some of the individual technologies seem to have a lot of > potential (notably FOAF+SSL WebID : http://esw.w3.org/WebID), I don't think > there's a compelling story yet on how they would work with multiple diverse > data sources, which may each have their own authentication/authorization > requirements. Danny, you simply place the query engine on the client side, operating over a web of linked data, using conditional GETs and HTTP caching, each data source can be ACL controlled in a granular fashion that way, and it's /very/ network friendly :) ps: not many people do this currently, but you can, virtuoso for example enables this, and there are js implementations in the works to encourage this pattern. (full read write ACL controlled web of linked data is still at proto stage) Best, Nathan
On 3 February 2011 10:21, Nathan <nathan@...> wrote:
> Danny, you simply place the query engine on the client side, operating over
> a web of linked data, using conditional GETs and HTTP caching, each data
> source can be ACL controlled in a granular fashion that way, and it's /very/
> network friendly :)

A very good point, a lot of good stuff can be done in a client (agent), although it's a slightly unfamiliar pattern for developers used to seeing the browser as the only client. (My personal view of a 21st century intelligent agent - agent in the AI sense - is that of a relatively stupid little unit composed of an HTTP client, (access from) an HTTP server, and a little bit of code/wiring to express its business rules.)

My reservation on the ACL front would be that currently a great deal of per-source manual configuration would be required to set something like this up (and modify it as requirements and sources evolve).

> ps: not many people do this currently, but you can, virtuoso for example
> enables this, and there are js implementations in the works to encourage
> this pattern. (full read write ACL controlled web of linked data is still at
> proto stage)

Right. I wasn't aware of how far Virtuoso had got on this, but it's good to hear that work is in progress.

Cheers,
Danny.

--
http://danny.ayers.name
Hi,
to all: I thought we already pointed out most of the mentioned parts a
bit earlier, no? Cf.
- http://tech.groups.yahoo.com/group/rest-discuss/message/17279
- http://tech.groups.yahoo.com/group/rest-discuss/message/17258
- http://tech.groups.yahoo.com/group/rest-discuss/message/17264
- http://tech.groups.yahoo.com/group/rest-discuss/message/17266
;)
Apologies for repeating myself here, but I think we shouldn't go round
in circles, no? Although, yes, such a conversation is always a bit
difficult. It's hard to deliver the intended meaning of a message from
its sender to its receiver(s).
Anyway, thanks a lot for having that nice discussion here.
Cheers,
Bob
PS: please pay attention to the referenced sources in the posts
PPS: the third reference in the second post should be
http://purl.org/ontology/is/core# or
http://infoserviceonto.smiy.org/2010/06/22/welcome/ ;) ("legacy system"
(existing non-Semantic-Web system) provenance information is crucial at
the moment)
Am 03.02.2011 10:41, schrieb Danny Ayers:
> On 3 February 2011 10:21, Nathan<nathan@...> wrote:
>
>> Danny, you simply place the query engine on the client side, operating over
>> a web of linked data, using conditional GETs and HTTP caching, each data
>> source can be ACL controlled in a granular fashion that way, and it's /very/
>> network friendly :)
>
> A very good point, a lot of good stuff can be done in a client (agent),
> although it's a slightly unfamiliar pattern for developers used to
> seeing the browser as the only client. (My personal view of a 21st
> century intelligent agent - agent in the AI sense - is that of a
> relatively stupid little unit composed of an HTTP client, (access from)
> an HTTP server, and a little bit of code/wiring to express its business
> rules).
>
> My reservation on the ACL front would be that currently a great deal
> of per-source manual configuration would be required to set something
> like this up (and modify it as requirements and sources evolve).
>
>> ps: not many people do this currently, but you can, virtuoso for example
>> enables this, and there are js implementations in the works to encourage
>> this pattern. (full read write ACL controlled web of linked data is still at
>> proto stage)
>
> Right. I wasn't aware of how far Virtuoso had got on this, but it's
> good to hear that work is in progress.
Bob Ferris wrote:
> to all: I thought we already pointed out most of the mentioned parts a
> bit earlier, no? Cf.

Yes, but not everybody has time to read lots of long posts, so a quick summary can sometimes suffice, together with taking forks in discussion off list. Nothing wrong with that. It is good to have a fuller discussion of the referenced issues for the archives, and for anybody following along, though :)

Best,

Nathan

ps: yes, I'm noting the irony in that I can often write loads of long posts in quick succession ;)
Nathan wrote:
>
> Eric J. Bowman wrote:
> > There is one way I can think of to use SPARQL queries in a REST app,
> > which is to POST or PUT a representation as
> > application/sparql-query to some URI. Dereferencing that URI
> > executes the query as a stored procedure, returning
> > application/sparql-results+xml by default, but can also return the
> > original query with Accept: application/sparql-query.
>
> Indeed :) one little question though, what happens when somebody GETs
> the URI?
>
"Dereferencing that URI executes the query..."
>
> For example, given such a scenario I'd quite like to send people back
> some HTML, with a form in it, that allowed them to run test SPARQL
> queries and get back the "raw results", say by putting the query in a
> form element and submitting the form. Sound feasible / RESTful? if
> so, POST/PUT or GET?
>
You're confusing me. ;-) My scenario executes one stored SPARQL query (by default, unless Accept: application/sparql-query) at a fixed URI. POST or PUT can create that URI (depending on whether the user-agent or the origin server assigns the URI), by uploading the query as a file. PUT may be used to replace the query, i.e. edit the file. Note that PUT will only Allow: application/sparql-query -- you can't edit the result, PUT that back, and expect the server to reformulate the query. Standard REST design pattern; I've used it with PHP, XQuery, SSJS, JSP, ASP... not always self-descriptively, as PHP etc. lack media types, and always access-restricted for methods other than GET/HEAD/OPTIONS.

You're saying you want GET to return HTML results with a form? Fine, add text/html to the conneg mix, return the results with a form in that representation, pre-fill the textarea with the current raw SPARQL query, and instruct the user-agent to PUT application/sparql-query to the URI upon submission.
>
> ps: a little confused after reading the above "one way I can think of
> to use SPARQL queries in a REST app, which is to POST or.."
> and the mail you sent immediately before it saying "Please don't
> confuse a post about how POST isn't unRESTful, as saying that it's
> ever even remotely OK to use POST as a retrieval method." - I'm
> probably missing something obvious here, or perhaps a subtlety in
> interpretation.
>
Yes, there's a nuance here that will lead some folks to believe that my example is no different than Mike's and resembles the SPARQL endpoint I'm griping about, and conclude that I've contradicted myself when I haven't. I'm not using POST to execute queries, SPARQL syntax hasn't leaked out into my URIs, and my API is *somewhat* self-documenting, in that the query isn't entirely opaque when presented with the results it generates.

-Eric
Hi Alistair, On Thu, Feb 3, 2011 at 3:52 AM, Alistair Miles <alimanfoo@...> wrote: > What we really wanted to be able to do was be able to place a hard limit > on the amount of resources any one query could consume. A simple way to do > this might be to kill queries that took longer than X seconds, a la simpledb > [3]. My colleage Graham Klyne and I had a chat with Andy Seaborne about doing > this with Jena TDB, which wasn't possible at the time, and Andy seemed to > think it was doable (?) but there were details around how TDB executed queries > and also how TDB's various optimisers worked that I didn't fully understand, > and we never had the time to follow this through. I haven't done any recent > work on SPARQL, so it may be that there are query engines out there that > support this kind of thing out of the box. That would certainly work in the sense of bringing the cost down, but it would be a shadow of a SPARQL endpoint from the point of view of client expectations, no? Its proper functioning would be dependent on far too many variables that the client has no control over. That could be remedied by the publisher documenting a set of queries which it can guarantee will complete in a reasonable time, because it has optimized specifically for them (indexes, caching, etc..) ... but then that's exactly what they'd be doing if they put an HTTP interface in front of that data. > So I guess I'm saying, you may be right, but I wouldn't discount the viability > of open sparql endpoints just yet, I think the jury's still out. I'd be happy to be proven wrong because it would clearly be awesome to be able to use SPARQL over the 'net. Alas, everything I know about the Web tells me I'm not. Mark.
Hi Mark, On Thu, Feb 03, 2011 at 03:25:42PM -0500, Mark Baker wrote: > Hi Alistair, > > On Thu, Feb 3, 2011 at 3:52 AM, Alistair Miles <alimanfoo@...> wrote: > > What we really wanted to be able to do was be able to place a hard limit > > on the amount of resources any one query could consume. A simple way to do > > this might be to kill queries that took longer than X seconds, a la simpledb > > [3]. My colleage Graham Klyne and I had a chat with Andy Seaborne about doing > > this with Jena TDB, which wasn't possible at the time, and Andy seemed to > > think it was doable (?) but there were details around how TDB executed queries > > and also how TDB's various optimisers worked that I didn't fully understand, > > and we never had the time to follow this through. I haven't done any recent > > work on SPARQL, so it may be that there are query engines out there that > > support this kind of thing out of the box. > > That would certainly work in the sense of bringing the cost down, but > it would be a shadow of a SPARQL endpoint from the point of view of > client expectations, no? Its proper functioning would be dependent on > far too many variables that the client has no control over. > > That could be remedied by the publisher documenting a set of queries > which it can guarantee will complete in a reasonable time, because it > has optimized specifically for them (indexes, caching, etc..) ... but > then that's exactly what they'd be doing if they put an HTTP interface > in front of that data. That's a good point. But I do wonder if there is still a middle-ground worth a bit of exploration. By that I mean, for any given endpoint, based on my experience with Jena TDB and the FlyBase dataset (~180m triples) [1] there will probably still be quite a large space of possible queries that will execute with low cost even without the service making indexing or caching optimisations specific to the data. 
E.g., try doing the example search at http://openflydata.org/flyui/build/apps/expressionbygenebatch/ with firebug open to see the underlying SPARQL queries. These queries are not trivial but usually complete in less than a few seconds (some are sub-second), and the endpoints are all hosted on a modest EC2 m1.small instance. No specific optimisations were made for any of these endpoints, beyond the use of the generic TDB statistics-based optimiser. (In case you were wondering, those are fruit fly embryos and testes you're looking at :) Don't get me wrong, I'm not trying to claim SPARQL will revolutionise the Web, and I haven't done any work with SPARQL since 2009, so I have nothing invested in it. But I do wonder if, for people who have an interesting dataset that they'd like to share with others, exposing their dataset via a SPARQL endpoint would be worthwhile, even if they limited resource usage. I.e., the data publisher would say, "here's my data, you can execute x queries per second, any queries longer than y seconds will get killed, here's some statistics about the data to help you figure out what's there, go explore". Any third parties then interested in re-using the data could try a few sparql queries to see if they were efficient, and if so, query the SPARQL endpoint directly in their mashup/... application. If the queries they needed turned out not to be so efficient, then they could begin a dialogue with the data provider about an HTTP interface that is optimised for a particular set of requirements, or they could harvest (via the SPARQL endpoint?), cache and index the data they need and do their own optimisation. I think this is especially interesting where a dataset is widely applicable. E.g., FlyBase stores reference data on the fruit fly genome, which is central to genomic research in Drosophila and is re-used in an extremely diverse range of applications. 
For FlyBase, they serve their community better by providing more flexible interfaces to their data, because they cannot possibly predict all requirements. (In fact, FlyBase currently provide an SQL endpoint on a best-effort basis, which anyone can use if they know where to find it.) And providing a query endpoint is a nice way of lowering the costs for third parties re-using your data. E.g., if it's a SPARQL endpoint, then re-using the data is as simple as writing a bit of Python/Perl/..., or putting some HTML and JavaScript on a web server, with very low infrastructure cost or complexity. This is more relevant where there are lots of small-scale data re-users that don't have access to hosting infrastructure, which is particularly the case in biological research. Having said all that, I've found a fair few queries that SPARQL is just crap at, and that will never work without data-specific indexing and caching. Graham Klyne recently got some funding to work with the Jena guys to look at extending SPARQL endpoints with multiple data-specific indexes [2]; that could be interesting, and I'm sure others are working in the same space. A few shades of grey worth talking about here? Cheers, Alistair [1] http://code.google.com/p/openflydata/wiki/FlyBaseMilestone3 [2] http://code.google.com/p/milarq/ -- Alistair Miles Head of Epidemiological Informatics Centre for Genomics and Global Health <http://cggh.org> The Wellcome Trust Centre for Human Genetics Roosevelt Drive Oxford OX3 7BN United Kingdom Web: http://purl.org/net/aliman Email: alimanfoo@... Tel: +44 (0)1865 287669
The Talis Platform (http://talis.com/platform) is a Software as a Service system providing SPARQL-capable RDF stores alongside stores for arbitrary content (HTML, blobs, whatever). It's particularly relevant here because access to the Platform is solely through HTTP (and I believe generally RESTful). I'm not up to date on developments so I sent a ping on Twitter; Sam (cc'd) responded: [[ At the moment, the query throttling in the Platform works rather naively. Just like Alistair with openflydata, we've restricted some language features. Some of this is explicit; some SPARQL 1.1 stuff we have enabled (like aggregates etc.), others disabled (property paths). Also, we don't support any extension or property functions at the moment, so I guess that restricts quite a lot of *potential* functionality. We also do the other thing that Alistair talks about in the mail thread: we time how long each query takes to execute and terminate those that run over a certain threshold, currently 30 seconds. Obviously, this is at best crude and is something we'd like to improve. One major shortcoming currently is that a terminated query returns just an error response, no results. A patch has recently been submitted to ARQ to allow terminated queries to at least return the results they've already got; Paolo is working on getting that accepted now. I know it's a bit apples & oranges, but do you think the SPARQL Uniform HTTP Protocol[3] is relevant to the thread on rest-discuss? ... [1] http://twitter.com/#!/danja/statuses/33094970680283136 [2] https://issues.apache.org/jira/browse/JENA-29 [3] http://www.w3.org/TR/sparql11-http-rdf-update/ ]] -- http://danny.ayers.name
Hi, I recently became aware of Fuseki[1], a SPARQL server. The claim on the wiki is: "It provides the REST-style SPARQL HTTP Update, and SPARQL Query and SPARQL Update using the SPARQL protocol over HTTP." I think the "hypermedia as the engine of application state" constraint is not fulfilled, is it? Cheers, Bob [1] http://openjena.org/wiki/Fuseki
Hi All, It's been asserted that the following is RESTful, so I thought it may be more beneficial to get the REST community's feedback on this particular 'REST API'. Hugh Glaser wrote: > I don't know if it is written anywhere, but there seems to me a bit of a consensus around this. > And it is folded into the RESTful stuff. > So for example > http://kmi-web05.open.ac.uk/REST_API.html > describes a typical service invocation with a URI as argument as: Would anybody like to comment on whether they feel this is RESTful, noting any REST mismatches they can see. Best, Nathan
Hi Nathan. I'm personally a little wary of anything that uses URI templating. In this case, always open to discussion, we have several URIs, each representing one "service". You can either think of it as one app in which you need to know the URIs beforehand to use it (or compose them like a template), or as several apps, each with one URI, that only return one particular value. I guess the second view does not violate any rule, but I'm still not sure we are taking advantage of all REST can provide. Maybe it is not needed. Maybe the API is simply not that rich. Cheers. William. --- In rest-discuss@yahoogroups.com, Nathan <nathan@...> wrote: > > Hi All, > > It's been asserted that the following is RESTful, so I thought it may be > more beneficial to get the REST communities feedback on this particular > 'REST API'. > > Hugh Glaser wrote: > > I don't know if it is written anywhere, but there seems to me a bit of a consensus around this. > > And it is folded into the RESTful stuff. > > So for example > > http://kmi-web05.open.ac.uk/REST_API.html > > describes a typical service invocation with a URI as argument as: > > Would anybody like to comment on whether they feel this is RESTful, > noting any REST mismatches they can see. > > Best, > > Nathan >
Hi All, I know the theoretical benefits of REST services. What I want to know in particular is: what are the motivations for migrating services to a REST approach (if there are any)? How can a service provider know whether s/he can migrate a service to a REST approach? Are there any tools that help migrate SOAP-based services to REST services, if it is wise to do so? I would appreciate it if someone could point to some academic papers on these matters.
On Feb 4, 2011, at 3:26 PM, Nathan wrote: > Hi All, > > It's been asserted that the following is RESTful, so I thought it may be > more beneficial to get the REST communities feedback on this particular > 'REST API'. > > Hugh Glaser wrote: >> I don't know if it is written anywhere, but there seems to me a bit of a consensus around this. >> And it is folded into the RESTful stuff. >> So for example >> http://kmi-web05.open.ac.uk/REST_API.html >> describes a typical service invocation with a URI as argument as: > > Would anybody like to comment on whether they feel this is RESTful, > noting any REST mismatches they can see. It is not RESTful because the URIs to use are provided at design time (as a service description) while they should be discovered via hypermedia at runtime. If anything is described about a particular service (instead of being defined in a media type and/or Link rel spec), the design cannot be RESTful. Put another way: in a RESTful design, there is *no* coupling whatsoever between clients and particular servers. All the contract is in the global specs (HTTP, URI, media types). Jan > > Best, > > Nathan
Bob Ferris wrote: > > I think the "hypermedia as the engine of application state" > constraint is not fulfilled, or? > Well, that's one of 'em. In REST circles, you'll encounter the phrase "HTTP != REST" which means that REST is a subset of the things you can do with HTTP, not the entire set of things you can do with HTTP. 99% of "REST APIs" out there are really HTTP APIs, all hail the power of the buzzword... -Eric
Eric J. Bowman wrote: > Bob Ferris wrote: >> I think the "hypermedia as the engine of application state" >> constraint is not fulfilled, or? >> > > Well, that's one of 'em. In REST circles, you'll encounter the phrase > "HTTP != REST" which means that REST is a subset of the things you can > do with HTTP, not the entire set of things you can do with HTTP. 99% > of "REST APIs" out there are really HTTP APIs, all hail the power of > the buzzword... and weirdly HTTP is not a superset of REST, as in you can't do everything with HTTP that REST indicates you could/should (the mismatches) fair comment?
Nathan wrote: > > Would anybody like to comment on whether they feel this is RESTful, > noting any REST mismatches they can see. > Sigh. HTTP != REST. This is a nonstarter. Makes a fine HTTP API, if only it would call itself that. -Eric
Limiting myself to one constraint, self-descriptive messaging. In the uniform interface, a list of links is presented using markup which unambiguously declares what a link is. This service returns one of:

text/plain -- no semantics whatsoever in the media type
application/json -- no semantics for 'link' in the media type
application/xml -- link semantics exist, but don't include the <SemanticContent> element.

Off the top of my head, I can think of two universal elements in the ubiquitous text/html media type -- <link> and <a> -- which are used to uniformly express the semantics of "contains a URI" to anyone and everyone. A list of <a> links could be wrapped inside <ul><li> to uniformly declare the response as containing a list of hypertext links. -Eric
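The point about text/html's uniform link semantics can be demonstrated with generic tooling: any client can recover "contains a URI" from <a> and <link> elements without out-of-band knowledge. A small Python sketch using only the standard library (the document and URIs are made up):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Recover 'contains a URI' semantics from <a>/<link> @href, per text/html."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        # The media type itself tells us these tags carry URIs -- no custom rules.
        if tag in ("a", "link"):
            href = dict(attrs).get("href")
            if href:
                self.hrefs.append(href)

# A hypothetical response: a <ul> of <a> links, self-descriptively a link list.
doc = "<ul><li><a href='/orders/1'>one</a></li><li><a href='/orders/2'>two</a></li></ul>"
parser = LinkExtractor()
parser.feed(doc)
```

The same extraction fails for text/plain or application/json responses, because nothing in those media types says where the links are.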
Nathan wrote: > > and weirdly HTTP is not a superset of REST, as in you can't do > everything with HTTP that REST indicates you could/should (the > mismatches) > > fair comment? > Absolutely. But, assuming a RESTful design, HTTP could be replaced in the future with Waka, HTTP 2, or whatever -- clearing up those mismatches inherent in HTTP 1.1 without actually changing the API. -Eric
> > Limiting myself to one constraint, self-descriptive messaging. In the > uniform interface, a list of links is presented using markup which > unambiguously declares what a link is. > Or, use the Link header, nowadays... > > Off the top of my head, I can think of two universal elements in the > ubiquitous text/html media type -- <link> and <a> -- which are used to > uniformly express the semantics of "contains a URI" to anyone and > everyone. > Actually, it's their @href which imparts those semantics... -Eric
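The Link header mentioned here carries the same link semantics at the HTTP layer (RFC 5988 Web Linking). A simplified Python sketch of parsing one; real-world headers have more edge cases (extra parameters, quoting rules) than this regex handles:

```python
import re

def parse_link_header(value):
    """Parse an RFC 5988-style Link header into {rel: target} (simplified)."""
    links = {}
    for part in value.split(","):
        # Match '<target>; rel="relation"' with optional whitespace and quotes.
        m = re.match(r'\s*<([^>]*)>\s*;\s*rel="?([^";]+)"?', part)
        if m:
            target, rel = m.groups()
            links[rel] = target
    return links
```

With this, a client hardcodes only standardized relation names ("next", "last", ...) and takes the URIs from the message itself.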
Now, if there's some sort of processing model we're dealing with here beyond just recognizing how to read URIs from a document, which isn't described by any existing media type, we'd have a situation where defining a new media type is called for. Self-descriptiveness would be satisfied by registering something like application/vnd.foo+xml, which could even be considered a standard, provided it isn't intended to transit the public Internet. If it is targeted at the Web, in order to truly be considered part of a uniform interface, a standards-tree registration is required -- backed up by a public standard as opposed to a vendor specification. But, in this case, any media type capable of expressing a list of links will do -- the more ubiquitous, the more RESTful, which is why text/html isn't a bad choice. -Eric
Eric J. Bowman wrote: > Now, if there's some sort of processing model we're dealing with here > beyond just recognizing how to read URIs from a document, which isn't > described by any existing media type, we'd have a situation where > defining a new media type is called for. > > Self-descriptiveness would be satisfied by registering something like > application/vnd.foo+xml, which could even be considered a standard, > provided it isn't intended to transit the public Internet. If it is > targeted at the Web, in order to truly be considered part of a uniform > interface, a standards-tree registration is required -- backed up by a > public standard as opposed to a vendor specification. > > But, in this case, any media type capable of expressing a list of links > will do -- the more ubiquitous, the more RESTful, which is why text/ > html isn't a bad choice. I'm quite sure that one can do everything needed with: <link ?@id @rel @href> <meta ?@id @property @content> then if one reduced and combined those two to say: <desc ?@id @rel @value> // where the semantics were described by the rel property used you'd be a long way there.. in fact you could just drop the <desc such that it was: @id @rel @value . and one could probably reduce that further to be @id @rel @value, @value ; @rel @value . or similar.. it appears that would cover everything possible.. dot dot dot
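The `@id @rel @value` reduction above amounts to subject/predicate/object statements. A toy Python sketch of storing and pattern-matching such statements (all identifiers and rel names are invented):

```python
def match(triples, s=None, p=None, o=None):
    """Pattern-match over (subject, predicate, object) tuples; None is a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Each statement is @id @rel @value, nothing more.
graph = [
    ("#order1", "rel:customer", "#alice"),
    ("#order1", "rel:total", "42.00"),
    ("#alice", "rel:name", "Alice"),
]
```

The semantics live entirely in the rel terms, so the "media type" shrinks to a single generic statement shape.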
On Fri, Feb 4, 2011 at 8:14 PM, Nathan <nathan@...> wrote: > @id @rel @value . > > and one could probably reduce that further to be > > @id @rel @value, @value ; > @rel @value . > > My god. It can "describe anything"! We should call it something... It describes resources... Hmm. We need a catchy name.... Anyone? -- -mogsie-
On Fri, Feb 4, 2011 at 5:22 AM, Bipin U <bipin_upd@...> wrote: > What are the motivations for migrating services to REST approach > (if there is any)? > Remember what REST is. It's an architectural style for a distributed hypermedia system[1]. If you're doing something that you can't see as a distributed hypermedia system, then adopting REST might not be a good approach. If you somehow find a holistic view of your system that _is_ a distributed hypermedia system, then maybe REST is a good approach. > Are there any tool that helps to migrate SOAP-based services to REST > services, if it is wise to do so? > Probably not, and any tools that do exist and claim to come from the large cohort of engineers who think that REST is the same as HTTP. Every SOAP-based service has its own architectural style, which almost never coincides with the architectural style of another SOAP-based service, and even less frequently coincides with the REST architectural style. Making a transition from _any_ architecture to something with a REST style involves a lot of rethinking: interactions, roles of the components in the system, the size and granularity of your domain entities, applying the constraints of REST to your problem. No tool can do this other than tools that help you understand the problem domain. If, however, you want to just slap together an HTTP API, then there are several tools out there. Just don't call them REST on this list. [1]: Fielding's thesis, chapter 5, first sentence. I can't stand another link to ics.uci on this list. sorry. -- -mogsie-
Am 04.02.2011 20:14, schrieb Nathan: > I'm quite sure that one can do everything needed with: > > <link ?@id @rel @href> > <meta ?@id @property @content> > > then if one reduced and combined those two to say: > > <desc ?@id @rel @value> > // where the semantics were described by the rel property used > > you'd be a long way there.. in fact you could just drop the <desc such > that it was: > > @id @rel @value . > > and one could probably reduce that further to be > > @id @rel @value, @value ; > @rel @value . > > or similar.. it appears that would cover everything possible.. > > dot dot dot Doh! Should this be a hint for using triples instead? ;) Am 04.02.2011 18:32, schrieb Eric J. Bowman: > Well, that's one of 'em. In REST circles, you'll encounter the phrase > "HTTP != REST" which means that REST is a subset of the things you can > do with HTTP, not the entire set of things you can do with HTTP. 99% > of "REST APIs" out there are really HTTP APIs, all hail the power of > the buzzword... Yes, that's the case, and it seems to be a largely accepted mistake in practice, isn't it? I guess it's even harder to convince the majority of the opposite. So the $1-million question is (maybe asked periodically here all the time): which service (or application in general) is 100% RESTful? Please name me one (I have tried hard to find one; even on Roy T. Fielding's "non-RESTful" blog post, this question was asked several times in the comments, without a response). Is it even possible? Even the "reference implementation", the Web (especially the HTTP protocol as part of it), can't currently satisfy all such constraints. Do we really need "hypermedia as the engine of application state" for services? Is it largely responsible for "user-perceived performance" (latency, which should be as low as possible)? Is "hypermedia as the engine of application state" only a feature for web-browser-like applications?
However, there are many more applications out there that communicate nicely via the Web. Would e.g. XAML be an option that can fulfil that constraint? Is the set of REST constraints as a whole maybe overrated as a way to achieve "scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components to reduce interaction latency, enforce security, and encapsulate legacy systems"? All the so-called "RESTful APIs" live mainly without "hypermedia as the engine of application state" and can often scale quite well while delivering good "user-perceived performance". Isn't it often rather the design and utilization of the server-side hardware and the Internet connection that is responsible for the scalability? So, if I create an application that is fully REST compatible, does this come with a "guarantee" of the above-stated properties? I really like true REST. However, the question is still: do we really need it? After all, not even "all components of the deployed Web obey every constraint present in" the REST architectural design. Cheers, Bob
Erik Mogensen wrote: > On Fri, Feb 4, 2011 at 8:14 PM, Nathan <nathan@...> wrote: > >> @id @rel @value . >> >> and one could probably reduce that further to be >> >> @id @rel @value, @value ; >> @rel @value . >> >> > My god. It can "describe anything"! We should cal it something.. It > describes resources... Hmm. We need a catchy name.... Anyone? descriptalot?
Bob Ferris wrote: > > I really like true-REST. However, the question is still: do we really > need it? Since not even "all components of the deployed Web obey > every constraint present in" the REST architectural design. > I'll probably write a more detailed response over the weekend. In short, REST is an idealized model of distributed hypertext system behavior. Think of it like a dog-breeding standard. Sure, a Malamute is a dog, but not if it has blue eyes... The important bit, excerpted from Chapter 6, all-caps mine: " The name 'Representational State Transfer' is intended to evoke an image of how a well-designed Web application behaves: a network of web pages (a virtual state-machine), where the user progresses through the application by selecting links (state transitions), resulting in the next page (representing the next state of the application) being transferred to the user and rendered for their use. REST is not intended to capture all possible uses of the Web protocol standards. There are applications of HTTP and URI that do not match the application model of a distributed hypermedia system. The important point, however, is that REST DOES CAPTURE _ALL_ OF THOSE ASPECTS OF A DISTRIBUTED HYPERMEDIA SYSTEM THAT ARE CONSIDERED CENTRAL to the behavioral and performance requirements of the Web, such that optimizing behavior within the model will result in optimum behavior within the deployed Web architecture. " REST is not a result, it's a tool, which is why nobody gets an answer when asked for a link to a RESTful service -- that's like asking for an example of the perfectly-conforming Malamute. All I can tell you about any individual Malamute, is where it meets or deviates from the various points of breed. Or whether it's so far off as to make it a Beagle, which is still HTTP... erm, uhhh... I mean, a dog. 
You can hook any sort of dog to a sled, but if you want dogs with the proper size, weight, eye pigmentation, coat characteristics and so forth such that it can reliably perform the task over the long-term in 40-below weather, then you'll need a dog that's been bred for those advantages. (Huskies are bred to slightly different dog-sledding needs, making a tradeoff on eye pigmentation acceptable.) You can pull a dogsled with Huskies, you just can't call them Malamutes, is the equivalent of "HTTP != REST". No Malamute is representative of that standard, it's just a guideline -- all Malamutes are merely implementations of that standard. Same thing here. I often use Google as an example, describing what it gets right and wrong in terms of REST, but I don't hold it up as an example *of* REST. But, aren't Huskies just as good at pulling sleds as Malamutes? Depends on the application. Malamutes aren't used by top Iditarod racers, but if you need to deliver vaccine to Nome, you'll want Balto... http://www.worldmals.com/history.htm -Eric
Am 04.02.2011 23:48, schrieb Eric J. Bowman: > > I'll probably write a more detailed response over the weekend. In > short, REST is an idealized model of distributed hypertext system > behavior. Yes, I'm absolutely aware of that. However, can we reach that "ideal" fully, or do we only ever approximate it? How can we then state that, if we reached this "ideal", it would actually deliver the claimed properties? Don't we need an exemplar? I think this discussion could drift further into philosophy here, although I guess that is not really necessary and not the aim of this mailing list. > > The important bit, excerpted from Chapter 6, all-caps mine: > > " > The name 'Representational State Transfer' is intended to evoke an > image of how a well-designed Web application behaves: a network of web > pages (a virtual state-machine), where the user progresses through the > application by selecting links (state transitions), resulting in the > next page (representing the next state of the application) being > transferred to the user and rendered for their use. > > REST is not intended to capture all possible uses of the Web protocol > standards. There are applications of HTTP and URI that do not match the > application model of a distributed hypermedia system. The important > point, however, is that REST DOES CAPTURE _ALL_ OF THOSE ASPECTS OF A > DISTRIBUTED HYPERMEDIA SYSTEM THAT ARE CONSIDERED CENTRAL to the > behavioral and performance requirements of the Web, such that > optimizing behavior within the model will result in optimum behavior > within the deployed Web architecture. > " > > REST is not a result, it's a tool, which is why nobody gets an answer > when asked for a link to a RESTful service It's bad when one gets no answer.
It's as if I could claim anything, and when someone asks why I claim it, I would answer that I don't have to tell them, or simply not answer at all (no answer is also an answer). Maybe hypermedia/hypertext is old hat now and we have entered other dimensions of interaction design. So if there are applications of HTTP and URI that do match the application model of a distributed hypermedia system (which is obviously the case), then there might be at least one which fulfils the constraints of REST. Otherwise, how do we know that "REST does capture all of those aspects of a distributed hypermedia system that are considered central to the behavioral and performance requirements of the Web"? Anyway, I now tend to be a bit more sceptical about the implementation of the whole set of architectural constraints of REST. Btw, "while mismatches cannot be avoided in general, it is possible to identify them before they become standardized". Non-RESTful REST implementations are today a kind of de-facto standard, aren't they? Cheers, Bob
OData seems to have taken a good stab at defining the structure of URIs among other things.. http://www.odata.org/developers/protocols/uri-conventions On Fri, Feb 4, 2011 at 12:57 PM, Eric J. Bowman <eric@...> wrote: > > > Now, if there's some sort of processing model we're dealing with here > beyond just recognizing how to read URIs from a document, which isn't > described by any existing media type, we'd have a situation where > defining a new media type is called for. > > Self-descriptiveness would be satisfied by registering something like > application/vnd.foo+xml, which could even be considered a standard, > provided it isn't intended to transit the public Internet. If it is > targeted at the Web, in order to truly be considered part of a uniform > interface, a standards-tree registration is required -- backed up by a > public standard as opposed to a vendor specification. > > But, in this case, any media type capable of expressing a list of links > will do -- the more ubiquitous, the more RESTful, which is why text/html isn't a bad choice. > > -Eric > >
Am 04.02.2011 20:07, schrieb Vivek Vaid: > > > OData seems to have taken a good stab at defining the structure of URIs > among other things.. > > http://www.odata.org/developers/protocols/uri-conventions I guess even OData is no more RESTful than any other so-called "RESTful" service. I might even go further: maybe web services by their nature do not need to fulfil the "hypermedia as the engine of application state" constraint? The UI would mostly be realized via another component (even if that component is a web site) of the application that consumes those web services. So why bother all the time about non-RESTful REST approximations? I can imagine that it hurts when someone applies a term where it is not really appropriate. However, that is a quite natural thing in our society that happens all the time. We cannot assume that everyone who uses a term is educated enough to be sure of applying it correctly. Curiously, we often understand each other even when employing such misconceptions (applying a term in a wrong way). I think Roy T. Fielding may have stopped worrying about this by now, too. We can't really prevent the non-RESTful use of the term when someone categorizes a service (or whatever) as RESTful. That's life ;) Cheers, Bob
It is often stated that RESTful services decouple client and server, as e.g. stated here [1]: "Coupling between client and server is removed, server owners need not know about client particularities to evolve the servers without breaking clients." But I think most server changes will break even RESTful clients. At least in business scenarios: 1. Think about changing your application protocol due to business changes. Can a client be generic enough to compensate for this? The link rel semantics of "next" are very clear. But what if I need a relation type which is not yet described? Which is too domain-specific? 2. What about cross-cutting concerns like security? If I switch from HTTP Basic auth to OAuth, can a generic client adapt to this situation automatically? 3. What if I have to evolve a media type which I'm using, and I need a new one? (for example by adding new link relations or data fields) In my opinion, I don't see a business value in supporting 100% REST style. It might work with "simple" application protocols like Atom (which is very nice, but also very generic). Maybe someone can enlighten me... [1] http://nordsc.com/ext/classification_of_http_based_apis.html
Jakob Strauch wrote: > It is often stated, that RESTful services decouples client and server, as e.g. stated here [1]: > > "Coupling between client and server is removed, server owners need not know about client particularities to evolve the servers without breaking clients." > > But i think, the most server changes will break even the RESTful clients. At least in business scenarios: > > 1. Think about changing your application protocol due to business changes. Can a client be generic enough to compensate this? The link rel semantics of "next" are very clear. But what if i need a relation type, which is not yet described? Which is too domain specific. Yes, create new relation types. > 2. What about the cross-cutting concerns like security? If i switch from HTTP Basic auth to OAuth, can a generic client adapt to this situation automatically? OAuth is not stateless and not RESTful; you separate the concerns by layering on auth and security, for example by using HTTP+TLS and doing auth at the TLS layer, before hitting the transfer protocol. Or you send the auth credentials with every request (such as with HTTP *** Auth). > 3. What if i have to evolve a media type, which i'm using, and i need a new one? (for example by adding new link relations or data fields) Make it generic enough to be evolved; you can do this by using relations heavily, or by using a well-defined media type like HTML. > In my opinion, i don't see a business value in supporting 100% REST style. It might work with "simple" application protocols like ATOM (which is very nice, but also very generic). > > Maybe someone can enlighten me... I think the key problem here is.. '' Like most architectural choices, the stateless constraint reflects a design trade-off. The disadvantage is that it may decrease network performance by increasing the repetitive data (per-interaction overhead) sent in a series of requests, since that data cannot be left on the server in a shared context.
In addition, placing the application state on the client-side reduces the server's control over consistent application behavior, since the application becomes dependent on the correct implementation of semantics across multiple client versions. '' '' All REST interactions are stateless. That is, each request contains all of the information necessary for a connector to understand the request, independent of any requests that may have preceded it. This restriction accomplishes four functions: 1) it removes any need for the connectors to retain application state between requests, thus reducing consumption of physical resources and improving scalability; 2) it allows interactions to be processed in parallel without requiring that the processing mechanism understand the interaction semantics; 3) it allows an intermediary to view and understand a request in isolation, which may be necessary when services are dynamically rearranged; and, 4) it forces all of the information that might factor into the reusability of a cached response to be present in each request. '' '' The application state is controlled and stored by the user agent and can be composed of representations from multiple servers. '' You've got your application state on the server, not the client. Thus, not RESTful, and concerns are not separated: you're essentially trying to drive server-side applications by using hypermedia as the engine of application state, without noting that the application state should be on the client side. It's the difference between adding products to a client-side basket and adding them to a server-side basket; one is RESTful, the other is not. Best, Nathan
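The client-side basket above can be sketched in a few lines of Python: the basket (application state) lives in the user agent, and every request it builds is self-contained, so the server needs no session. The URIs and request shape here are hypothetical:

```python
class Client:
    """User agent that holds the application state (the basket) itself."""

    def __init__(self):
        self.basket = []  # application state, never stored on the server

    def add(self, product_uri):
        self.basket.append(product_uri)

    def checkout_request(self):
        """Build a self-contained request: the full basket travels with it."""
        return {
            "method": "POST",
            "uri": "/orders",  # hypothetical resource
            "body": {"items": list(self.basket)},
        }

c = Client()
c.add("/products/7")
c.add("/products/9")
req = c.checkout_request()
```

Because the request carries everything, any server or intermediary can understand it in isolation, which is exactly what the stateless constraint buys.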
Jakob: (long-ish post...) A few things to consider when thinking about REST, 'loose coupling', and the examples you call out in your message... First, I think it's important to _not_ conflate loose coupling w/ evolvability over time. Several of your examples call out the possible negative consequences of making changes to an already-deployed system. Loose coupling _alone_ will not eliminate these problems. Second, Fielding identifies a set of Architectural Properties of Key Interest[1]. It is worth noting that loose coupling is not one of the items he identifies. He names seven properties there, one of which is Modifiability. He also breaks down Modifiability into five sub-properties. Two of them (Extensibility and Reusability), he states, can be encouraged through the use of loose coupling. Finally, to address the general examples in your question: HTTP AUTH HTTP Authentication was designed to be orthogonal, extensible, and negotiable. Web browsers today ship w/ support for a number of Auth schemes and will actively negotiate w/ servers for a 'best match.' Implementing solutions that support only one Auth scheme (e.g. OAuth) invites trouble for systems that must participate in heterogeneous distributed networks. Esp. if the server decides to _drop_ one scheme and replace it with another. A better tactic is to _add_ schemes over time, but not take them away. And only add schemes that support the HTTP Authentication extensibility|negotiation model. DOMAIN CHANGES As you point out, adding new 'concepts' to a domain space after deployment will cause problems. If you determine the new 'concepts' MUST be supported by ALL clients and servers, you are likely to invalidate all existing participants. However, if the new concepts are treated as optional, existing participants can continue to use the system alongside new clients and servers who can take advantage of the new concepts. Another approach is to add new 'concepts' via Code-on-Demand (scripts, plugins, etc.)
and thus push all the domain-specific details into hosted code on the client. The Web browser is a good example (again). This does not, however, solve the problem of required concepts for other servers.

MEDIA TYPE CHANGES
This is really the same issue as the DOMAIN CHANGES, I think. If you purposely make breaking changes (ones that do not support backward and|or forward compatibility) all clients and servers will need to be re-coded to accept the new media type. In my experience, making these breaking changes is rarely _required_ but is still often done. The W3C has a decent write-up on techniques to avoid making these mistakes [2].

MY UNSOLICITED ADVICE
I can pass along some techniques I employ when working to develop systems that can safely evolve over time. What follows is not "REST," just my personal approach to following Fielding's principles. You will probably find others on this list with similar _and_ contradictory advice.

Start with the Media Type(s)
Find one that will fit your Protocol (HTTP, FTP, XMPP) needs first. XHTML is still my weapon of choice<g>. If you decide to design your own type, do the hard work needed to make it fully support the target transfer protocols (HTTP, FTP, etc.). Also, be sure to make the media type design allow for evolvability w/o breaking existing clients or servers. Also, if all participants "code-to-the-media-type" many problems of selecting technologies, languages, etc. become moot points.

Clearly define your Domain Protocol
You need to be able to identify the domain-specific elements of your design clearly. Use of "@rel" can go a long way to _expressing_ domain concepts, but might not be enough. Custom element names (in XML/JSON formats) or element @id or @class values (HTML family) may also be needed to fully express your domain. The domain protocol is usually the most difficult to document and express adequately. Taking the time to do it right here will reduce many problems over the issue of evolvability over time.
Drive application flow through hypermedia links
Stay clear of "orchestration" or other static forms of app-flow management. Instead provide links and forms within every response that the client can use to advance the application. Above all DO NOT write any application-flow code into the client. This will be the first (and most-often) aspect of your system to 'break' when a change is needed. Switching to hypermedia-driven coding is a major shift (esp. for client coding), but worth the effort.

Keep Authentication completely orthogonal to your app-flow
Do not make "log-in/log-out" part of your work flow. Always model authentication as a layer using the extensible/negotiable HTTP Auth model. Manage authorization by mapping transfer protocol methods to identified resources. IOW, make sure the public resource model is the model for which you provide authorization - nothing else.

Maintain Separation of Concerns (SoC) between first-class elements of the system
The following items should be treated as separate concerns: URIs, resources, representations, authentication, processing, and storage. IOW, any of these aspects of the system should be change-able w/o breaking other parts of the system. Product names, frameworks, storage medium, data formats, auth schemes, public URIs, etc. should all be able to evolve independently over time without harming any participants.

I hope these comments give you some ideas on how to approach your own scenarios.

MCA

[1] http://www.ics.uci.edu/~fielding/pubs/dissertation/net_app_arch.htm#sec_2_3
[2] http://www.w3.org/2001/tag/doc/versioning-strategies

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010 http://rest-fest.googlecode.com

On Sat, Feb 5, 2011 at 11:26, Jakob Strauch <jakob.strauch@...> wrote:
> It is often stated, that RESTful services decouples client and server, as e.g.
stated here [1]: > > "Coupling between client and server is removed, server owners need not know about client particularities to evolve the servers without breaking clients." > > But i think, the most server changes will break even the RESTfuls´ clients. At least in business scenarios: > > 1. Think about changing your application protocol due to business changes. Can a client be generic enough to compensate this? The link rel semantics of "next" are very clear. But what if i need a relation type, which is not yet described? Which is too domain specific. > > 2. What about the cross-cutting concerns like security? If i switch from HTTP Basic auth to OAuth, can a generic client adapt to this situation automatically? > > 3. What if i have to evolve a media type, which i´m using, and i need a new one? (for example by adding new link relations or data fields) > > In my opinion, i don´t see a business value in supporting 100% REST style. It might work with "simple" application protocols like ATOM (which is very nice, but also very generic). > > Maybe someone can enlighten me... > > [1] http://nordsc.com/ext/classification_of_http_based_apis.html > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
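Amundsen's advice above to "drive application flow through hypermedia links" can be sketched minimally: instead of hardcoding URIs or sequences, the client looks up the transition it wants by @rel in the links the last response supplied. The link relations and URIs below are invented for illustration:

```python
# Hedged sketch of hypermedia-driven flow: the client advances the
# application only via links the server returned; it never assumes a
# URI or a fixed sequence of steps. All rels/URIs here are invented.

def find_link(links, rel):
    """links: list of {'rel': ..., 'href': ...} dicts from the last response."""
    for link in links:
        if link["rel"] == rel:
            return link["href"]
    return None  # the transition is simply not offered right now

response_links = [
    {"rel": "self", "href": "/orders/42"},
    {"rel": "payment", "href": "/orders/42/payment"},
]
next_uri = find_link(response_links, "payment")
```

If the server later renames or relocates the payment resource, only the href in its responses changes; a client coded this way keeps working, which is the point of the "DO NOT write any application-flow code into the client" advice.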
[...skipped the parts where I understand/agree with Eric's remarks...] "Eric J. Bowman" wrote: > > "Philippe Mougin" wrote: > > > > In the response, it seems also sensible to include information about > > what was processed and what was not, and why, i.e., what made the > > server unable to accept some of the data in the request. > > > Yes, that's the purpose of 202 and 409, but this is only sensible when > there's some sort of process requiring the user-agent's involvement in > confirming the change. If you're trying to save that round-trip, then > any information of that sort in a 200 response doesn't make any sense, > because the transaction has already completed. The transaction has completed but why wouldn't it make sense to return, with a 200, an entity that describes what went through and what didn't? That way, the client would know which part(s) of its request were "invalid". This seems useful information, as it may help, for example, someone on the client side (a user, a developer reading the logs, etc.) to correct the invalid information and resubmit it at some point. Philippe Mougin
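Philippe's suggestion -- a 200 response whose entity reports what went through and what didn't -- could be sketched like this. The field names, the flat accepted/rejected shape, and the validation rule are all invented for illustration:

```python
# Hypothetical sketch: the server applies what it can and returns, with
# a 200, an entity describing which parts of the submission were applied
# and which were not, so the client can correct and resubmit later.

def apply_updates(updates, valid_fields):
    accepted, rejected = {}, {}
    for field, value in updates.items():
        if field in valid_fields:
            accepted[field] = value        # this part of the request was applied
        else:
            rejected[field] = "unknown field"  # invented rejection reason
    return {"status": 200,
            "body": {"accepted": accepted, "rejected": rejected}}

resp = apply_updates({"name": "Ann", "shoe_size": 99},
                     valid_fields={"name", "email"})
```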
Hi Jakob,

On Feb 5, 2011, at 5:26 PM, Jakob Strauch wrote:
> It is often stated, that RESTful services decouples client and server, as e.g. stated here [1]:
>
> "Coupling between client and server is removed, server owners need not know about client particularities to evolve the servers without breaking clients."
>
> But i think, the most server changes will break even the RESTfuls´ clients. At least in business scenarios:

By design, in a RESTful system, changes to the server can never break clients[1]. The interesting thing is that REST achieves that by being rather strict on what can actually change. IOW, the RESTful server developer has more constraints/guidance on *what* to change than the RPC-server developer. Most of the time, it will come down to

a) compatible changes to a media type: add new things older clients ignore by design (e.g. HTML, ATOM)

b) incompatible changes to a media type: mint a new type and use content negotiation to answer clients based on their advertised (Accept header) capabilities.

Bottom line: in a RESTful system you *can* evolve the server and be *sure* that you can without calling the client owners (something that e.g. Amazon could never ever possibly do due to the number of connected user agents).

Does that help?

Jan

[1] Note that this does not cover the user intent because it is beyond the technical realm (with all connectors). For example, if you aim to buy a book on Amazon and Amazon stops selling goods but changes to be a site for looking up flight schedules, then your expectations break. This is true for browser-based purchases as well as automated agent based searches. And it would also be true if the user-side component (e.g. browser) talked to the server using RPC. It is a social aspect of networked, decentralized systems: no technology gives you the power to control the other side. It is only that RPC connectors give you the illusion you could, while REST makes it explicit that you can't.

> 1.
Think about changing your application protocol due to business changes. Can a client be generic enough to compensate this? The link rel semantics of "next" are very clear. But what if i need a relation type, which is not yet described? Which is too domain specific. > > 2. What about the cross-cutting concerns like security? If i switch from HTTP Basic auth to OAuth, can a generic client adapt to this situation automatically? > > 3. What if i have to evolve a media type, which i´m using, and i need a new one? (for example by adding new link relations or data fields) > > In my opinion, i don´t see a business value in supporting 100% REST style. It might work with "simple" application protocols like ATOM (which is very nice, but also very generic). > > Maybe someone can enlighten me... > > [1] http://nordsc.com/ext/classification_of_http_based_apis.html > > > > ------------------------------------ > > Yahoo! Groups Links > > >
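Jan's option (a) -- compatible changes that older clients ignore by design, the way HTML and Atom consumers skip unknown elements -- can be sketched as a "must-ignore" parser. The element names and the v1/v2 documents below are invented for illustration:

```python
# Hedged sketch of the "must-ignore" rule that makes a media type
# evolvable: a v1 client reads only the elements it knows and silently
# skips the rest, so the server can add elements later without breaking
# it. Element names are invented, not from any real media type.
import xml.etree.ElementTree as ET

KNOWN_V1 = {"title", "price"}   # the only elements a v1 client understands

def parse_item(xml_text):
    root = ET.fromstring(xml_text)
    # keep known elements, ignore everything else (forward compatible)
    return {child.tag: child.text for child in root if child.tag in KNOWN_V1}

# A v2 server added <rating>; the v1 client still works unchanged:
v2_doc = "<item><title>Book</title><price>9.99</price><rating>5</rating></item>"
item = parse_item(v2_doc)
```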
Doh, On Feb 7, 2011, at 8:43 AM, Jan Algermissen wrote: > well as automated agent based searches That should have been 'as well as automated agent based purchases' Jan
On 07 Feb, 2011, at 08:43 AM, Jan Algermissen <algermissen1971@...> wrote:

By design, in a RESTful system, changes to the server can never break clients[1]. The interesting thing is that REST achieves that by being rather strict on what can actually change.

I should have clarified that this is so because REST constrains several aspects of the communication to be uniform/global (e.g. the interface, the message 'format'). By not allowing these things to vary at the will of the server developer, REST isolates the 'location' of change. At the same time, REST provides features like content negotiation to gracefully handle change in those known areas.

There is really no magic, just a very clever separation of concerns.

Jan

P.S. RPC, in contrast, allows any aspect of the communication to change (besides being 'procedure call'-based). Therefore anything might happen due to a change, and the architecture cannot provide any general (not service-specific) support for dealing with change.
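The content-negotiation route Jan describes (option (b) in his earlier message) might look like the sketch below: the server mints a new media type for an incompatible change and keeps answering old clients with the old one, selected via the Accept header. The vendor type names are invented:

```python
# Hedged sketch of handling an incompatible media-type change via
# content negotiation. The vnd.* type names are invented; real Accept
# handling also weighs q-values, which this deliberately omits.

AVAILABLE = ["application/vnd.example.v2+xml",   # new, server-preferred
             "application/vnd.example.v1+xml"]   # kept alive for old clients

def negotiate(accept_header):
    accepted = [t.strip().split(";")[0] for t in accept_header.split(",")]
    for media_type in AVAILABLE:                 # server preference order
        if media_type in accepted or "*/*" in accepted:
            return media_type
    return None  # no acceptable type -> 406 Not Acceptable

old_client = negotiate("application/vnd.example.v1+xml")
new_client = negotiate("application/vnd.example.v2+xml, */*")
```

The point of the sketch is Jan's "isolated location of change": the server evolved, yet it never had to call the owners of old clients.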
Am 05.02.2011 17:26, schrieb Jakob Strauch:
>
> [1] http://nordsc.com/ext/classification_of_http_based_apis.html

While rereading the referenced classification from above, I stumbled over a few issues:

1. When describing a domain, the main issue is often not missing media types. I think one would rather quickly find an appropriate generic media type. However, the issue of describing a domain lies not only at the representation and process model level (which is generally also independent from the representation level, no?); the description of the domain itself is thereby very important - modelling the concepts of, for instance, 'user', 'order', 'offer' etc.

This is for me exactly the point where Semantic Web knowledge representation languages on top of the RDF Model can come into play. So I can still use a generic media type, i.e. the RDF Model, for realizing a common description, while serializing these descriptions into specific representation media types, i.e. XHTML+RDFa, and thereby maybe also extending the process model (based on a general description). Layering of media types was already advocated by Roy T. Fielding, no?

All in all, I think it depends more on the degree of existing appropriate Semantic Web ontologies to model (parts of) a domain than on existing media types. While there can (theoretically) exist a huge variety of both, and such a huge amount would decrease the simplicity property in both cases, I nevertheless think it is maybe better to have fewer media types and more (especially reusable) Semantic Web ontologies, rather than an equally high number of media types.

To summarize, I think the application, reuse and (if needed) creation of Semantic Web ontologies fit quite well for emphasizing the desired properties that should be reached when implementing the REST architectural style. The costs that have to be invested into good ontology design maybe align with the costs of proper media type design.
You maybe still addressed this issue somehow, when saying "media type (and link relation etc.) specifications". Anyway, I think it might be good to make this concern a bit more explicit. You might not explicitly propagate the utilization of Semantic Web ontologies, but please make aware of the generally existing 'description level' (cf. [1]), which (from my point of view) exists already, but is often more implicitly than explicitly available.

2. I think fulfilling the hypermedia-as-the-engine-of-application-state constraint is maybe still the hardest part. I cannot really imagine that "a transition from HTTP-based Type II to REST at a later point in time, however, is rather easy". Furthermore, "turning a HTTP-based Type II API into a REST API might be as easy as deleting the API documentation" is a bit paradoxical from my point of view. If I removed the API documentation, then this application couldn't really be an API any more, could it? If I want to program against something ( ;) ), I have to know how I can do that. A web browser, for instance, does not really program against something.

I think the term 'REST API' might be a bit inappropriate here (I still doubt that an implementation of a service which is fully REST compatible is possible). The given examples for REST are not really APIs, are they? - AtomPub is a protocol, OpenSearch a specification (a collection of media type specifications), "RESTifying Procurement" an approach of a proof-of-concept REST "service"(?) (I couldn't really figure out the current state of that project; however, it looks quite interesting). Although the chosen descriptor is still 'REST' and not "REST API", one might conclude that this could be a bit of an inappropriate classification, but the descriptions are explicitly suggesting the application of REST principles to the implementation of (Web) services ;)

3. Just a small issue: better "REST might be the best solution", rather than "REST is the best solution". I guess 'is' requires a kind of proof, no? Could we really provide a complete proof of this? - I currently don't think so.

That's all for the moment.

Cheers,

Bob

[1] http://infoserviceonto.smiy.org/2010/11/25/on-resources-information-resources-and-documents/
Well, there are a lot of interesting arguments. Thanks for that!

In the first place, "REST services" convinced me through their efficiency, clarity, light weight, support of links and broad connectivity. Personally, I'm fine with a nice HTTP API at the moment, following as many REST principles as possible (speaking of time and money investment). At the moment the trade-off of being fully REST-compliant is not high enough considering the investment. But that's for my own (business) scenario, where I control the clients, too.

By the way, I want to thank all participants in this discussion group for their valuable postings. I've been following this group for some months.

Cheers,

jakob

--- In rest-discuss@yahoogroups.com, Bob Ferris <zazi@...> wrote:
>
> Am 05.02.2011 17:26, schrieb Jakob Strauch:
> >
> > [1] http://nordsc.com/ext/classification_of_http_based_apis.html
>
> While rereading the referenced classification from above, I stumbled
> about a few issues:
>
> 1. When describing a domain it's often not a main issue of missing media
> types. I think, one would rather quickly find an appropriated generic
> media type. However, the issue of describing a domain lies not only at
> the representation and process model level (which is generally also
> independent from the representation level, or?), the description of the
> domain itself is thereby very important - modelling the concepts of for
> instance 'user', 'order', 'offer' etc.
> This is for me exactly the point where Semantic Web knowledge
> representation languages on top of RDF Model can come into play. So I
> can still use a generic media type, i.e. RDF Model, for realizing a
> common description. However, serialize these descriptions into specific
> representation media types, i.e. XHTML+RDFa, and thereby maybe also
> extending the process model (based on a general description). Layering
> of media types was already propagated by Roy T. Fielding, or?
> All in all, I think, it depends more on the degree of existing > appropriated Semantic Web ontologies to model (parts of) a domain, > rather then on existing media types. While there can (theoretically) > exist a huge variety of both and such a huge amount would decrease the > simplicity property in both cases; I nevertheless think, that it is > maybe better to have less media types and more (especially reusable) > Semantic Web ontologies, rather then an equal high amount of media types. > To summarize, I think the application, reutilization and (if needed) > creation of Semantic Web ontologies fit quite well for emphasizing the > desired properties that should be reached when implementing the REST > architectural style. The costs that have to be invested into good > ontology design maybe align with the costs of proper media type design. > You maybe still addressed this issue somehow, when saying "media type > (and link relation etc.) specifications". Anyway, I think, it might be > good to make this concern a bit more explicit. You might not explicitly > propagate the utilization of Semantic Web ontologies, but please make > aware of the general existing 'description level' (cf. [1]), which (from > my point of view) exists already, but is then often more implicit than > explicit available. > > 2. I think, fulfilling the hypermedia as the engine of application state > constraint is maybe still the hardest part. I cannot really imagine that > "a transition from HTTP-based Type || to REST at a later point in time, > however, is rather easy". Furthermore, "turning a HTTP-based Type || API > into a REST API might be as easy as deleting the API documentation" is a > bit paradox from my point of view. When I would remove the API > documentation than this application cannot really be an API any more, > or? When I would like to program against (?) something ( ;) ), I have to > know how I could do that. For instance a web browser do not really > program against (?) 
something. > I think the term 'REST API' might be a bit inappropriate here (I still > doubt that a implementation of service, which is fully REST compatible, > is possible). The given examples for REST are not really APIs, or? - > AtomPub is a protocol, OpenSearch a specification (collection of media > type specifications), "RESTifying Procurement" an approach of a > proof-of-concept REST "service"(?) (I couldn't really figure out the > current state of that project, however it looks quite interesting). > Although, the chosen descriptor is still 'REST' and not "REST API". So > one might conclude that this could be a bit inappropriated > classification, but the descriptions are explicitly suggesting the > application of REST principles on the implementation of (Web) services ;) > > 3. Just a small issue: better "REST might be the best solution", rather > than "REST is the best solution". I guess, 'is' requires a kind of > proof, or? Could we really provide a complete proof about this? - I > currently don't think so. > > > That's all for the moment. > > Cheers, > > > Bob > > [1] > http://infoserviceonto.smiy.org/2010/11/25/on-resources-information-resources-and-documents/ >
Bob Ferris wrote:
>
> All in all, I think, it depends more on the degree of existing
> appropriated Semantic Web ontologies to model (parts of) a domain,
> rather then on existing media types. While there can (theoretically)
> exist a huge variety of both and such a huge amount would decrease
> the simplicity property in both cases;
>

Don't forget portability, see Chapter 2.3.6.

>
> I nevertheless think, that it is maybe better to have less media
> types and more (especially reusable) Semantic Web ontologies, rather
> then an equal high amount of media types.
>

This is more a reality than an opinion, and exactly the point I've been trying to make for some months now. There's a reason that only a relative handful of media types are ubiquitous, and it's unreasonable to expect that even a hundred media types could be reliably implemented across the client, server, and intermediary components which make up the deployed infrastructure of the Web. That any system involves domain-specific vocabulary is pretty much a given; this is not justification for solving this problem through the creation of domain-specific media/data types.

>
> The costs that have to be invested into good ontology design maybe
> align with the costs of proper media type design.
>

No, they are infinitely less. Since an ontology has no effect on messaging amongst connectors, no standardization is required, allowing the "invisible hand of the market" to decide which ontologies are most useful. GoodRelations is a de-facto standard (unless and until something better comes along), not an official one. An ontology is an order of magnitude less work to design than data/media types, even without considering the high opportunity cost of marshalling a standards effort (which takes years, followed by even more years until new standardized media types achieve ubiquitous deployment). Which is why I believe that REST must be the basis for the Semantic Web.
The two disciplines solve different problems at different layers, separating concerns nicely. The major impediment to SemWeb these days, as I see it, is the failure of its deployed examples to follow proper architecture for the medium. The problems Roy addresses in Chapter 2.3.1's second paragraph apply to any and all uses of the Web. REST solves for the fundamental problem of the nature of *any* network, i.e. getting either data or agents from point A to point B, which constrains the set of applicable points of architecture (this is why the thesis only considers *networked* and excludes *desktop* software architectural styles).

>
> 2. I think, fulfilling the hypermedia as the engine of application
> state constraint is maybe still the hardest part. I cannot really
> imagine that "a transition from HTTP-based Type II to REST at a later
> point in time, however, is rather easy".
>

I agree; the cache constraint is rather easy to fiddle with over time, but the uniform interface constraints are not at all easy to correct if they're gotten wrong initially.

>
> Furthermore, "turning a HTTP-based Type II API into a REST API might
> be as easy as deleting the API documentation" is a bit paradox from
> my point of view.
>

In some cases, it's true. This is kinda the point of the hypertext constraint -- if the API is self-documenting, what's left to explain? See this discussion, for an example of a self-documenting REST API, both before and after my modifications to its code:

http://tech.groups.yahoo.com/group/rest-discuss/message/17057

Fully RESTful, like many Web interfaces in the wild, just not sexy enough to meet folks' expectations of what REST is.

>
> When I would remove the API documentation than this application
> cannot really be an API any more, or? When I would like to program
> against (?) something ( ;) ), I have to know how I could do that.
>

Exactly the point of the hypertext constraint.
Prior discussion on the meaning of "API" may be found here:

http://tech.groups.yahoo.com/group/rest-discuss/message/16052

>
> For instance a web browser do not really program against (?)
> something.
>

Browsers are coded against standardized media types; it's this shared understanding of the processing model which makes out-of-band documentation mostly irrelevant. If your documentation has to explain to me that your media type means that <url> contains a URI, then you're doing too much work -- you don't need to tell me that, in text/html, @href contains a URI (if you have, then your documentation probably can be deleted without consequences).

>
> I think the term 'REST API' might be a bit inappropriate here (I
> still doubt that a implementation of service, which is fully REST
> compatible, is possible).
>

A REST API is what you, the developer, create. It is possible to fully implement the constraints, to the extent that the chosen protocols implement them. The fact that HTTP 1.1 doesn't fully implement self-descriptive messaging doesn't mean you can safely ignore using registered media types -- that aspect of the constraint is something you do have control over. Of course, the complexity of meeting the constraints (to the overall, long-term benefit of simplicity) rises with the complexity of the system.

>
> The given examples for REST are not really APIs, or?
>

Most examples of REST are examples of things which aren't really REST. Furthermore, just what sort of canonical example are you expecting from a tool which may be used to guide the development of transfer protocols like HTTP, application protocols like AtomPub, and APIs which implement one or even both of the former? More detailed response pending for the other thread on this topic, still...

>
> 3. Just a small issue: better "REST might be the best solution",
> rather than "REST is the best solution". I guess, 'is' requires a
> kind of proof, or? Could we really provide a complete proof about
> this?
- I currently don't think so. > Disagree. If you're developing a distributed hypertext system, then Roy's thesis is scientific proof that there's a set of constraints which describe an optimal behavioral model. Said thesis is subject to falsification, which hasn't occurred. The constraint that's most susceptible to falsification would be client-server, in an age of P2P protocols, but I've not seen it. Considering all the peer review Roy's thesis has had, with proof evident in the success of the Web, I'd say that if anything was overlooked or outright wrong, somebody would have spotted it by now. -Eric
Bob Ferris wrote:
>
> I think we can lead here this discussion even more into philosophy.
> Although, I guess, this is not really necessary here and not intended
> by the aim of this mailing list.
>

Of course it is. Roy's thesis is a doctoral dissertation; Ph.D. doesn't stand for "pile it higher 'n' deeper" for nothin'. Science can be described in a nutshell as conceptualizing working models of objective reality through the use of precise terminology. Such models are always subject to falsification, revision, and updating -- see ARRESTED and CREST. I haven't seen any convincing falsification of REST's constraints, though. IOW, REST is, at heart, a philosophical model of a real-world system. The second paragraph of section 2.3.1 is one example of the fundamental facts constraining distributed hypertext systems.

At this point, I believe the Internet has become a force of nature. Anything you're going to try to do with it has to abide by certain natural laws, governing the transfer of data or agents from point A to point B. REST is the best explanation for how the staggering, exponential growth of the Web was even *possible* when, in the early days, its bandwidth and IP-address requirements threatened to consume the Internet itself.

2011 marks the year when we've passed the "peak oil" moment in IPv4. Enough folks got enough of REST right that the Web wasn't a victim of its own success -- the exponential growth continues, and is finally resulting in the implementation of IPv6. Wow. I mean, just, wow. Hats off to Roy et al on that one. REST appeals to me because it explains to me that this didn't happen by chance but rather, by design. Much of REST is up to the protocol, not the API developer, to be gotten right or not. Obviously, somebody got something right. I'm not some fanboy putting this on par with E=mc2; they're both philosophical models, but unlike physics, you can't generally express computer science concepts in mathematical terms.

> schrieb Eric J.
Bowman:
> >
> > I'll probably write a more detailed response over the weekend. In
> > short, REST is an idealized model of distributed hypertext system
> > behavior.
>
> Yes, I'm absolutely aware of it. However, can we reach that "ideal"
> fully, or do we only try approximate it all the time?
>

REST is a tool, not a result. I'm not even *trying* to "implement REST" in any system I design. I'm trying to use the principles in Roy's thesis to derive the best architecture for a system's needs. My key takeaway from REST and AWWW architecture from 50,000 feet: the Web is, by nature, a distributed hypertext system. So the first thing I do when approaching any problem space is to conceptualize the solution within that idiom. Maslow's hammer is what happens when one assumes that REST is the one true solution to any problem whose solution involves HTTP.

If the problem space I'm analyzing doesn't make sense within the distributed hypertext solution idiom, then I'm not really interested in working on that project, as that is not where my expertise lies. So I would advise that REST is *not* the proper solution, in such cases. If we are dealing with anything which *can* be conceptualized as a distributed hypertext system, then I'm interested in how Roy's thesis applies to its design, particularly its implementation on the Web via HTTP. But it also applies to designing protocols like HTTP, which is why it's hard to answer the example question...

Malamutes have distinctive facial markings, either the "mask" or the "cap", both of which are perfectly OK in (and required by) the breed standard. But no individual Malamute could possibly exhibit both and still be called a Malamute, so pointing to one implementation or the other can be even more confusing than explaining how to use REST as a (non-golden-hammer) tool. What points of breed the Malamute breeder is after is irrelevant to a discussion about the line-breeding of dogs.
IOW, REST is properly a philosophical consideration, particularly since any example you can point to on the Web, is based on an implementation of REST known *as* the Web. My posts actually are example-driven, and the work I've posted does tend to have pragmatic value because I've always felt REST explains conclusions I'd already come to on my own, by developing websites for several years before REST was even written. But, I always try to qualify that I'm talking about pragmatic considerations of Web development more than REST purity. I discuss the REST mismatches in my system, why they're there, what if anything I intend to do about them, and how I prioritize those over time. How can I do a cost-benefit analysis over the lifetime of a project, if I have no idealized model against which the system may be compared? > > How can we then state that when we would reach this "ideal", that > exactly this "ideal" can emphasize the propagated features? Don't we > need an exemplification? > Actually, if one goes back to my debut de-lurk on this list, one will find quite an entertaining brawl between myself and Roy on this very issue. I took Roy's counter-example as an insult, before it dawned on me that he was serious (if not obscure). The fact that I've come full-circle and am now defending the position I argued against back then, makes me recall this little gem: http://www.youtube.com/watch?v=mlykr-vUtoQ (Dennis Miller master-debating himself on Supernews) One reason this is so, is that any time I've posted an example of my work over the years, it's been bound to be taken the wrong way by some, i.e. "Eric says all Malamutes should have 'mask' facial markings." No, that's simply my preference. My pragmatism, like stating that server- side image rotation isn't a POST, but GET /image.jpg?rot=90&flip=vert touches off reams of controversy where anyone can take an ad-hominem shot that this is only "my opinion" when I only mean it as one example of a RESTful interaction. 
Others are possible, but not by using POST to query, in this architectural style, so of _course_ it's my opinion, and REST isn't about URI design, so of _course_ YMMV. What folks seeking to learn REST by example take away are all the wrong things, instead of the design patterns the examples illustrate. Which is why I've always been skeptical that REST is something only Roy understands, and nobody else can possibly learn well enough to impart to others -- a theme I do tend to get back to now and then, made all the more frustrating by the fact that nobody wants to take anyone's word but Roy's on even the most fundamental truths REST reveals. My example of a RESTful weblog, most often leads to the conclusion that REST (and Atom) is fine for that purpose, but impractical for anything "serious". So the example hurts the learning process for anyone seeking the instant gratification of understanding REST comprehensively, by simply looking at an implementation or two, because they fail to apply the example outside the context in which it was given. > > > > > REST is not a result, it's a tool, which is why nobody gets an > > answer when asked for a link to a RESTful service > > It's bad, when one gets no answer. It's like that I can claim > everything and if someone asks why I claim this, then I would answer > that I don't have to answer you that or simple don't answer at all > (no answer is also an answer). > From the other side of the fence, it's bad when efforts like mine to take a proprietary development effort and distill it into examples I'm willing to share, get assaulted every which-way under the sun, to the point of being flamed for the broken links. Broken links are very much a part of the style, however. I'm sure the CMS Roy works on at his day job must be RESTful, but it's also proprietary, which many RESTful systems are because it's been more useful in controlled settings than on the Web (until only recently). 
Compare the number of developers today who know that methods other than GET or POST even exist, with that number from a decade ago. The model isn't always followed, which hampers adoption of the model, and leads to the pervasive belief that also using PUT and DELETE to do CRUD must be what Roy's on about (i.e. HTTP == REST). > > Maybe hypermedia/hypertext is now an old hat and we have now entered > other dimensions of interaction design. > Solid fundamentals. I don't like being called religious on the issue, but you could call me a fundamentalist. I often say that there's no best architecture, only the architecture that's best for your system. If you're modeling a distributed hypertext system, then REST applies, even when you're extending it (again, see ARRESTED and CREST). E=mc2 turns out not to have been the final model, according to Hawking, but it certainly allows one to solve equations accurately enough to get real-world work done because it was extended, not falsified. The worst mistake any Web developer can make, is to assume that REST has somehow become obsolete. Particularly because it's a tool, not a result. > > So if there are applications of HTTP and URI that do match the > application model of a distributed hypermedia system (which is > obviously the case), then there might be at least one which fulfils > the constraints of REST. Otherwise, how do we know that "REST does > capture all of those aspects of a distributed hypermedia system that > are considered central to the behavioral and performance requirements > of the Web". > I think the whole point of ARRESTED and CREST is that REST doesn't capture certain aspects of what folks want to do with the Web, which is unsurprising. Assuming that REST needs to be extended to solve any particular problem, represents a fundamental misunderstanding of the nature of REST as a tool. 
Knowing where an implementation deviates from the ideal model, and what the consequences are, is the knowledge a disciplined approach to REST gives me. I'd be developing blindly without it. > > Anyway, I tend now to be a bit more sceptical about the > implementation of the whole set of architectural constraints of REST. > That's because you're thinking about REST in terms of implementing a set of architectural constraints, i.e. as a result not a tool. Roy's dissertation, in Chapter 2, lays out the desirable properties of a distributed hypertext system. It then goes on to explain how each of those properties is manifested in various architectural styles. REST itself, explains what constraints are needed from those other styles, to bring about the desired properties for distributed hypertext systems. So a disciplined approach to REST involves listing those desirable properties which are relevant to your system, and determining which are of immediate concern and which are not. One may then apply those constraints, and only those constraints, which are consistent with those goals -- or add any other constraints whose goals aren't covered by REST (see ARRESTED, CREST). And ignore those constraints which are not relevant to the situation, but hopefully leaving an opening to implement them should the situation change. I'm all about using REST to inform decision-making, not dictate it, which is what I mean when I say it's a tool not a result. > > Which service (or in general application) is 100% RESTful? > As long as the realized architecture is appropriate for the system, what does it matter? The problem with example-seekers, is they're after examples of 100% RESTful solutions to their specific problems, which may not exist; and they don't accept a link like this: http://www.iana.org/cgi-bin/mediatypes.pl Because it doesn't fit with their preconceptions of REST, or illustrate what they're trying to do. 
But that doesn't change the fact that there's no constraint violation there, aside from those inherent in the HTTP protocol -- 100% RESTful job by the API developer. > > (I tried all the time hard to find one; even in the "non-RESTful" > blog from Roy T. Fielding, this question was asked several times in > the comments; however, without a response). > Why should Roy's blog be fully RESTful? It gets the job done. But, WordPress makes a fine example of the problems REST is meant to solve. Roy's just publishing a single weblog; wordpress.com attempts to host millions of weblogs on an architecture which doesn't scale to that need. The barely-two-nines uptime of wordpress.com in 2010 screams architectural problems -- none of which surprise me, because they're predicted by REST. The reason my weblog demo exists, but my weblog doesn't, is that I'm not trying to simply publish a weblog -- I'm developing a RESTful blogging system which *does* scale to host millions of weblogs, far more efficiently than does WordPress MU. When that's done, I'll have a weblog; if that weren't my goal, I'd just run WP and be done with it, too. Pragmatism is the rule, but it isn't pragmatic to attempt scaling WordPress into a hosting platform. > > Do we really need the "hypermedia as the engine of application state" > for services? > No. You need to apply the hypertext constraint to induce in your system the desirable properties of scalability, evolvability, and visibility. If these properties aren't important to the problem space, then applying said constraint would only be done for brownie points -- which is using REST as a result, not a tool. I'm more impressed with architectures which are appropriate to the system being designed, than I am with ideological purity. > > Is it highly responsible for "user-perceived performance" (latency, > which should be minimum as possible). > User-perceived performance depends on the design of the media type. 
It doesn't matter if the transfer hasn't completed, if it's already started rendering on-screen. IOW, latency isn't measured by file-transfer time, but by the time it takes to begin rendering. Consider the pattern for HTML tables: <table> <thead> <tfoot> <tbody> </table> If <tfoot> came after <tbody>, then the entire <tbody> would need to finish transferring before <tfoot> could be rendered on-screen. This is exactly why <tfoot> comes before <tbody> -- to allow the table to be rendered before all its rows are transferred. In non-browser terms, the data type needs to be stream-processable. Which is another gripe I have with custom data types -- progressive rendering never seems to be taken into account. But, that doesn't have to do with the hypertext constraint, which can be met just as easily with a data type that isn't capable of progressive rendering. > > Is "hypermedia as the engine of application state" only a feature for > web-browser-like applications? > No, a call-control system using CCXML and VoiceXML has nothing to do with browsers. If such a system were RESTfully designed, then its behavior would mimic that of a well-designed Web application, as Roy explains in the fourth paragraph of Chapter 6.1. This is the problem of examples -- if all linkable examples are Web applications, then the conclusion is that REST must only be for Web applications. The proper takeaway is that a RESTful call-control system and a RESTful Web API will exhibit the same behavior, because they use the same pattern of distributing a uniform connector interface as hypertext. > > Is the set of REST constraints as a whole maybe overrated? > Yes, because the forest gets missed for the trees. To dust off my Grateful Dead analogy, there were some shows where Jerry Garcia's presence was merely physical -- all the constraints were there, but the results crossed the line between music and noise, because there was no harmony between them. 
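[Editorial sketch] The stream-processability point above can be illustrated with Python's stdlib html.parser, whose feed() method accepts input incrementally, so a client can act on a table's rows while the document is still transferring. The document and chunk size below are made up for illustration, not taken from any real system.

```python
# Minimal sketch of stream-processing a media type: html.parser buffers
# incomplete markup across feed() calls, so we can process the document
# in small chunks as if it were still arriving over the network.
from html.parser import HTMLParser

class RowCounter(HTMLParser):
    """Counts table rows as soon as their start tags arrive."""
    def __init__(self):
        super().__init__()
        self.rows_seen = 0

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.rows_seen += 1

# Hypothetical document following the <thead>/<tfoot>/<tbody> pattern.
document = ("<table><thead><tr><th>h</th></tr></thead>"
            "<tfoot><tr><td>f</td></tr></tfoot>"
            "<tbody>" + "<tr><td>x</td></tr>" * 100 + "</tbody></table>")

parser = RowCounter()
# Feed the document in 64-byte chunks, as if the transfer were ongoing.
for i in range(0, len(document), 64):
    parser.feed(document[i:i + 64])
print(parser.rows_seen)  # header row + footer row + 100 body rows = 102
```

Whether a custom media type permits this kind of incremental consumption is exactly the progressive-rendering concern raised above.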
Enlightenment lies in the interplay between the constraints. You can't have caching without layering; but even with caching and layering, if your resources are improperly identified there will be a detrimental impact on caching overall, so the system won't exhibit the desired characteristics even though each constraint is implemented. > > All the so-called "RESTful APIs" live mainly without the "hypermedia > as the engine of application state" and can often scale quite well by > delivering a "user-perceived performance". > Chapter 2 lists far more desirable characteristics, than scalability alone. A highly scalable system that can't adapt to circumstances which change over time, isn't of much value. Nobody has ever claimed that tightly-coupled systems don't work, just that they aren't very evolvable, maintainable, visible, portable etc. > > Isn't it often even more the design and utilization of the > server-side hardware and the Internet connection that is responsible > for the scalability? > No. Take another look at the definition of scalability -- the ability for anyone on the entire planet to view HTML, is "Internet scale". It doesn't apply to custom media types, which are understood by so few clients that the problems of "Internet scale" never manifest themselves in terms of resource utilization (REST solves the "slashdotting" problem, but that problem only occurs when publishing ubiquitous media types like HTML in the first place). From Chapter 6.5.1: "[P]erformance is only bounded by the protocol design and not by any particular implementation of that design." My PHP-driven demo has the latency of a bowel movement, which matters not one whit on cache hits out on the network -- the style compensates for such implementation deficiencies, allowing my implementation to scale far beyond what the server is capable of, for example by caching. 
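[Editorial sketch] The caching point above can be modeled with a toy calculation, under loudly stated assumptions: a perfect shared cache that never expires, a fixed set of popular resources, and uniform request counts. All numbers are hypothetical illustrations, not measurements.

```python
# Toy model: a perfect shared cache in front of an origin server.
# Assumptions (all hypothetical): every user draws requests from the
# same fixed set of popular resources, and cached entries never expire.
def origin_load(users, requests_per_user=10, distinct_resources=50):
    """Total requests grow linearly with users, but the origin only
    sees one miss per distinct resource once the cache is warm."""
    total_requests = users * requests_per_user
    misses = min(total_requests, distinct_resources)
    return total_requests, misses

for users in (1, 10, 100, 1000):
    total, misses = origin_load(users)
    print(f"{users:>5} users: {total:>6} requests, {misses} hit the origin")
```

The origin's load flattens out while total traffic grows linearly, which is what a cache-friendly design buys regardless of how slow the origin implementation is.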
I can always replace the server code, without changing the API, to implement the protocol more efficiently within the CPU and bandwidth limits already established. To me, high performance means that resource utilization rises in a sub-linear fashion as more users are introduced to the system. Others call this scalability, but I go by the thesis when discussing REST. Anyway, it's the design of the protocol and the API which constrains server-side resource utilization. Look at how many concurrent WebSockets requests a given server can handle, vs. how many concurrent HTTP requests that same server can handle, over the same pipe. That order-of-magnitude difference on the same hardware/connection, indicates that performance isn't a function of hardware or Internet connectivity, so much as architecture. > > So, if I create an application that is fully REST compatible, does > this include a "guarantee" for emphasizing the above stated features? > REST guarantees that "optimizing behavior within the model will result in optimum behavior within the deployed Web architecture". A model is required, because there are just too many variables in the deployed infrastructure for any meaningful testing protocol to account for. > > I really like true-REST. However, the question is still: do we really > need it? > Ask wordpress.com how well non-RESTful architecture's been working out for them. Or Facebook, or anything else which cracks under the strain of Internet scale. Yeah, Facebook sure has lots of users, but would anybody pay money for a service that slows down and breaks that frequently? It isn't the number of users that proves scalability (using the common-vernacular definition), it's proven by decreasing resources per user, as more users are added. REST is geared towards just this purpose. > > I think also Roy T. Fielding maybe stopped bothering meanwhile about > this, or? 
> Roy, and others, have decreased their participation on this list as their participation in HTTPbis has increased; in Roy's case, there's been a noticeable drop in all participation since his son was born. I don't interpret this as caring less about the subject -- also, looking back a few years, the quality of advice being given by others has gone up to the point where corrections from Roy have become less necessary. -Eric
Hi Eric, On 08.02.2011 02:05, Eric J. Bowman wrote: > > Don't forget portability, see Chapter 2.3.6. > Did I? I think Semantic Web ontologies are even more portable than media types, aren't they? However, they rely on the provided and consumable serialization media types. A client should be able to consume at least one of the provided serialization media types for Semantic Web ontologies. A server should provide as many serialization media types for Semantic Web ontologies as possible. > That any system involves domain-specific vocabulary is pretty much a > given; this is not justification for solving this problem through the > creation of domain-specific media/data types. > That's why I would prefer the application, reuse and (if needed) creation of specific, appropriate Semantic Web ontologies, rather than domain-specific media types. >> >> The costs that have to be invested into good ontology design maybe >> align with the costs of proper media type design. >> > > No, they are infinitely less. Since an ontology has no effect on > messaging amongst connectors, no standardization is required, allowing > the "invisible hand of the market" to decide which ontologies are most > useful. GoodRelations is a de-facto standard (unless and until > something better comes along), not an official one. > > An ontology is an order of magnitude less work to design than > data/media types, even without considering the high opportunity cost of > marshalling a standards effort (which takes years, followed by even > more years until new standardized media types achieve ubiquitous > deployment). In this case, the standardization is the utilization of the RDF Model. I think, e.g., that when I include an RDF triple in an HTTP Link header, then this statement could also be interpreted by a connector, couldn't it? I truly believe that good ontology design can be quite expensive and could also take several years, but that's common evolution. 
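[Editorial sketch] The RDF-triple-in-a-Link-header idea above lines up with RFC 5988 Web Linking, where a Link header effectively serializes the triple (request URI, relation, target). Below is a rough sketch with hypothetical URIs and a deliberately simplified parser; it is not a full RFC 5988 implementation.

```python
# Format and parse a simple single-valued Link header of the form
#   <target>; rel="relation"
# which any connector can read without knowing the application's domain.
import re

def format_link(target, rel):
    """Serialize one (target, rel) pair as a Link header value."""
    return f'<{target}>; rel="{rel}"'

def parse_link(header):
    """Extract (target, rel) from a simple single-valued Link header,
    or return None if it doesn't match this simplified shape."""
    m = re.match(r'<([^>]+)>;\s*rel="([^"]+)"', header)
    return (m.group(1), m.group(2)) if m else None

header = format_link("http://example.org/authors/1", "author")
print(header)              # <http://example.org/authors/1>; rel="author"
print(parse_link(header))  # ('http://example.org/authors/1', 'author')
```

Combined with the request URI as the triple's subject, this gives intermediaries a statement they can act on without any domain-specific vocabulary.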
An ontology with a quite long evolution is for example the FOAF vocabulary. It's the inherent freedom given to us by the Semantic Web that we need only define de-facto standards for domain-specific ontologies, rather than "true standards". Furthermore, this is a requirement when trying to model parts of the world. There will never be an ultimate solution for a concrete domain-specific ontology, nor for a graspable overall ontology. > > Which is why I believe that REST must be the basis for the Semantic Web. I guess that's the point of view I've tried to argue all along in the recent discussions here, isn't it? ;) >> Furthermore, "turning a HTTP-based Type || API into a REST API might >> be as easy as deleting the API documentation" is a bit paradox from >> my point of view. >> > > In some cases, it's true. This is kinda the point of the hypertext > constraint -- if the API is self-documenting, what's left to explain? Following Martin Fowler's explanation of self-documenting protocols[1], I would say: yes, it might be quite interesting for a developer to explore the protocol. However, it's hard to imagine that such exploration, which should give hints, would satisfy a developer who wants to use an API for programming an application (I guess client developers rarely keep an eye out for unknown links, but rather for announcements of new API functionality). I think a developer likes to use parts of that API as needed, mainly by reading the documentation and viewing examples (which are usually even more important). Okay, maybe I can include these examples in the "exploring mode". Although pure exploring still sounds a bit like hacking to me ;) To summarize, I currently don't think that self-documentation could outperform additionally provided documentation. For example, machine-processable URI template application descriptions (cf. [2]) might be an option that could allow a server to also change its URI scheme without breaking the clients. 
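[Editorial sketch] The URI-template idea in the last sentence can be shown minimally: the server publishes a template, the client expands it, and the server is free to change its URI layout just by publishing a new template. This is a level-1-only sketch in the spirit of RFC 6570; the template and variable names are hypothetical.

```python
# Minimal level-1 URI Template expansion: replace each {name} with the
# percent-encoded value of the corresponding variable.
import re
from urllib.parse import quote

def expand(template, variables):
    """Expand {name} placeholders in a URI template."""
    return re.sub(r"\{(\w+)\}",
                  lambda m: quote(str(variables[m.group(1)]), safe=""),
                  template)

print(expand("http://example.org/orders/{id}", {"id": 42}))
# If the server later publishes "http://example.org/o/{id}" instead,
# the same client code keeps working with the new template.
```

The template itself travels in a representation (or a description document), so the URI scheme stays a server-side implementation detail.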
> Browsers are coded against standardized media types; it's this shared > understanding of the processing model which makes out-of-band > documentation mostly irrelevant. Yes, but Web browsers are simply one application type (it would be nice to exclude this specific type from further discussions of this kind ;) ; however, this shouldn't mean excluding the utilization of hypermedia - or does such utilization even turn my application into a Web-browser-like application?). For example, I thought recently about the combination of XAML and RDFa. On the one side, there is the quite comprehensive user interface markup language XAML (there are many comparisons of XAML and HTML out there). On the other side, there is RDFa to bring the power of (possibly) comprehensive resource descriptions into other markup languages (see [3]). Maybe a combination of them would be delicious. >> The given examples for REST are not really APIs, or? >> > > Most examples of REST are examples of things which aren't really REST. > Furthermore, just what sort of canonical example are you expecting from > a tool which may be used to guide the development of transfer protocols > like HTTP, application protocols like AtomPub, and APIs which implement > one or even both of the former? More detailed response pending for the > other thread on this topic, still... > I think you evade my question here. Maybe I'll find an answer in your response to the other thread. >> >> 3. Just a small issue: better "REST might be the best solution", >> rather than "REST is the best solution". I guess, 'is' requires a >> kind of proof, or? Could we really provide a complete proof about >> this? - I currently don't think so. >> > > Disagree. If you're developing a distributed hypertext system, then > Roy's thesis is scientific proof that there's a set of constraints > which describe an optimal behavioral model. Said thesis is subject to > falsification, which hasn't occurred. 
The constraint that's most > susceptible to falsification would be client-server, in an age of P2P > protocols, but I've not seen it. Considering all the peer review Roy's > thesis has had, with proof evident in the success of the Web, I'd say > that if anything was overlooked or outright wrong, somebody would have > spotted it by now. Okay, then we'll have to agree to disagree here ;) I still stand by my argumentation regarding this issue. Cheers, Bob [1] http://martinfowler.com/articles/richardsonMaturityModel.html [2] http://www.semanticoverflow.com/questions/2858/uri-template-specifications-for-linked-data-publishing [3] http://www.w3.org/TR/rdfa-core/
Eric, Thank you very, very much for that rather long but quite comprehensive answer, which includes many interesting aspects for me while trying to understand REST. I think my hunger (re. grasping REST) is now satisfied and the important questions regarding REST understanding are answered, at least for me. On 08.02.2011 08:03, Eric J. Bowman wrote: > That's because you're thinking about REST in terms of implementing a > set of architectural constraints, i.e. as a result not a tool. Well, that might be a misunderstanding here. I always viewed, view, and will view REST as a tool when trying to apply its constraints to my application implementation as needed or appropriate. I've just wondered a bit about the statement regarding applying the whole set of REST constraints, and interpreted it as a kind of must for reaching the propagated system properties. Anyway, I think that's clear to me now. Btw, I believe that RDF-powered descriptions can also be "Internet scale" ;) So thanks again for all your efforts here. Cheers, Bob
The First Workshop on Adoption of REST and LinkedData (ARALD 2011)
in conjunction with SAINT 2011 Munich, Germany (July 18-22, 2011)
(The 11th IEEE/IPSJ International Symposium on Applications and the
Internet www.saintconference.org/) (ARALD 2011 is one of the SAINT
2011 Workshops.)
Important Dates
Workshop Paper Submission: February 28, 2011
Workshop Author Notification: March 28, 2011
Paper Final Manuscript: May 2, 2011
Paper Author Registration Due: May 2, 2011
(ONE Registration per ONE paper (Full-rate for ONE Workshop paper))
Theme
The concept of Web as middleware gains further appeal with a wider
audience. Many
services, e.g. Twitter, see significant traffic via their API. The
demand for data
level access to services on the Web through APIs is an indicator of
the emergent Web
of Data. The progress of the Web of Data is accompanied by gathering
momentum behind
REST in the enterprise and as the foundation of many Cloud related
interface
specification (and possibly standardization) activities. One aspect of
RESTful web
based M2M interactions concerns the ongoing discussion about data
models. One
contender in this area is RDF, which has been reinvigorated in recent
years by the
campaign for LinkedData, which has placed a stronger emphasis on
actual published
data. This can be seen in various Governmental initiatives publishing
public data; a
high-profile demonstration of the value of connecting data across
otherwise separated
systems. One theory is that domain-specific data formats will slowly
start to yield to
the necessity for a common data format, especially, as the continuing
trend towards a
Web of data encourages the conception of "bigger" applications with
increased
information and reach. We welcome papers from researchers developing
web-based APIs
and data intensive Cloud services; associated methodologies,
strategies, business
models, challenges and data models.
Topics of Interest
Business models and applications of LinkedData
Adoption of Semantic Web technologies by enterprises
APIs for LinkedData
Dynamic LinkedData
Enterprise software and LinkedData
Application of LinkedData in the Enterprise
REST and Cloud standardisation
RESTful API development methodologies
Submission
Workshop paper submission will be done electronically. Information
for prospective authors, including paper format and instructions can
be found on the SAINT Web page.
ARALD Web Page: http://snowman.nagaokaut.ac.jp/saint/workshop-CFPaper/ws-9.html
SAINT Web Page: http://snowman.nagaokaut.ac.jp/saint/
Submissions managed by EasyChair.
http://www.easychair.org/conferences/?conf=saint2011
Supports
This workshop is partially supported by Japan and France JST-ANR joint
grant-in-aid
for Peta-Flow. (Principal Researcher: Dr. Shinji Shimojo, NICT, Japan
and Dr.A.Hirtum,
Grenoble Univ., France).
Organizer
Roger Menday, Fujitsu Laboratories of Europe, UK (roger.menday@...
)
Program Committee
Carlos Buil Aranda, Universidad Politécnica de Madrid, Spain
Donal Fellows, University of Manchester, UK
Stavros Isaiadis, Fujitsu
Alexander Papaspyrou, Technische Universität Dortmund, Germany
Michael Parkin, Tilburg University
Bernd Schuller, Juelich Supercomputing Centre, Germany
Axel Tanner, IBM Zuerich
David Wood, Talis
Roger Menday, roger.menday@...
Tel: +44 (0) 208 606 4534
Fujitsu Laboratories of Europe Limited
Hayes Park Central, Hayes End Road, Hayes, Middlesex, UB4 8FE, U.K.
On Feb 7, 2011, at 10:31 PM, Jakob Strauch wrote: > > At the moment the trade-off of being fully REST-complient is not high enough considering the investment. Hmm - which investment are you talking about? Are you saying that you think it is 'cheaper' to define a bunch of XMLs (one per service) and a bunch of service descriptions (maybe one WADL per service) that defining one (or very few) domain-related (and *not* service specific) media types? In my opinion the 'high cost of doing real REST' is nothing but a myth - maybe due to a still very common lack of what 'real' REST actually involves. Jan > But thats for my own (business) scenario, where i control the clients, too. > > By the way, i want to thank all participants in this disscussion group for their valuable postings. I´m following this group for some months. > > > Cheers, > jakob > > --- In rest-discuss@yahoogroups.com, Bob Ferris <zazi@...> wrote: >> >> Am 05.02.2011 17:26, schrieb Jakob Strauch: >>> >>> [1] http://nordsc.com/ext/classification_of_http_based_apis.html >> >> While rereading the referenced classification from above, I stumbled >> about a few issues: >> >> 1. When describing a domain it's often not a main issue of missing media >> types. I think, one would rather quickly find an appropriated generic >> media type. However, the issue of describing a domain lies not only at >> the representation and process model level (which is generally also >> independent from the representation level, or?), the description of the >> domain itself is thereby very important - modelling the concepts of for >> instance 'user', 'order', 'offer' etc. >> This is for me exactly the point where Semantic Web knowledge >> representation languages on top of RDF Model can come into play. So I >> can still use a generic media type, i.e. RDF Model, for realizing a >> common description. However, serialize these descriptions into specific >> representation media types, i.e. 
XHTML+RDFa, and thereby maybe also >> extending the process model (based on a general description). Layering >> of media types was already propagated by Roy T. Fielding, or? >> All in all, I think, it depends more on the degree of existing >> appropriated Semantic Web ontologies to model (parts of) a domain, >> rather then on existing media types. While there can (theoretically) >> exist a huge variety of both and such a huge amount would decrease the >> simplicity property in both cases; I nevertheless think, that it is >> maybe better to have less media types and more (especially reusable) >> Semantic Web ontologies, rather then an equal high amount of media types. >> To summarize, I think the application, reutilization and (if needed) >> creation of Semantic Web ontologies fit quite well for emphasizing the >> desired properties that should be reached when implementing the REST >> architectural style. The costs that have to be invested into good >> ontology design maybe align with the costs of proper media type design. >> You maybe still addressed this issue somehow, when saying "media type >> (and link relation etc.) specifications". Anyway, I think, it might be >> good to make this concern a bit more explicit. You might not explicitly >> propagate the utilization of Semantic Web ontologies, but please make >> aware of the general existing 'description level' (cf. [1]), which (from >> my point of view) exists already, but is then often more implicit than >> explicit available. >> >> 2. I think, fulfilling the hypermedia as the engine of application state >> constraint is maybe still the hardest part. I cannot really imagine that >> "a transition from HTTP-based Type || to REST at a later point in time, >> however, is rather easy". Furthermore, "turning a HTTP-based Type || API >> into a REST API might be as easy as deleting the API documentation" is a >> bit paradox from my point of view. 
When I would remove the API >> documentation than this application cannot really be an API any more, >> or? When I would like to program against (?) something ( ;) ), I have to >> know how I could do that. For instance a web browser do not really >> program against (?) something. >> I think the term 'REST API' might be a bit inappropriate here (I still >> doubt that a implementation of service, which is fully REST compatible, >> is possible). The given examples for REST are not really APIs, or? - >> AtomPub is a protocol, OpenSearch a specification (collection of media >> type specifications), "RESTifying Procurement" an approach of a >> proof-of-concept REST "service"(?) (I couldn't really figure out the >> current state of that project, however it looks quite interesting). >> Although, the chosen descriptor is still 'REST' and not "REST API". So >> one might conclude that this could be a bit inappropriated >> classification, but the descriptions are explicitly suggesting the >> application of REST principles on the implementation of (Web) services ;) >> >> 3. Just a small issue: better "REST might be the best solution", rather >> than "REST is the best solution". I guess, 'is' requires a kind of >> proof, or? Could we really provide a complete proof about this? - I >> currently don't think so. >> >> >> That's all for the moment. >> >> Cheers, >> >> >> Bob >> >> [1] >> http://infoserviceonto.smiy.org/2010/11/25/on-resources-information-resources-and-documents/ >> > > > > > ------------------------------------ > > Yahoo! Groups Links > > >
"It's the Architecture, Stupid!" Previously awarded to Facebook for DDoS'ing itself as a result of their aversion to users encountering 500 responses. Award #2 goes to Gawker: http://isolani.co.uk/blog/javascript/BreakingTheWebWithHashBangs G'head, fight the Web... -Eric
The visibility of an interface's properties is an important aspect of coding against it. In theory I need only a single starting URL to "explore" a RESTful system. HEAD and OPTIONS can be used to retrieve supported HTTP operations, the media type, and supported security mechanisms. This can be subsumed as an interface contract. What about the possible errors? Well, maybe I have to expect all HTTP response codes. At least the common ones (404 etc.). In practice, only a small subset of response codes will be supported. But how do you tell the clients which ones? Are response codes not part of the "contract"? Or are all responses implicitly part of the contract? Jakob
On Wed, Feb 9, 2011 at 10:45 AM, Jakob Strauch <jakob.strauch@...> wrote: > > > The Visibility of an interface´s properties is an important aspect to code > against it. In theory i need only a single starting URL to "explore" a > RESTful system. HEAD, OPTIONS can be used to retrieve supported HTTP > operations, the mediatype, supported security mechanismens. This can > subsumed as an interface contract. > > What about the possible errors? Well, maybe i have to expect all HTTP > response codes. At least the common one (404 etc.). In practice, only a smal > subset of response codes will be supported. But how do you tell the clients, > which ones? Are response codes not part of the "contract". Or are even all > response part implicit part of the contract? > > If you are accessing your REST service across HTTP, a robust client should be prepared to handle *any* of the defined HTTP status codes, not just the ones that might be documented by the service you're contacting. After all, it's not just the application that can return such errors -- for example, think of a 504 (Gateway Timeout) returned by an intermediate proxy server that is overloaded. In practice, you certainly want to do whatever you can to respect the semantics of commonly used status codes, but it is a wild world out there, and pretty much anything is possible. Jakob > > Craig > >
On Feb 9, 2011, at 7:45 PM, Jakob Strauch wrote: > > What about the possible errors? Well, maybe i have to expect all HTTP response codes. Yep! HTTP *is* the interface. You have to expect all of it. > At least the common one (404 etc.). In practice, only a smal subset of response codes will be supported. Why? And besides - who knows what intermediaries sit in the middle and produce responses the service developer never dreamed of? > But how do you tell the clients, which ones? RFC2616 tells them. There is no service specific description in a RESTful system. > Are response codes not part of the "contract". The contract is HTTP. No more no less. That is what clients have to understand. Jan P.S. However, in order to develop clients, you must have some sort of knowledge about the media types and link relations to expect. That is what the global registry (IANA) is for. If you apply REST inside the enterprise, an enterprise-global registry will suffice, meaning you do not necessarily have to register all your media types with IANA....but you can, of course. > Or are even all response part implicit part of the contract? > > > Jakob > > > > > > ------------------------------------ > > Yahoo! Groups Links > > >
> Hmm - which investment are you talking about? > Are you saying that you think it is 'cheaper' to define a bunch of XMLs (one per service) and a bunch of service descriptions (maybe one WADL per service) that defining one (or very few) domain-related (and *not* service specific) media types? I define entities (and link relations) and let code generators do the dirty work. In early development iterations I use a common media type, later specific media types (if they are not applicable in the beginning). I don't use WADL. But don't you need some kind of service descriptions anyway? (Even though everything is "self-descriptive".) Jakob > > > > But thats for my own (business) scenario, where i control the clients, too. > > > > By the way, i want to thank all participants in this disscussion group for their valuable postings. I´m following this group for some months. > > > > > > Cheers, > > jakob > > > > --- In rest-discuss@yahoogroups.com, Bob Ferris <zazi@> wrote: > >> > >> Am 05.02.2011 17:26, schrieb Jakob Strauch: > >>> > >>> [1] http://nordsc.com/ext/classification_of_http_based_apis.html > >> > >> While rereading the referenced classification from above, I stumbled > >> about a few issues: > >> > >> 1. When describing a domain it's often not a main issue of missing media > >> types. I think, one would rather quickly find an appropriated generic > >> media type. However, the issue of describing a domain lies not only at > >> the representation and process model level (which is generally also > >> independent from the representation level, or?), the description of the > >> domain itself is thereby very important - modelling the concepts of for > >> instance 'user', 'order', 'offer' etc. > >> This is for me exactly the point where Semantic Web knowledge > >> representation languages on top of RDF Model can come into play. So I > >> can still use a generic media type, i.e. RDF Model, for realizing a > >> common description. 
However, serialize these descriptions into specific > >> representation media types, i.e. XHTML+RDFa, and thereby maybe also > >> extending the process model (based on a general description). Layering > >> of media types was already propagated by Roy T. Fielding, or? > >> All in all, I think, it depends more on the degree of existing > >> appropriated Semantic Web ontologies to model (parts of) a domain, > >> rather then on existing media types. While there can (theoretically) > >> exist a huge variety of both and such a huge amount would decrease the > >> simplicity property in both cases; I nevertheless think, that it is > >> maybe better to have less media types and more (especially reusable) > >> Semantic Web ontologies, rather then an equal high amount of media types. > >> To summarize, I think the application, reutilization and (if needed) > >> creation of Semantic Web ontologies fit quite well for emphasizing the > >> desired properties that should be reached when implementing the REST > >> architectural style. The costs that have to be invested into good > >> ontology design maybe align with the costs of proper media type design. > >> You maybe still addressed this issue somehow, when saying "media type > >> (and link relation etc.) specifications". Anyway, I think, it might be > >> good to make this concern a bit more explicit. You might not explicitly > >> propagate the utilization of Semantic Web ontologies, but please make > >> aware of the general existing 'description level' (cf. [1]), which (from > >> my point of view) exists already, but is then often more implicit than > >> explicit available. > >> > >> 2. I think, fulfilling the hypermedia as the engine of application state > >> constraint is maybe still the hardest part. I cannot really imagine that > >> "a transition from HTTP-based Type || to REST at a later point in time, > >> however, is rather easy". 
Furthermore, "turning a HTTP-based Type || API > >> into a REST API might be as easy as deleting the API documentation" is a > >> bit paradox from my point of view. When I would remove the API > >> documentation than this application cannot really be an API any more, > >> or? When I would like to program against (?) something ( ;) ), I have to > >> know how I could do that. For instance a web browser do not really > >> program against (?) something. > >> I think the term 'REST API' might be a bit inappropriate here (I still > >> doubt that a implementation of service, which is fully REST compatible, > >> is possible). The given examples for REST are not really APIs, or? - > >> AtomPub is a protocol, OpenSearch a specification (collection of media > >> type specifications), "RESTifying Procurement" an approach of a > >> proof-of-concept REST "service"(?) (I couldn't really figure out the > >> current state of that project, however it looks quite interesting). > >> Although, the chosen descriptor is still 'REST' and not "REST API". So > >> one might conclude that this could be a bit inappropriated > >> classification, but the descriptions are explicitly suggesting the > >> application of REST principles on the implementation of (Web) services ;) > >> > >> 3. Just a small issue: better "REST might be the best solution", rather > >> than "REST is the best solution". I guess, 'is' requires a kind of > >> proof, or? Could we really provide a complete proof about this? - I > >> currently don't think so. > >> > >> > >> That's all for the moment. > >> > >> Cheers, > >> > >> > >> Bob > >> > >> [1] > >> http://infoserviceonto.smiy.org/2010/11/25/on-resources-information-resources-and-documents/ > >> > > > > > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > >
On Feb 9, 2011, at 8:09 PM, Jakob Strauch wrote: >> Hmm - which investment are you talking about? >> Are you saying that you think it is 'cheaper' to define a bunch of XMLs (one per service) and a bunch of service descriptions (maybe one WADL per service) that defining one (or very few) domain-related (and *not* service specific) media types? > > I define entities (and link relations) and let code generators do the dirty work. in early development iterations i use a common media type, later specific media types (if not applicable in the beginning) Sounds ok - but note that normally, there is no business entity <-> representation mapping. Think of representations more in terms of the input and output of the service boundary (your resources). > > i don´t use WADL. but don´t you need some kind of service descriptions anyway? NO! NO! But do not confuse 'service description' with knowledge/definitions of the kinds of services you have. If you want to purchase a book, you do not direct your browser to *any* HTTP service but to one that you happen to know is selling books. Service type/kind though is completely orthogonal to the technical means of how software components interact with a service. Your, e.g., inventory service has the nature of being an inventory service (however your org defines that). At the same time, it might well have a REST interface, an event-based interface and a WS-* interface. Your RESTful client needs to know two things: a) that it wants to interact with *that* service - this is configuration (e.g. 'connect to a service of type 'inventory')[1] b) the media type(s) to expect and how to use them Jan [1] Maybe implemented like this: http://www.infoq.com/articles/rest-discovery-dns > (though all "self-descriptnives") > > > Jakob > > > > > > > >> >> >>> But thats for my own (business) scenario, where i control the clients, too. >>> >>> By the way, i want to thank all participants in this disscussion group for their valuable postings. 
I´m following this group for some months. >>> >>> >>> Cheers, >>> jakob >>> >>> --- In rest-discuss@yahoogroups.com, Bob Ferris <zazi@> wrote: >>>> >>>> Am 05.02.2011 17:26, schrieb Jakob Strauch: >>>>> >>>>> [1] http://nordsc.com/ext/classification_of_http_based_apis.html >>>> >>>> While rereading the referenced classification from above, I stumbled >>>> about a few issues: >>>> >>>> 1. When describing a domain it's often not a main issue of missing media >>>> types. I think, one would rather quickly find an appropriated generic >>>> media type. However, the issue of describing a domain lies not only at >>>> the representation and process model level (which is generally also >>>> independent from the representation level, or?), the description of the >>>> domain itself is thereby very important - modelling the concepts of for >>>> instance 'user', 'order', 'offer' etc. >>>> This is for me exactly the point where Semantic Web knowledge >>>> representation languages on top of RDF Model can come into play. So I >>>> can still use a generic media type, i.e. RDF Model, for realizing a >>>> common description. However, serialize these descriptions into specific >>>> representation media types, i.e. XHTML+RDFa, and thereby maybe also >>>> extending the process model (based on a general description). Layering >>>> of media types was already propagated by Roy T. Fielding, or? >>>> All in all, I think, it depends more on the degree of existing >>>> appropriated Semantic Web ontologies to model (parts of) a domain, >>>> rather then on existing media types. While there can (theoretically) >>>> exist a huge variety of both and such a huge amount would decrease the >>>> simplicity property in both cases; I nevertheless think, that it is >>>> maybe better to have less media types and more (especially reusable) >>>> Semantic Web ontologies, rather then an equal high amount of media types. 
>>>> To summarize, I think the application, reutilization and (if needed) >>>> creation of Semantic Web ontologies fit quite well for emphasizing the >>>> desired properties that should be reached when implementing the REST >>>> architectural style. The costs that have to be invested into good >>>> ontology design maybe align with the costs of proper media type design. >>>> You maybe still addressed this issue somehow, when saying "media type >>>> (and link relation etc.) specifications". Anyway, I think, it might be >>>> good to make this concern a bit more explicit. You might not explicitly >>>> propagate the utilization of Semantic Web ontologies, but please make >>>> aware of the general existing 'description level' (cf. [1]), which (from >>>> my point of view) exists already, but is then often more implicit than >>>> explicit available. >>>> >>>> 2. I think, fulfilling the hypermedia as the engine of application state >>>> constraint is maybe still the hardest part. I cannot really imagine that >>>> "a transition from HTTP-based Type || to REST at a later point in time, >>>> however, is rather easy". Furthermore, "turning a HTTP-based Type || API >>>> into a REST API might be as easy as deleting the API documentation" is a >>>> bit paradox from my point of view. When I would remove the API >>>> documentation than this application cannot really be an API any more, >>>> or? When I would like to program against (?) something ( ;) ), I have to >>>> know how I could do that. For instance a web browser do not really >>>> program against (?) something. >>>> I think the term 'REST API' might be a bit inappropriate here (I still >>>> doubt that a implementation of service, which is fully REST compatible, >>>> is possible). The given examples for REST are not really APIs, or? - >>>> AtomPub is a protocol, OpenSearch a specification (collection of media >>>> type specifications), "RESTifying Procurement" an approach of a >>>> proof-of-concept REST "service"(?) 
(I couldn't really figure out the >>>> current state of that project, however it looks quite interesting). >>>> Although, the chosen descriptor is still 'REST' and not "REST API". So >>>> one might conclude that this could be a bit inappropriated >>>> classification, but the descriptions are explicitly suggesting the >>>> application of REST principles on the implementation of (Web) services ;) >>>> >>>> 3. Just a small issue: better "REST might be the best solution", rather >>>> than "REST is the best solution". I guess, 'is' requires a kind of >>>> proof, or? Could we really provide a complete proof about this? - I >>>> currently don't think so. >>>> >>>> >>>> That's all for the moment. >>>> >>>> Cheers, >>>> >>>> >>>> Bob >>>> >>>> [1] >>>> http://infoserviceonto.smiy.org/2010/11/25/on-resources-information-resources-and-documents/ >>>> >>> >>> >>> >>> >>> ------------------------------------ >>> >>> Yahoo! Groups Links >>> >>> >>> >> > > > > > ------------------------------------ > > Yahoo! Groups Links > > >
Jakob: HTTP Response codes are protocol-level elements. All protocol-compliant clients need to be coded to recognize and respond appropriately to them per the RFC. Most protocol-level response codes allow for servers to also return entity bodies that contain *application-level* information. Clients that are coded to understand the application-level protocol will be responsible for understanding these particular entity bodies, just as they understand the "non-error" entity bodies. Coding application-level protocol details is not covered in REST or the HTTP RFC. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Wed, Feb 9, 2011 at 13:55, Jan Algermissen <algermissen1971@...> wrote: > > On Feb 9, 2011, at 7:45 PM, Jakob Strauch wrote: > >> >> What about the possible errors? Well, maybe i have to expect all HTTP response codes. > > Yep! HTTP *is* the interface. You have to expect all of it. > >> At least the common one (404 etc.). In practice, only a smal subset of response codes will be supported. > > Why? And besides - who knows what intermediaries sit in the middle and produce responses the service developer never dreamed of? > > >> But how do you tell the clients, which ones? > > RFC2616 tells them. There is no service specific description in a RESTful system. > >> Are response codes not part of the "contract". > > The contract is HTTP. No more no less. That is what clients have to understand. > > Jan > > P.S. However, in order to develop clients, you must have some sort of knowledge about the media types and link relations to expect. That is what the global registry (IANA) is for. If you apply REST inside the enterprise, an enterprise-global registry will suffice, meaning you do not necessarily have to register all your media types with IANA....but you can, of course. > > > > > > >> Or are even all response part implicit part of the contract? 
>> >> >> Jakob >> >> >> >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
On Feb 9, 2011, at 8:38 PM, mike amundsen wrote: > Jakob: > > HTTP Response codes are protocol-level elements. All > protocol-compliant clients need to be coded to recognize and respond > appropriately to them per the RFC. > > Most protocol-level response codes allow for servers to also return > entity bodies that contain *application-level* information. Clients > that are coded to understand the application-level protocol will be > responsible for understanding these particular entity bodies, just as > they understand the "non-error" entity bodies. > > Coding application-level protocols detail is not covered in REST or > the HTTP RFC. Good comment, Mike. Jakob, note that the body you receive in the case of 4xx also contributes to application state. 4xx does not mean that the communication / the application failed. It just means that the intended interaction result was not achieved. The body tells you what to do next. If your automated user agent understands the media type of that body, it can probably take a sensible action. Even something as simple as sending the body of e.g. a 406 response to the IT helpdesk with an incident report is still WAY BETTER than an RPC call that just dies upon you. Jan > > mca > http://amundsen.com/blog/ > http://twitter.com@mamund > http://mamund.com/foaf.rdf#me > > > #RESTFest 2010 > http://rest-fest.googlecode.com > > > > > On Wed, Feb 9, 2011 at 13:55, Jan Algermissen <algermissen1971@...> wrote: >> >> On Feb 9, 2011, at 7:45 PM, Jakob Strauch wrote: >> >>> >>> What about the possible errors? Well, maybe i have to expect all HTTP response codes. >> >> Yep! HTTP *is* the interface. You have to expect all of it. >> >>> At least the common one (404 etc.). In practice, only a smal subset of response codes will be supported. >> >> Why? And besides - who knows what intermediaries sit in the middle and produce responses the service developer never dreamed of? >> >> >>> But how do you tell the clients, which ones? >> >> RFC2616 tells them. 
There is no service specific description in a RESTful system. >> >>> Are response codes not part of the "contract". >> >> The contract is HTTP. No more no less. That is what clients have to understand. >> >> Jan >> >> P.S. However, in order to develop clients, you must have some sort of knowledge about the media types and link relations to expect. That is what the global registry (IANA) is for. If you apply REST inside the enterprise, an enterprise-global registry will suffice, meaning you do not necessarily have to register all your media types with IANA....but you can, of course. >> >> >> >> >> >> >>> Or are even all response part implicit part of the contract? >>> >>> >>> Jakob >>> >>> >>> >>> >>> >>> ------------------------------------ >>> >>> Yahoo! Groups Links >>> >>> >>> >> >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> >>
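Jan's point — that a 4xx body contributes to application state and that the client's behavior hinges on whether it understands the body's media type — can be sketched as a dispatch on the error response's Content-Type. The media type name and the handler outcomes below are hypothetical, invented for illustration only.

```python
# Dispatch on the media type of an error body; even the fallback
# (filing an incident with the raw body) beats an RPC call that
# just dies on you. Media type and outcomes are illustrative.

def handle_error_body(content_type: str, body: str) -> str:
    """Decide the next step from a 4xx/5xx response body."""
    handlers = {
        # a hypothetical hypermedia error format the client understands
        "application/vnd.example-error+xml": lambda b: "follow-advice-link",
        # anything the client can at least render for a human
        "text/html": lambda b: "show-to-user",
    }
    handler = handlers.get(content_type)
    if handler:
        return handler(body)
    # unknown media type: preserve the body and report it
    return "report-to-helpdesk"

print(handle_error_body("text/html", "<p>Not acceptable</p>"))
```

The key property is that the error path is driven by the response itself, not by a service-specific description — exactly the "contract is HTTP plus media types" stance above.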
> Jakob, note that the body you receive in the case of 4xx also > contributes to application state. Ah yes, this sounds plausible. In "application" error cases I already send some detail information within the body. But I didn't think of also using link relations inside the answer, as I already do in the 2xx cases. (The RPC style is very present in a programmer's mind :-) ) This leads me to the question whether some generic media type exists for an application error state, and whether it makes sense to provide/use/invent one at all? This could be a good starting point for designing application (fault) state... Jakob --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > > On Feb 9, 2011, at 8:38 PM, mike amundsen wrote: > > > Jakob: > > > > HTTP Response codes are protocol-level elements. All > > protocol-compliant clients need to be coded to recognize and respond > > appropriately to them per the RFC. > > > > Most protocol-level response codes allow for servers to also return > > entity bodies that contain *application-level* information. Clients > > that are coded to understand the application-level protocol will be > > responsible for understanding these particular entity bodies, just as > > they understand the "non-error" entity bodies. > > > > Coding application-level protocols detail is not covered in REST or > > the HTTP RFC. > > Good comment, Mike. > > Jakob, note that the body you receive in the case of 4xx also contributes to application state. 4xx does not mean that the communication / the application failed. It just means that the intended interaction result was not achieved. The body tells you what to do next. If your automated user agent understands the media type of that body, it can probably take a sensible action. > > Even something simple as sending the body of e.g. a 406 response to the IT helpdesk with an incident report is still WAY BETTER than an RPC call that just dies upon you. 
> > Jan > > > > > > > > > mca > > http://amundsen.com/blog/ > > http://twitter.com@mamund > > http://mamund.com/foaf.rdf#me > > > > > > #RESTFest 2010 > > http://rest-fest.googlecode.com > > > > > > > > > > On Wed, Feb 9, 2011 at 13:55, Jan Algermissen <algermissen1971@...> wrote: > >> > >> On Feb 9, 2011, at 7:45 PM, Jakob Strauch wrote: > >> > >>> > >>> What about the possible errors? Well, maybe i have to expect all HTTP response codes. > >> > >> Yep! HTTP *is* the interface. You have to expect all of it. > >> > >>> At least the common one (404 etc.). In practice, only a smal subset of response codes will be supported. > >> > >> Why? And besides - who knows what intermediaries sit in the middle and produce responses the service developer never dreamed of? > >> > >> > >>> But how do you tell the clients, which ones? > >> > >> RFC2616 tells them. There is no service specific description in a RESTful system. > >> > >>> Are response codes not part of the "contract". > >> > >> The contract is HTTP. No more no less. That is what clients have to understand. > >> > >> Jan > >> > >> P.S. However, in order to develop clients, you must have some sort of knowledge about the media types and link relations to expect. That is what the global registry (IANA) is for. If you apply REST inside the enterprise, an enterprise-global registry will suffice, meaning you do not necessarily have to register all your media types with IANA....but you can, of course. > >> > >> > >> > >> > >> > >> > >>> Or are even all response part implicit part of the contract? > >>> > >>> > >>> Jakob > >>> > >>> > >>> > >>> > >>> > >>> ------------------------------------ > >>> > >>> Yahoo! Groups Links > >>> > >>> > >>> > >> > >> > >> > >> ------------------------------------ > >> > >> Yahoo! Groups Links > >> > >> > >> > >> >
Hi Jakob, --- In rest-discuss@yahoogroups.com, "Jakob Strauch" <jakob.strauch@...> wrote: > This leads me to the question, if some generic media type exists > for an application error state, and if it makes sense to provide/use > /invent one at all? This could be a good starting point for > designing application (fault) state... What about utilizing RDF Model (and its related generic media type for, erm, _resource descriptions_) and Semantic Web knowledge representation languages and appropriate domain-specific ontologies on top of RDF Model for that issue? The HTTP Vocabulary[1] might be a good starting point here. Cheers, Bob PS: Please, don't get me wrong here. However, I keep seeing people here asking for media types that are appropriate for describing specific issues. With RDF we already have such a generic media type for describing resources. What we should do instead is look for appropriate Semantic Web ontologies and utilize them for our (instance) descriptions. And if we really can't find an appropriate Semantic Web ontology that matches our specific purpose, then (and only then) we might have to create a new one. [1] http://www.w3.org/2006/http#
Jakob: There is no need for a complete media-type dedicated to representing application-level error information. Every media-type should have a way to represent these app-level errors. Subbu Allamaraju has some good guidance (that will work w/ any media type) on representing errors[1] in his RESTful Web Services Cookbook. [1] http://my.safaribooksonline.com/book/web-development/web-services/9780596809140/designing-representations/recipe-how-to-return-errors mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Thu, Feb 10, 2011 at 07:07, Jakob Strauch <jakob.strauch@...> wrote: >> Jakob, note that the body you receive in the case of 4xx also >> contributes to application state. > > Ah yes, this sounds plausible. In "application" error cases i already send some detail information within the body. But i didn´t think of using also link relations inside the answer, as i´m using already in 2xx cases. (The RPC style is very present in a programer´s mind :-) ) > > This leads me to the question, if some generic media type exists for an application error state, and if it makes sense to provide/use/invent one at all? This could be a good starting point for designing application (fault) state... > > > Jakob > > > --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: >> >> >> On Feb 9, 2011, at 8:38 PM, mike amundsen wrote: >> >> > Jakob: >> > >> > HTTP Response codes are protocol-level elements. All >> > protocol-compliant clients need to be coded to recognize and respond >> > appropriately to them per the RFC. >> > >> > Most protocol-level response codes allow for servers to also return >> > entity bodies that contain *application-level* information. Clients >> > that are coded to understand the application-level protocol will be >> > responsible for understanding these particular entity bodies, just as >> > they understand the "non-error" entity bodies. 
>> > >> > Coding application-level protocols detail is not covered in REST or >> > the HTTP RFC. >> >> Good comment, Mike. >> >> Jakob, note that the body you receive in the case of 4xx also contributes to application state. 4xx does not mean that the communication / the application failed. It just means that the intended interaction result was not achieved. The body tells you what to do next. If your automated user agent understands the media type of that body, it can probably take a sensible action. >> >> Even something simple as sending the body of e.g. a 406 response to the IT helpdesk with an incident report is still WAY BETTER than an RPC call that just dies upon you. >> >> Jan >> >> >> >> >> >> > >> > mca >> > http://amundsen.com/blog/ >> > http://twitter.com@mamund >> > http://mamund.com/foaf.rdf#me >> > >> > >> > #RESTFest 2010 >> > http://rest-fest.googlecode.com >> > >> > >> > >> > >> > On Wed, Feb 9, 2011 at 13:55, Jan Algermissen <algermissen1971@...> wrote: >> >> >> >> On Feb 9, 2011, at 7:45 PM, Jakob Strauch wrote: >> >> >> >>> >> >>> What about the possible errors? Well, maybe i have to expect all HTTP response codes. >> >> >> >> Yep! HTTP *is* the interface. You have to expect all of it. >> >> >> >>> At least the common one (404 etc.). In practice, only a smal subset of response codes will be supported. >> >> >> >> Why? And besides - who knows what intermediaries sit in the middle and produce responses the service developer never dreamed of? >> >> >> >> >> >>> But how do you tell the clients, which ones? >> >> >> >> RFC2616 tells them. There is no service specific description in a RESTful system. >> >> >> >>> Are response codes not part of the "contract". >> >> >> >> The contract is HTTP. No more no less. That is what clients have to understand. >> >> >> >> Jan >> >> >> >> P.S. However, in order to develop clients, you must have some sort of knowledge about the media types and link relations to expect. 
That is what the global registry (IANA) is for. If you apply REST inside the enterprise, an enterprise-global registry will suffice, meaning you do not necessarily have to register all your media types with IANA....but you can, of course. >> >> >> >> >> >> >> >> >> >> >> >> >> >>> Or are even all response part implicit part of the contract? >> >>> >> >>> >> >>> Jakob >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> ------------------------------------ >> >>> >> >>> Yahoo! Groups Links >> >>> >> >>> >> >>> >> >> >> >> >> >> >> >> ------------------------------------ >> >> >> >> Yahoo! Groups Links >> >> >> >> >> >> >> >> >> > > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
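Mike's advice — carry app-level error details inside the service's regular media type rather than inventing a dedicated error media type — combined with Jan's point about link relations in error bodies, could look like the sketch below. All field names and rel values here are invented for illustration; nothing in this thread prescribes them.

```python
# An error block embedded in the service's own representation format,
# with hypermedia links telling the client what it can do next.
# Field names and rel values are purely illustrative.

error_doc = {
    "status": "error",
    "code": "out-of-stock",                 # application-level, not HTTP-level
    "message": "Item 42 is out of stock",
    "links": [                              # next steps, as link relations
        {"rel": "similar-items", "href": "/items?similar-to=42"},
        {"rel": "notify-me", "href": "/items/42/notifications"},
    ],
}

def next_steps(doc: dict) -> list:
    """List the link relations a client could follow out of the error state."""
    return [link["rel"] for link in doc.get("links", [])]

print(next_steps(error_doc))
```

A client that understands this representation can treat the 4xx as just another application state with outbound transitions, rather than a dead end.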
My team is developing a RESTful API into a business collaboration application that you can think of as Facebook for business. There are users, groups, and news feeds for users and groups. There are also business-related objects of many types. I'm interested in getting your opinion on a design question.
When you think about getting or posting content to a news feed for a user or group, which of the below is more natural?
Option A
/feeds/entity/:id/items # to post, delete feed items or get
# group or business object related feed items
/feeds/user/:user_id/items # to get/post/delete feed items relating to a user
/users/:id # to get user profile
/users/:id/followers # to get/add followers
... there are lots of other child resources under users
/groups/:id
/groups/:id/members # to get/add followers
/object/:id # get info about the object
OR
Option B
/users/:id/feeds/items # to get/post/delete feed data
/groups/:id/feeds/items # to get/post/delete feed data
/users/:id # to get user profile
/users/:id/followers # to get/add followers
... lots of other child resources under users
/groups/:id
/groups/:id/members # to get/add followers
/object/:id/feed/items # get a feed related to an object
/object/:id # get info about the object
The advantage of A is that you can use the same URL for all posts (but not for reads, because users have different types of feeds than groups or the various business objects). The advantage of B is that it shows the type of parent the feed belongs to and may be used for reads, writes, or the other actions that live under each type of parent (like users or groups).
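For concreteness, Option B's parent-scoped layout could be captured as named URI templates on the server side, as in this illustrative Python sketch. The `expand()` helper and the template names are invented here; they are not part of any framework mentioned in the thread.

```python
# Option B as named URI templates: each feed lives under its parent,
# so the parent type is visible in the path. Names are illustrative.

TEMPLATES = {
    "user-feed-items":   "/users/{id}/feeds/items",
    "group-feed-items":  "/groups/{id}/feeds/items",
    "user-profile":      "/users/{id}",
    "user-followers":    "/users/{id}/followers",
    "group-members":     "/groups/{id}/members",
    "object-feed-items": "/object/{id}/feed/items",
}

def expand(name: str, **params: str) -> str:
    """Fill a named URI template with its parameters."""
    return TEMPLATES[name].format(**params)

print(expand("user-feed-items", id="7"))
print(expand("group-members", id="42"))
```

Keeping the templates behind names like this also eases the transition the replies below advocate: the server can change the paths freely as long as clients receive them via hypermedia rather than constructing them.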
2011/2/14 logan_henriquez <logan_henriquez@...> > > > My team is developing a RESTful API into a business collaboration > application that you can think of as Facebook for business. There are users, > groups, and news feeds for users and groups. There are also business related > objects of many types. Interested in getting your opinion on a design > question. > > When you think about getting or posting content to a news feed for a user > or group, which of the below is more natural? > > Option A > > /feeds/entity/:id/items # to post, delete feed items or get > # group or business object related feed items > /feeds/user/:user_id/items # to get/post/delete feed items relating to a > user > /users/:id # to get user profile > /users/:id/followers # to get/add followers > ... there are lots of other child resources under users > /groups/:id > /groups/:id/members # to get/add followers > /object/:id # get info about the object > > OR > > Option B > > /users/:id/feeds/items # to get/post/delete feed data > /groups/:id/feeds/items # to get/post/delete feed data > /users/:id # to get user profile > /users/:id/followers # to get/add followers > ... lots of other child resources under users > /groups/:id > /groups/:id/members # to get/add followers > /object/:id/feed/items # get a feed related to an object > /object/:id # get info about the object > > The advantage of A is that you can use the same URL for all posts (but not > for reads because users have different types of feeds than groups or the > various business objects). The advantage of the latter is it shows the type > of parent the feed belongs to and may be used for reads or writes or the > other actions that live under each type of parent (like users or groups). > Design your URLs by thinking of your entities as an implicit hierarchy, mostly derived from your domain. That should solve most of your doubts :) > > > -- Nadalin Alessandro www.odino.org www.twitter.com/_odino_
A good source for guidance on creating REST interfaces can be found on Roy Fielding's blog: http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven

Also, URI construction is not a concern of REST and can actually be an anti-pattern for RESTful implementations, as you'll end up w/ strong coupling between clients and the URI. This can make it problematic to modify the server's handling of URIs, move the server to a new location, rev the underlying foundation connector framework, etc. Another approach is to focus on the state transfers needed to support your planned interactions, map that to hypermedia controls within a media type, and let the resources identify themselves in the process.

<!-- get a feed -->
<a href="..." rel="feed" />

<!-- get a feed item -->
<a href="..." rel="item" />

<!-- get user profile -->
<a href="..." rel="user-profile" />

<!-- add follower -->
<form action="..." method="post" class="followers">
  <input type="hidden" name="this-user" value="..." />
  <input type="text" name="follower-to-add" value="..." />
</form>

By defining the state interactions in some media type (I picked HTML, but there are many others, including designing your own), you end up focusing on the important elements (exposing resources, exposing data elements to send to the server, etc.). You can then program clients to recognize the hypermedia controls in the message and act accordingly (i.e. prompt users to fill in values, invoke the proper protocol method to send the state to the server, etc.). And you can program the server to recognize the incoming state transfers, evaluate them for problems, process them, and return the proper response (including resource representations containing the resulting data, etc.). While this takes more up-front planning, it frees you to use any URI or resource model you wish in order to achieve your goal.
You can even change the URIs/resources along the way w/o invalidating any client code since clients are looking for hypermedia in responses, not URIs. I am working on an experiment related to this mode of RESTful implementation[1]. As luck would have it, the example I chose looks a bit like yours, too![2] This experiment is based on some recent work I've done but is still very new. Feel free to scan that document and ask any questions you might have. Hopefully this will give you some ideas. [1] http://amundsen.com/blog/archives/1093 [2] http://amundsen.com/hypermedia/profiles/ mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Mon, Feb 14, 2011 at 03:46, Alessandro Nadalin < alessandro.nadalin@...> wrote: > > > > > 2011/2/14 logan_henriquez <logan_henriquez@...> > > >> >> My team is developing a RESTful API into a business collaboration >> application that you can think of as Facebook for business. There are users, >> groups, and news feeds for users and groups. There are also business related >> objects of many types. Interested in getting your opinion on a design >> question. >> >> When you think about getting or posting content to a news feed for a user >> or group, which of the below is more natural? >> >> Option A >> >> /feeds/entity/:id/items # to post, delete feed items or get >> # group or business object related feed items >> /feeds/user/:user_id/items # to get/post/delete feed items relating to a >> user >> /users/:id # to get user profile >> /users/:id/followers # to get/add followers >> ... 
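The "clients are looking for hypermedia in responses, not URIs" point above can be sketched in a few lines. This is a minimal, hypothetical client helper (Python stdlib only; the rel value and URI below are made up, not from any real API) that discovers a link by its @rel instead of constructing the URI itself:

```python
from html.parser import HTMLParser

class LinkFinder(HTMLParser):
    """Collects href values of <a> elements, keyed by their rel attribute."""
    def __init__(self):
        super().__init__()
        self.links = {}

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            d = dict(attrs)
            if "rel" in d and "href" in d:
                self.links[d["rel"]] = d["href"]

def find_link(html_text, rel):
    """Return the URI advertised for a given link relation, or None."""
    parser = LinkFinder()
    parser.feed(html_text)
    return parser.links.get(rel)

# The client never builds '/users/123/feeds/items' itself;
# it asks the representation where the feed lives.
page = '<a href="/some/opaque/uri" rel="feed" />'
print(find_link(page, "feed"))  # -> /some/opaque/uri
```

Because the client keys off rel="feed" rather than a URI template, the server can restructure its URI space without breaking it.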
I would like to challenge all the REST experts out there to design the standard petstore blueprint in REST. REST has been around for a while, but there doesn't seem to be a good collection of best practices or guiding principles for designing a proper RESTful service. Make it pure REST (URIs and requests/responses) and either XML or JSON as the basic content type. If I am wrong on the collection of best practices and guiding principles, feel free to correct me and point me to them. Thanks.
Chuck, On Feb 19, 2011, at 1:50 AM, Chuck C wrote: > I would like to challenge all the REST experts out there to design the standard petstore blueprint in REST. REST has been around for a while, but there doesn't seem to be a good collection of best practices or guiding principals for designing a proper RESTful service. Make it pure REST (URI's and requests/responses) and either XML or JSON as the basic content type. > That's a great idea - can you provide an initial design? Jan > If I am wrong on the collection of best practices and guiding principals, feel free to correct me and point me to them. > > Thanks. > > > > > > > > ------------------------------------ > > Yahoo! Groups Links > > >
Sounds good. Do you have sponsorship money to cover the development, testing and publication cost? -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Chuck C Sent: 19 February 2011 00:50 To: rest-discuss@yahoogroups.com Subject: [rest-discuss] Pet Store Challenge I would like to challenge all the REST experts out there to design the standard petstore blueprint in REST. REST has been around for a while, but there doesn't seem to be a good collection of best practices or guiding principals for designing a proper RESTful service. Make it pure REST (URI's and requests/responses) and either XML or JSON as the basic content type. If I am wrong on the collection of best practices and guiding principals, feel free to correct me and point me to them. Thanks. ------------------------------------ Yahoo! Groups Links
Sebastien Lambla wrote: > Sounds good. Do you have sponsorship money to cover the development, testing and publication cost? To me this sounds like the type of thing that should be in the public domain - although it's a shame there isn't one to complement the dissertation.
Well, it would be great if it was, but without money to put in place a proper pet shop (aka pay for the development hours), it's going to be very difficult for people to commit to coding it. OSS only goes so far... -----Original Message----- From: Nathan [mailto:nathan@...] Sent: 19 February 2011 17:44 To: Sebastien Lambla Cc: Chuck C; rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Pet Store Challenge Sebastien Lambla wrote: > Sounds good. Do you have sponsorship money to cover the development, testing and publication cost? To me this sounds like the type of thing that should be in the public domain - although it's a shame there isn't one to complement the dissertation.
Chuck C's pet-store challenge <http://tech.groups.yahoo.com/group/rest-discuss/message/17351>: > I would like to challenge all the REST experts out there to design the > standard petstore blueprint in REST. > Where would one find the standard pet-store blueprint? > REST has been around for a while, but there doesn't seem to be a good > collection of best practices or guiding principals for designing a proper > RESTful service. > I thought that "Architectural Styles and the Design of Network-based Software Architectures" <http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm> described a good collection of guiding constraints for designing a proper REST-based service. > Make it pure REST (URI's and requests/responses) and either XML or JSON as > the basic content type. > How could "pure REST" involve JSON when there is no standard content type that has JSON syntax and hypertext semantics? As for your requirement that the solution involve URIs, requests, and responses, I really don't know another way when dealing with HTTP as the application protocol. (Don't include my mailbox as a recipient in public replies.)
Hi guys, I was just wondering whether what I had in mind is the most correct way to handle ESI in a RESTful way. I'm about to represent some resources as webpages, and some representations (like a homepage) are a simple mashup of various resources with ESI. The HTML looks like: https://gist.github.com/836822 Since the representations I want are some pieces of (X)HTML rendered in a global context, I would define my own hypermedia format, something like vnd.truncatedXHTML+XHTML. That's because I want to validate the representations included with ESI (with a custom DTD, obviously) without having to include N <html><head><body> tags, given N as the number of resources I'm including with ESI. Am I missing something? Did I say some rubbish? :-) -- Nadalin Alessandro www.odino.org www.twitter.com/_odino_
Hi Alessandro, I think that's a decent idea, I've been mulling over this for a while.. By doing this you are increasing the visibility of the responses which will allow your ESI intermediaries to operate more efficiently. I can see three distinct types worth keeping visible: 1. Standard (X)HTML: which require no introspection as they contain no ESI controls, and should just pass through without being processed. Use a 'normal' identifier e.g. text/html etc. 2. ESI composite: A full html document which requires introspection and ESI processing. Use a specific identifier e.g. text/html;profile=esi-composite 3. ESI fragment: An html fragment/partial for embedding in another ESI document - whilst not a full html document, these may also require introspection as they can also be composites. Use a specific identifier e.g. text/html;profile=esi-fragment or application/vnd.esi-fragment+html These esi-specific identifiers may be worth standardizing publicly as an extension to the ESI protocol. Cheers, Mike On Mon, Feb 21, 2011 at 8:56 AM, Alessandro Nadalin <alessandro.nadalin@...> wrote: > Hi guys, > > I was just wondering if something I thought was the most correct way > to handle ESI in a RESTful way. > I'm just to represent some resources as webpages, ans some > representations ( like an homepage ) are a simple meshup of various > resources with ESI. > > The HTML looks like: https://gist.github.com/836822 > > Since the representations I want are some pieces of (X)HTML rendered > in a global context I would define my own hypermedia format, something > like vnd.truncatedXHTML+XHTML. > That's because I want to validate the representations included with > ESI ( with a custom DTD, obviously ) without having to include N > <html><head><body> tags, given N as the number of resources I'm > including with ESI. > > Am I missing something? Did I said some rubbish? 
:-) > > -- > Nadalin Alessandro > www.odino.org > www.twitter.com/_odino_ > > > ------------------------------------ > > Yahoo! Groups Links > > > >
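Mike Kelly's three-way split above could drive a simple dispatch rule in an intermediary. A minimal Python sketch, assuming the (non-standard, as he notes) identifiers suggested in his message:

```python
def esi_action(content_type):
    """Decide how an ESI intermediary should treat a response body.
    The profile identifiers are the non-standard ones suggested in
    this thread, not registered values."""
    parts = [p.strip() for p in content_type.split(";")]
    mediatype, params = parts[0].lower(), parts[1:]
    profile = None
    for p in params:
        if "=" in p:
            key, value = p.split("=", 1)
            if key.strip().lower() == "profile":
                profile = value.strip().strip('"')
    if mediatype == "application/vnd.esi-fragment+html":
        return "process"       # fragments may themselves contain ESI controls
    if mediatype in ("text/html", "application/xhtml+xml"):
        if profile in ("esi-composite", "esi-fragment"):
            return "process"   # introspect and run the ESI processor
        return "pass-through"  # plain (X)HTML: no introspection needed
    return "pass-through"      # anything else is not our concern

print(esi_action("text/html"))                         # pass-through
print(esi_action("text/html; profile=esi-composite"))  # process
```

The point of the distinct identifiers is exactly this: the intermediary can route on the Content-Type header alone, without sniffing bodies for `<esi:...>` elements.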
Hi Mike, 2011/2/21 Mike Kelly <mike@...>: > Hi Alessandro, > > I think that's a decent idea, I've been mulling over this for a > while.. By doing this you are increasing the visibility of the > responses which will allow your ESI intermediaries to operate more > efficiently. > > I can see three distinct types worth keeping visible: > > 1. Standard (X)HTML: which require no introspection as they contain no > ESI controls, and should just pass through without being processed. > Use a 'normal' identifier e.g. text/html etc. yes > > 2. ESI composite: A full html document which requires introspection > and ESI processing. Use a specific identifier e.g. > text/html;profile=esi-composite right > > 3. ESI fragment: An html fragment/partial for embedding in another ESI > document - whilst not a full html document, these may also require > introspection as they can also be composites. Use a specific > identifier e.g. text/html;profile=esi-fragment or > application/vnd.esi-fragment+html Thanks, great hints. I should have thought about a mixed html/esi type. > > These esi-specific identifiers may be worth standardizing publicly as > an extension to the ESI protocol. You mean writing a DTD, or something similar? Is there a process to submit this? Any resource on how to decently write them? I've seen tons of examples of DTD or hypermedia formats specifications ( like the ones mentioned in Amundsen's blog ), but I'm not really sure about the proper way to propose them. Thanks, > > Cheers, > Mike > > On Mon, Feb 21, 2011 at 8:56 AM, Alessandro Nadalin > <alessandro.nadalin@...> wrote: >> Hi guys, >> >> I was just wondering if something I thought was the most correct way >> to handle ESI in a RESTful way. >> I'm just to represent some resources as webpages, ans some >> representations ( like an homepage ) are a simple meshup of various >> resources with ESI. 
On Mon, Feb 21, 2011 at 12:05 PM, Alessandro Nadalin <alessandro.nadalin@...> wrote: > Hi Mike, > > 2011/2/21 Mike Kelly <mike@...>: >> Hi Alessandro, >> >> I think that's a decent idea, I've been mulling over this for a >> while.. By doing this you are increasing the visibility of the >> responses which will allow your ESI intermediaries to operate more >> efficiently. >> >> I can see three distinct types worth keeping visible: >> >> 1. Standard (X)HTML: which require no introspection as they contain no >> ESI controls, and should just pass through without being processed. >> Use a 'normal' identifier e.g. text/html etc. > > yes > >> >> 2. ESI composite: A full html document which requires introspection >> and ESI processing. Use a specific identifier e.g. >> text/html;profile=esi-composite > > right > >> >> 3. ESI fragment: An html fragment/partial for embedding in another ESI >> document - whilst not a full html document, these may also require >> introspection as they can also be composites. Use a specific >> identifier e.g. text/html;profile=esi-fragment or >> application/vnd.esi-fragment+html > > Thanks, great hints. > I should have thought about a mixed html/esi type. > >> >> These esi-specific identifiers may be worth standardizing publicly as >> an extension to the ESI protocol. > > You mean writing a DTD, or something similar? > Is there a process to submit this? Any resource on how to decently write them? > I meant standardize/specify: - The media type identifiers for documents that contain ESI stuff (i.e. composite and fragment) - The expected behaviour of ESI intermediaries against each of those identifiers If you elicit some feedback, finalize the details, and then document it publicly (i.e. publish it on the web) you've gone a decent way to 'standardising' the mechanism. In my opinion. 
Some would insist you *MUST* register the identifiers at the appropriate registries and publish your specs through the relevant standards bodies, because otherwise you will be Doing It Wrong, the scale of the internet will crush you, and your head will explode.. .. but if you can't be bothered, I understand. Just bear those consequences in mind, that's all I'm saying. This video may help you to figure it all out: http://williamstw.blogspot.com/2010/09/rest-and-self-descriptiveness.html Cheers, Mike
2011/2/21 Mike Kelly <mike@...>: >> > > I meant standardize/specify: > > - The media type identifiers for documents that contain ESI stuff > (i.e. composite and fragment) > - The expected behaviour of ESI intermediaries against each of those identifiers mmm ok, I gotta confess I'm a bit lost here, my fault. Did you mean writing a draft like http://www.w3.org/TR/xhtml-basic/? Or a DTD? Both? Just forgive my ignorance about this topic :-/ > > If you elicit some feedback, finalize the details, and then document > it publicly (i.e. publish it on the web) you've gone a decent way to > 'standardising' the mechanism. In my opinion. > > Some would insist you *MUST* register the identifiers at the > appropriate registries and publish your specs through the relevant > standards bodies, because otherwise you will be Doing It Wrong, the > scale of the internet will crush you, and your head will explode.. :) > > .. but if you can't be bothered, I understand. Just bear those > consequences in mind, that's all I'm saying. That's clear, I guess. No, I really care about doing it in the right way, so I'll go for the specification. > > This video may help you to figure it all out: > http://williamstw.blogspot.com/2010/09/rest-and-self-descriptiveness.html > > Cheers, > Mike > -- Nadalin Alessandro www.odino.org www.twitter.com/_odino_
Alessandro: FWIW, here's the pattern I have been following in cases where I design new media types: - create a basic design - use it in local implementations, work out any bugs/modifications as needed - if it seems to be a solid/useful design in your local implementations, work up a public web page with the design details [1] - solicit feedback, work out any bugs/modifications as needed - if it seems to be a solid/useful design based on feedback, register your design on the VND or PRS tree [2] - encourage others to implement solution w/ your design, work out bugs/modifications as needed - if it seems to be a solid/useful design for a "wide audience", work up a full RFC I-D [3] and work for registration on the standards tree This takes time but, in the end, is (IMO) a solid way to go. Along the way you get elevated levels of feedback and your designs have the opportunity to change over time to reach a wider audience and increase in value. Hopefully, this will give you some ideas on how to handle your own design process. [1] http://amundsen.com/media-types/maze/ [2] http://www.iana.org/cgi-bin/mediatypes.pl mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Mon, Feb 21, 2011 at 08:46, Alessandro Nadalin <alessandro.nadalin@...> wrote: > 2011/2/21 Mike Kelly <mike@...>: >>> >> >> I meant standardize/specify: >> >> - The media type identifiers for documents that contain ESI stuff >> (i.e. composite and fragment) >> - The expected behaviour of ESI intermediaries against each of those identifiers > > mmm ok, I gotta confess I'm a bit lost here, my fault. > > Did you mean writing a draft like http://www.w3.org/TR/xhtml-basic/? > Or a DTD? Both? > Just forgive my ignorance about this topic :-/ > >> >> If you illicit some feedback, finalize the details, and then document >> it publicly (i.e. publish it on the web) you've gone a decent way to >> 'standardising' the mechanism. In my opinion. 
2011/2/21 mike amundsen <mamund@...>: > Alessandro: > > FWIW, here's the pattern I have been following in cases where I design > new media types: > > - create a basic design > - use it in local implementations, work out any bugs/modifications as needed > - if it seems to be a solid/useful design in your local > implementations, work up a public web page with the design details [1] > - solicit feedback, work out any bugs/modifications as needed > - if it seems to be a solid/useful design based on feedback, register > your design on the VND or PRS tree [2] > - encourage others to implement solution w/ your design, work out > bugs/modifications as needed > - if it seems to be a solid/useful design for a "wide audience", work > up a full RFC I-D [3] and work for registration on the standards tree > > This takes time but, in the end, is (IMO) a solid way to go. Along the > way you get elevated levels of feedback and your designs have the > opportunity to change over time to reach a wider audience and increase > in value. > > Hopefully, this will give you some ideas on how to handle your own > design process. Thank you so much mike, you clarified a lot of doubts I had. > > [1] http://amundsen.com/media-types/maze/ > [2] http://www.iana.org/cgi-bin/mediatypes.pl > > mca > http://amundsen.com/blog/ > http://twitter.com@mamund > http://mamund.com/foaf.rdf#me > > > #RESTFest 2010 > http://rest-fest.googlecode.com > > > > > On Mon, Feb 21, 2011 at 08:46, Alessandro Nadalin > <alessandro.nadalin@...> wrote: >> 2011/2/21 Mike Kelly <mike@....uk>: >>>> >>> >>> I meant standardize/specify: >>> >>> - The media type identifiers for documents that contain ESI stuff >>> (i.e. composite and fragment) >>> - The expected behaviour of ESI intermediaries against each of those identifiers >> >> mmm ok, I gotta confess I'm a bit lost here, my fault. >> >> Did you mean writing a draft like http://www.w3.org/TR/xhtml-basic/? >> Or a DTD? Both? 
glad it helps. I forgot to include a ref link to the RFC process for registering a media-type in the standard tree. It is: http://tools.ietf.org/html/rfc4288 cheers. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com > On Mon, Feb 21, 2011 at 12:21, Alessandro Nadalin > <alessandro.nadalin@...> wrote: >> 2011/2/21 mike amundsen <mamund@...>: >>> Alessandro: >>> >>> FWIW, here's the pattern I have been following in cases where I design >>> new media types: >>> >>> - create a basic design >>> - use it in local implementations, work out any bugs/modifications as needed >>> - if it seems to be a solid/useful design in your local >>> implementations, work up a public web page with the design details [1] >>> - solicit feedback, work out any bugs/modifications as needed >>> - if it seems to be a solid/useful design based on feedback, register >>> your design on the VND or PRS tree [2] >>> - encourage others to implement solution w/ your design, work out >>> bugs/modifications as needed >>> - if it seems to be a solid/useful design for a "wide audience", work >>> up a full RFC I-D [3] and work for registration on the standards tree >>> >>> This takes time but, in the end, is (IMO) a solid way to go. Along the >>> way you get elevated levels of feedback and your designs have the >>> opportunity to change over time to reach a wider audience and increase >>> in value. >>> >>> Hopefully, this will give you some ideas on how to handle your own >>> design process. >> >> Thank you so much mike, you clarified a lot of doubts I had. 
>> >>> >>> [1] http://amundsen.com/media-types/maze/ >>> [2] http://www.iana.org/cgi-bin/mediatypes.pl >>> >>> mca >>> http://amundsen.com/blog/ >>> http://twitter.com@mamund >>> http://mamund.com/foaf.rdf#me >>> >>> >>> #RESTFest 2010 >>> http://rest-fest.googlecode.com >>> >>> >>> >>> >>> On Mon, Feb 21, 2011 at 08:46, Alessandro Nadalin >>> <alessandro.nadalin@...> wrote: >>>> 2011/2/21 Mike Kelly <mike@...>: >>>>>> >>>>> >>>>> I meant standardize/specify: >>>>> >>>>> - The media type identifiers for documents that contain ESI stuff >>>>> (i.e. composite and fragment) >>>>> - The expected behaviour of ESI intermediaries against each of those identifiers >>>> >>>> mmm ok, I gotta confess I'm a bit lost here, my fault. >>>> >>>> Did you mean writing a draft like http://www.w3.org/TR/xhtml-basic/? >>>> Or a DTD? Both? >>>> Just forgive my ignorance about this topic :-/ >>>> >>>>> >>>>> If you illicit some feedback, finalize the details, and then document >>>>> it publicly (i.e. publish it on the web) you've gone a decent way to >>>>> 'standardising' the mechanism. In my opinion. >>>>> >>>>> Some would insist you *MUST* register the identifiers at the >>>>> appropriate registries and publish your specs through the relevant >>>>> standards bodies, because otherwise you will be Doing It Wrong, the >>>>> scale of the internet will crush you, and your head will explode.. >>>> >>>> :) >>>> >>>>> >>>>> .. but if you can't be bothered, I understand. Just bear those >>>>> consequences in mind, that's all I'm saying. >>>> >>>> That's clear, I guess. >>>> No, II really care about doing it in the right way, so I'll go for the >>>> specification. 
>>>> >>>>> >>>>> This video may help you to figure it all out: >>>>> http://williamstw.blogspot.com/2010/09/rest-and-self-descriptiveness.html >>>>> >>>>> Cheers, >>>>> Mike >>>>> >>>> >>>> >>>> >>>> -- >>>> Nadalin Alessandro >>>> www.odino.org >>>> www.twitter.com/_odino_ >>>> >>>> >>>> ------------------------------------ >>>> >>>> Yahoo! Groups Links >>>> >>>> >>>> >>>> >>> >> >> >> >> -- >> Nadalin Alessandro >> www.odino.org >> www.twitter.com/_odino_ >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> >> >
2011/2/21 Mike Kelly <mike@...>: > Hi Alessandro, > > I think that's a decent idea, I've been mulling over this for a > while.. By doing this you are increasing the visibility of the > responses which will allow your ESI intermediaries to operate more > efficiently. > > I can see three distinct types worth keeping visible: > > 1. Standard (X)HTML: which require no introspection as they contain no > ESI controls, and should just pass through without being processed. > Use a 'normal' identifier e.g. text/html etc. > > 2. ESI composite: A full html document which requires introspection > and ESI processing. Use a specific identifier e.g. > text/html;profile=esi-composite > > 3. ESI fragment: An html fragment/partial for embedding in another ESI > document - whilst not a full html document, these may also require > introspection as they can also be composites. Use a specific > identifier e.g. text/html;profile=esi-fragment or > application/vnd.esi-fragment+html Just a question: why use a profile attribute there? I don't understand it very well. What does the profile let you do? I'm formalizing some stuff today and I was thinking I could do something like: * application/vnd.xhesiml+xhtml ( for the whole page which includes ESI tags ) * application/vnd.xhesiml+xhtml;profile=fragment ( for a page fragment, which can include an ESI tag itself ) But I'm probably missing something. Hints? Is it a good approach? > > These esi-specific identifiers may be worth standardizing publicly as > an extension to the ESI protocol. > > Cheers, > Mike > > On Mon, Feb 21, 2011 at 8:56 AM, Alessandro Nadalin > <alessandro.nadalin@...> wrote: >> Hi guys, >> >> I was just wondering if something I thought was the most correct way >> to handle ESI in a RESTful way. >> I'm just to represent some resources as webpages, ans some >> representations ( like an homepage ) are a simple meshup of various >> resources with ESI. 
>> >> The HTML looks like: https://gist.github.com/836822 >> >> Since the representations I want are some pieces of (X)HTML rendered >> in a global context I would define my own hypermedia format, something >> like vnd.truncatedXHTML+XHTML. >> That's because I want to validate the representations included with >> ESI ( with a custom DTD, obviously ) without having to include N >> <html><head><body> tags, given N as the number of resources I'm >> including with ESI. >> >> Am I missing something? Did I said some rubbish? :-) >> >> -- >> Nadalin Alessandro >> www.odino.org >> www.twitter.com/_odino_ >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> >> > -- Nadalin Alessandro www.odino.org www.twitter.com/_odino_
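On the wrapper problem raised at the start of this thread (validating fragments without N `<html><head><body>` envelopes): one cheap trick is to wrap the fragment in a throwaway root element before parsing. A Python stdlib sketch; note this only checks well-formedness, not validity against the custom DTD discussed above:

```python
import xml.etree.ElementTree as ET

def fragment_is_well_formed(fragment):
    """Check an (X)HTML fragment for well-formedness without requiring
    the usual <html><head><body> wrappers: wrap it in a throwaway root
    element and let the XML parser do the work."""
    try:
        ET.fromstring("<fragment-root>%s</fragment-root>" % fragment)
        return True
    except ET.ParseError:
        return False

print(fragment_is_well_formed('<div><p>a partial page</p></div>'))  # True
print(fragment_is_well_formed('<div><p>unclosed</div>'))            # False
```

A real validator for a truncated-XHTML media type would instead declare the fragment element itself as a document root in the DTD or schema, but the wrapping trick is handy for quick checks.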
Hi,
I keep finding myself in situations where it is difficult to choose the "right" level of RESTfulness and granularity, and how to represent references/links, in web API cases where a batch-oriented "legacy" is already in place.
Since I haven't seen it discussed much in REST circles, I thought I'd see if I could generate some discussion/feedback on this type of scenario. At least I find it a challenge, so maybe others do as well, and have experiences they could share?
The context is one where there are existing, fairly well-structured and internally consistent XML standards for representing the domain, and typically batch-oriented/semi-manual processes that push chunks of large XML structures between systems. Leveraging this existing format, and the technical infrastructure and processes surrounding it, when implementing a more granular and flexible web API seems to be a necessity and a no-brainer. Especially as the batch processes are not going to go away any time soon, not having to maintain two completely separate "api" infrastructures seems important.
But this approach poses some challenges. More specifically, I find it difficult to 1) choose the level of RESTfulness and granularity to aim for, and 2) if I go for what I find the most "logical" and "serendipitous" fine granularity, I always seem to want to represent references or links differently in the two scenarios.
Moving to some kind of resource orientation for the major concepts in the domain is easy; these concepts are already defined in the XML. Reusing the XML fragments that represent a resource would, simplified, look something like this (never mind the details :)
...
<dataSets>
<dataSet id="1">
<... lots of info... />
<dataElements>
<dataElementRef id="1"/>
</dataElements>
</dataSet>
</dataSets>
<dataElements>
<dataElement id="1">
<... lots of info... />
</dataElement>
</dataElements>
...
What I would immediately want to do in the API context is have separate resources for dataSet and dataElement and represent them with fragments like this:
<dataSet id="1" href="url" >
<... lots of info... />
<dataElements>
<dataElementRef id="1" href="url" name="Descriptive name suitable for link naming" />
</dataElements>
</dataSet>
<dataElement id="1" href="url">
<... lots of info... />
<dataSets>
<dataSet id="1" href="url" name="Descriptive name suitable for link naming" />
</dataSets>
</dataElement>
So basically I would want to
- use links as references
- it seems to be a good idea to include a minimal description of the resource for more human readable contexts (think javascript with a json-variant of this xml)
- add two-way referencing
Does this make sense?
I am thinking this makes sense, and I should be able to extend the schema with optional definitions of the added structure. It would break an existing parser validating against the old schema, but that should be possible to solve with some clever switching in the internal implementation that drops these extra elements (and the namespace declaration) when talking to legacy infrastructure.
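That switching step could be sketched like this (Python; a minimal illustration of my own, assuming the hypermedia additions are exactly the href and name attributes):

```python
import xml.etree.ElementTree as ET

# The attributes added for the web API (illustrative assumption).
HYPERMEDIA_ATTRS = {"href", "name"}

def downgrade(xml_text):
    """Strip the hypermedia attributes so legacy validators see the old schema."""
    root = ET.fromstring(xml_text)
    for el in root.iter():
        for attr in HYPERMEDIA_ATTRS & set(el.attrib):
            del el.attrib[attr]
    return ET.tostring(root, encoding="unicode")

enriched = ('<dataSet id="1" href="http://example.org/dataSets/1">'
            '<dataElements>'
            '<dataElementRef id="1" href="http://example.org/dataElements/1"'
            ' name="Element one"/>'
            '</dataElements></dataSet>')
legacy = downgrade(enriched)  # only the id attributes survive
```

The same idea extends to namespaced extension elements: walk the tree once on the way out to legacy consumers, leaving the enriched form as the canonical one.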
But the XML schema I would come up with would nevertheless not be the same as the common standard that is shared knowledge. For unknowing clients I could try to add mechanisms for "negotiating down" to the standard XML, but this could be difficult for "unknown"/loosely coupled peer systems to discover, so in the end the "standard" XML will end up being used most of the time. It would seem that the potential "serendipitous" benefits of adding hypermedia and two-way linking would quickly become a relatively high cost with little benefit. Would it be better to just use the XML as defined in the standard and not aim for as much RESTfulness?
Is it too granular?
I know this is really impossible to evaluate generally, but since all the existing infrastructure works on larger chunks of XML, is there even a point in splitting the domain into "logical" resources in the web API? Might I just as well move the existing "import/export" commands to the web as coarse-grained RPC-style methods and save myself the trouble of trying to combine a RESTful web API with legacy infrastructure that I won't get rid of anyway?
Or should I in the end just avoid trying to merge the two "apis" and let them work on separate optimized models, taking the double cost?
I guess what I am asking, are there patterns to this kind of challenge that can shed some light on how to approach it? It would seem to be a kind of challenge others should have met as well :)
Jo
If you served application/vnd.xhesiml+xhtml to a browser then the browser won't render it and will display a download prompt instead, so the reason I used the profile parameter for full page composites was because it means an ESI intermediary would not have to rewrite the Content-Type header of the response. This is not an issue for fragments since the responses need never reach the client. Cheers, Mike On Tue, Feb 22, 2011 at 2:05 PM, Alessandro Nadalin <alessandro.nadalin@...> wrote: > 2011/2/21 Mike Kelly <mike@...>: >> Hi Alessandro, >> >> I think that's a decent idea, I've been mulling over this for a >> while.. By doing this you are increasing the visibility of the >> responses which will allow your ESI intermediaries to operate more >> efficiently. >> >> I can see three distinct types worth keeping visible: >> >> 1. Standard (X)HTML: which require no introspection as they contain no >> ESI controls, and should just pass through without being processed. >> Use a 'normal' identifier e.g. text/html etc. >> >> 2. ESI composite: A full html document which requires introspection >> and ESI processing. Use a specific identifier e.g. >> text/html;profile=esi-composite >> >> 3. ESI fragment: An html fragment/partial for embedding in another ESI >> document - whilst not a full html document, these may also require >> introspection as they can also be composites. Use a specific >> identifier e.g. text/html;profile=esi-fragment or >> application/vnd.esi-fragment+html > > Just a question: why using a profile attribute over there? I don't > understand it very well. What does the profile let you do? > > I'm formalizing some stuff today and I was thinking I could do something like: > * application/vnd.xhesiml+xhtml ( for the whole page which includes ESI tags ) > * application/vnd.xhesiml+xhtml;profile=fragment ( for a page > fragment, which can include an ESI tag itself ) > > But I'm probably missing something. Hints? Is it a good approach? 
> >> >> These esi-specific identifiers may be worth standardizing publicly as >> an extension to the ESI protocol. >> >> Cheers, >> Mike >> >> On Mon, Feb 21, 2011 at 8:56 AM, Alessandro Nadalin >> <alessandro.nadalin@...> wrote: >>> Hi guys, >>> >>> I was just wondering if something I thought was the most correct way >>> to handle ESI in a RESTful way. >>> I'm just to represent some resources as webpages, ans some >>> representations ( like an homepage ) are a simple meshup of various >>> resources with ESI. >>> >>> The HTML looks like: https://gist.github.com/836822 >>> >>> Since the representations I want are some pieces of (X)HTML rendered >>> in a global context I would define my own hypermedia format, something >>> like vnd.truncatedXHTML+XHTML. >>> That's because I want to validate the representations included with >>> ESI ( with a custom DTD, obviously ) without having to include N >>> <html><head><body> tags, given N as the number of resources I'm >>> including with ESI. >>> >>> Am I missing something? Did I said some rubbish? :-) >>> >>> -- >>> Nadalin Alessandro >>> www.odino.org >>> www.twitter.com/_odino_ >>> >>> >>> ------------------------------------ >>> >>> Yahoo! Groups Links >>> >>> >>> >>> >> > > > > -- > Nadalin Alessandro > www.odino.org > www.twitter.com/_odino_ >
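The Content-Type dispatch Mike describes could be sketched as follows (Python; the profile values are the ones proposed in the thread, and the header parsing is a minimal hand-rolled illustration, not a full RFC-grade parser):

```python
def parse_content_type(header):
    """Split a Content-Type header into its media type and parameters."""
    parts = [p.strip() for p in header.split(";")]
    media_type = parts[0].lower()
    params = {}
    for p in parts[1:]:
        if "=" in p:
            k, v = p.split("=", 1)
            params[k.strip().lower()] = v.strip().strip('"')
    return media_type, params

def esi_action(content_type):
    """Decide how an ESI intermediary should treat a response."""
    media_type, params = parse_content_type(content_type)
    if params.get("profile") in ("esi-composite", "esi-fragment"):
        return "process"      # introspect and expand the ESI controls
    return "pass-through"     # plain (X)HTML with no ESI controls

esi_action("text/html; profile=esi-composite")  # -> "process"
esi_action("text/html")                         # -> "pass-through"
```

The point of the profile parameter, as described above, is that the base media type stays the same, so the intermediary never has to rewrite the Content-Type header.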
Alessandro Nadalin wrote: > > > 3. ESI fragment: An html fragment/partial for embedding in another > > ESI document - whilst not a full html document, these may also > > require introspection as they can also be composites. Use a specific > > identifier e.g. text/html;profile=esi-fragment or > > application/vnd.esi-fragment+html > > Just a question: why using a profile attribute over there? I don't > understand it very well. What does the profile let you do? > Well, first, the profile parameter must be defined for the media type in question -- which it isn't for text/html. Perhaps Mike meant application/xhtml+xml, which does define a profile parameter, for the specific purpose of implementing conformance levels. http://www.rfc-editor.org/rfc/rfc2854.txt See section 2, where the only parameter defined is 'charset'. -Eric
Further, no media type ends in '+xhtml' or '+html'. If your media type may be processed via XML as a fallback for those components which don't understand your media type, then the suffix is '+xml'. -Eric
> > "It's the Architecture, Stupid!" > Any ideas on what to call the opposite of this award? I have a first recipient in mind; I'll be spending some time this weekend checking out their architecture and highlighting its RESTful points... "Al Jazeera reported Web traffic to its site increased by 2,500 percent between Jan. 28 and Jan. 31, much of it from the United States." http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2011/02/20/INHD1HO6NG.DTL ...because, obviously, their architecture exhibits certain desirable properties we're targeting as REST developers. All the more impressive, considering those new visitors were mostly after video content. Is there even a term for something an order of magnitude greater than a mere slashdotting, in both scope and duration? I'd rather teach REST through positive reinforcement, by highlighting the rare site which doesn't collapse in a heap of smoking ruins when subjected to such massive, sustained traffic increases. I wish the linked article had cited a reference. -Eric
2011/2/22 Eric J. Bowman <eric@...>: > Further, no media type ends in '+xhtml' or '+html'. If your media type > may be processed via XML as a fallback for those components which don't > understand your media type, then the suffix is '+xml'. Is there a way I can find a sum up of this stuff? Thank you so much Eric, definitely great advice. > > -Eric > -- Nadalin Alessandro www.odino.org www.twitter.com/_odino_
Alessandro Nadalin wrote: > > Is there a way I can find a sum up of this stuff? > Summary? No. The relevant RFCs explain the syntax, but knowing that RFC 3023 is currently the only definition of any suffix (+xml) is just something one knows from experience, at this time. Reading RFCs 3236 and 3023 provides an example of one media type with a suffix defined by another media type. Sorry I don't have any direct answers for you, but I don't even know what ESI stands for, or anything. -Eric
Yes, it is the right thing to do. If ESI ever gets revved up (copying to Mark, who I think is interested based on some work we did internally), specifying a media type for page templates and fragments should happen. Subbu On Feb 21, 2011, at 12:56 AM, Alessandro Nadalin wrote: > Hi guys, > > I was just wondering if something I thought was the most correct way > to handle ESI in a RESTful way. > I'm just to represent some resources as webpages, ans some > representations ( like an homepage ) are a simple meshup of various > resources with ESI. > > The HTML looks like: https://gist.github.com/836822 > > Since the representations I want are some pieces of (X)HTML rendered > in a global context I would define my own hypermedia format, something > like vnd.truncatedXHTML+XHTML. > That's because I want to validate the representations included with > ESI ( with a custom DTD, obviously ) without having to include N > <html><head><body> tags, given N as the number of resources I'm > including with ESI. > > Am I missing something? Did I said some rubbish? :-) > > -- > Nadalin Alessandro > www.odino.org > www.twitter.com/_odino_ > > > ------------------------------------ > > Yahoo! Groups Links > > >
An idea that's been needling in my head for a while, just want to float it out there. This likely has been discussed before and I've just missed the thread.

Suppose the following:

1. you have a media type that describes how to interact with other resources.
2. you have resource http://MetaA that describes how to interact with a hyperlinked resource, http://RealA.
3. MetaA's representation has defined a very long cache TTL (say, 3 months)
4. a user agent GETs the representation of MetaA, caches it, and prompts User 1 to manipulate MetaA's representation to configure integration of the user agent's overall application with representations that may be transferred to/from RealA
5. this same user agent is leveraged by User 2 over many months to interact with RealA given the cached MetaA.
6. if MetaA's representation changes, User 1 is notified by the user agent (via some undefined mechanism) to change its configuration so that User 2 can continue to interact with RealA.

Steps 1 - 3 are "design time", steps 4-5 are "runtime" in the traditional sense of the term; User 1 is a developer (or technical user), User 2 is an automaton (i.e. client code of some sort). Both MetaA's and RealA's representations are part of the application state (aka its hypermedia workspace), but the origin server(s) have set different expectations on how long they can be cached (MetaA for a long time, RealA for a short time).

Or, if you're more concrete minded, imagine:

- You write a scraping application that caches an HTML form to a stock quote service, which a developer then uses to generate URIs to GET a stock quote price. The application is smart enough to periodically GET the HTML form and can detect changes, but would require developer intervention in some cases, such as if the required bound form field names changed.

My question: How is this not RESTful? 
(in the sense of it not being respectful of the constraints of the style and thus sacrificing some beneficial properties like scalability, evolvability, etc.) My proposition: 1. If this is not RESTful, we'll have to talk. 2. If this is RESTful, we need to be really careful when we say there's no "design time" in RESTful systems. I think the clearer statement is that "static code generation" (e.g. client stubs) is discouraged in properly RESTful systems, since it tends towards sucktacular rigidity in the user agent. Cheers Stu
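Stu's concrete stock-quote example can be sketched as a fingerprint check over the cached form's bound field names (Python; the hashing scheme and field names are illustrative assumptions, not from the post):

```python
import hashlib

def form_fingerprint(field_names):
    """Hash the set of bound field names; any change invalidates the binding."""
    return hashlib.sha256("|".join(sorted(field_names)).encode()).hexdigest()

# "Design time": User 1 binds the agent against the cached MetaA form.
cached = form_fingerprint(["quoteSymbol"])

# "Runtime": the agent periodically re-fetches MetaA and compares.
def check(current_field_names, cached_fp):
    if form_fingerprint(current_field_names) == cached_fp:
        return "ok"                    # keep using the cached binding
    return "developer-intervention"    # required bound field names changed

check(["quoteSymbol"], cached)   # -> "ok"
check(["tickerSymbol"], cached)  # -> "developer-intervention"
```

Note that the cached representation, not out-of-band documentation, is what the agent binds against; the fingerprint just tells User 1 when that design-time binding has gone stale.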
gah, a typo "Steps 1 - 3 are "design time", steps 4-5 are "runtime" in the traditional sense of the term;" should read "Steps 1 - 4 are "design time", steps 5-6 are "runtime" in the traditional sense of the term;" ________________________________ From: Stuart Charlton <stuartcharlton@yahoo.com> To: rest-discuss@yahoogroups.com Sent: Tue, February 22, 2011 10:03:47 PM Subject: [rest-discuss] design vs runtime in representations <snip>
Sorry, this was sent a bit early, and I see it's hardly read-worthy as it is. Since it got sent in the first place, let me add a tl;dr version (so you can at least understand what I tried writing :)

Context: There are already in place XML standards and deployed technical infrastructure for working with this XML for batch processing.

Main question: Are there any patterns or "best practices" out there for making the "strategic" choices when building a "parallel" web API?

The broad options I tried to discuss:
- just move existing batch operations "as is" onto the web (least work, but clunky to work with for new clients and doesn't really encourage unexpected uses)
- build a separate solution for the web API (adds complexity and maintenance, but allows for optimized solutions and RESTfulness)
- try to utilize the existing solutions, but webify them (what I think makes sense, at least in my immediate context, but poses some problems and I'm uncertain of the value)

If choosing the last one, any experiences on how to balance backwards compatibility and RESTfulness "for the future" (while not wasting resources unnecessarily)?

Jo

On 22 Feb 2011, at 16:07, Jo Størset wrote: <snip>
Mike Kelly wrote: > > Some would insist you *MUST* register the identifiers at the > appropriate registries and publish your specs through the relevant > standards bodies, because otherwise you will be Doing It Wrong, the > scale of the internet will crush you, and your head will explode.. > Ad-hominem insinuations against those who teach about architectural constraints you fail to understand, helps others learn REST, how? If the value of your Content-Type header isn't registered, and you're sending it over the Internet, then you aren't using any media type. Meeting the self-descriptive messaging constraint begins with using media types. It actually says so right there in the thesis. You're free to disagree with the thesis, but it would probably help those trying to learn REST if you'd teach them what the thesis says, instead of teaching your disagreement with it, unless you can phrase that disagreement in the language of software architecture rather than hyperbole. A media type is metadata, not the data format itself. Self-descriptive messaging (on the Web) requires IANA registration, because no value that isn't in the registry meets the definition of a media type, and no other registry exists. Achieving the desirable properties of the REST style depends upon standardization of data types, but only using media types (meaning, values appearing in the registry) is a constraint. Customizing your own data types which nobody will ever adopt, when ubiquitous data types exist which will work well enough, is a REST anti-pattern. But, this sounds like a case where a new *standardized* data type is exactly what we're talking about. So what on Earth is the rationale behind your argument against registering a media type for it? 
Roy's thesis explains the rationale behind the constraint in question, which amounts to pragmatic goals for Web-based systems and mentions nothing about heads exploding; so I would expect your rebuttal to explain how portability is achieved in a distributed hypertext style based on bespoke data types, backed up by reference to known architectural styles which exhibit portability _without_ "constrain[ing] the data elements to a set of standardized formats." -Eric
On Wed, Feb 23, 2011 at 6:09 AM, Eric J. Bowman <eric@...> wrote: > Mike Kelly wrote: >> >> Some would insist you *MUST* register the identifiers at the >> appropriate registries and publish your specs through the relevant >> standards bodies, because otherwise you will be Doing It Wrong, the >> scale of the internet will crush you, and your head will explode.. >> > > Ad-hominem insinuations against those who teach about architectural > constraints you fail to understand, helps others learn REST, how? Yes, you got me. These cunning insinuations were designed to deliberately and sinisterly mislead the poor helpless reader. I apologise profusely to everyone involved. Cheers, Mike p.s. I know it's kind of your thing to derail/kill threads here, but since you've already admitted that you "don't even know what ESI stands for" - maybe you could sit this one out?
--- In rest-discuss@yahoogroups.com, Stuart Charlton <stuartcharlton@...> wrote: > > gah, a typo > > "Steps 1 - 3 are "design time", steps 4-5 are "runtime" in the traditional > sense > of the term;" > > should read > > "Steps 1 - 4 are "design time", steps 5-6 are "runtime" in the traditional > sense of the term;" > > ________________________________ > From: Stuart Charlton <stuartcharlton@...> > To: rest-discuss@yahoogroups.com > Sent: Tue, February 22, 2011 10:03:47 PM > Subject: [rest-discuss] design vs runtime in representations > <snip> > My question: > > How is this not RESTful? (in the sense of it not being respectful of the > constraints of the style and thus sacrificing some beneficial properties like > scalability, evolvability, etc.) > > My proposition: > > 1. If this is not RESTful, we'll have to talk. > > 2. If this is RESTful, we need to be really careful when we say there's no > "design time" in RESTful systems. > > I think the clearer statement is that "static code generation" (e.g. client > stubs) is discouraged in properly RESTful systems, since it tends towards > sucktacular rigidity in the user agent. > > Cheers > Stu > Stu, The amount of runtime flexibility depends on the hypermedia controls that are available. For example, if there were no links or forms in HTML, then the browser (or the user) would be bound to the interface specifics of every service out there. A hypermedia control is only partially a data construct. A control is instantiated as the result of client processing of the data. If a client does not (or cannot) interpret <form> as a control then you lose the runtime flexibility that that control provides. A developer can make up for that by interpreting the control themselves and coding the knowledge in the client, but then as you say, they are turning a runtime binding into a design-time binding. HTML and its controls are primarily designed for browsers presenting information to humans. 
It has some features like <link> and rel that target alternate, machine-driven clients but the language is not powerful enough to provide equivalent machine controls for every human control expressible in the language. I believe that it is possible to build machine controls that provide much more run time flexibility than is afforded machines by HTML. I always point to CCXML as an example of this. I suspect, however, that the more powerful machine controls are, the more they must target specific types of machines. I think the whole area needs more investigation though. In short, I don't think it's a question of RESTful vs. not RESTful. I think you are simply noting that the degree of runtime flexibility afforded to the system is a function of both the hypermedia format *and* the client. Regards, Andrew
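Andrew's point that a control only exists once the client interprets it can be illustrated by a client that discovers a form at runtime instead of hardcoding the request (Python stdlib HTML parser; the form markup is a made-up example):

```python
from html.parser import HTMLParser

class FormReader(HTMLParser):
    """Interpret <form> and <input> as hypermedia controls, not fixed URIs."""
    def __init__(self):
        super().__init__()
        self.action = None
        self.method = None
        self.fields = []
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form":
            self.action = a.get("action")
            self.method = (a.get("method") or "get").upper()
        elif tag == "input" and a.get("name"):
            self.fields.append(a["name"])

reader = FormReader()
reader.feed('<form action="/quotes" method="get">'
            '<input name="quoteSymbol"/></form>')
# The client binds to whatever the representation says, at runtime:
# reader.method, reader.action, reader.fields
```

A client that skips this step and bakes `/quotes?quoteSymbol=...` into its code has, in Andrew's terms, turned a runtime binding into a design-time binding.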
On Wed, Feb 23, 2011 at 5:03 AM, Stuart Charlton <stuartcharlton@...> wrote: > > 2. If this is RESTful, we need to be really careful when we say there's no > "design time" in RESTful systems. > Do you mean RESTful applications consumed by automated clients? If so, +1 I think it's a key consideration when designing media types, specifically their hypermedia controls, that are aimed at these sorts of automated clients. Media types that over-provision mechanisms for run-time dynamism may give servers false confidence in terms of what sorts of changes they can enact. Cheers, Mike
Comments inline --- On Wed, 2/23/11, Mike Kelly <mike@...> wrote: > > 2. If this is RESTful, we need to be really careful > when we say there's no > > "design time" in RESTful systems. > > > > Do you mean RESTful applications consumed by automated > clients? If so, +1 Yes that's what I meant. > I think it's a key consideration when designing media > types, > specifically their hypermedia controls, that are aimed at > these sorts > of automated clients. Media types that over-provision > mechanisms for > run-time dynamism may give servers false confidence in > terms of what > sorts of changes they can enact. Right; the media type itself needs to provide guidance on expectations of how it will be used. For example, I think media types for automated consumption really should refer to something like RFC 5829 for dealing with versioning, with a clear delineation of elements that can/should change & be dealt with at runtime, vs. elements that may require more lead time and thus a "successor-version" link would help some user agents out. Stu
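Stu's delineation could be sketched as a client choosing which version link to follow based on how adaptive it is (Python; the link relations are from RFC 5829, but the URIs and pinning policy are illustrative assumptions):

```python
# Links as rel -> href, e.g. extracted from a Link header or the document.
links = {
    "latest-version": "/orders/form;v=7",
    "version-history": "/orders/form/history",
}

PINNED = "/orders/form;v=5"  # the version a rigid agent was built against

def next_binding(links, adaptive):
    """Adaptive agents track latest-version; rigid ones stay pinned until
    the server advertises a successor-version they have had lead time for."""
    if adaptive:
        return links["latest-version"]
    return links.get("successor-version", PINNED)

next_binding(links, adaptive=True)   # -> "/orders/form;v=7"
next_binding(links, adaptive=False)  # -> "/orders/form;v=5"
```

This is the "clear delineation" in practice: elements a rigid agent binds to live behind the pinned version, while elements safe to change at runtime are only reachable via latest-version.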
Stu: the idea of using RFC 5829 is interesting. on a similar track, I've been toying w/ using the RFC 2119 words (MUST, MAY, etc.) as a way to decorate hypermedia elements in a "profile" that is consumable by agents. in theory (LOL) agents could compare their own "design-time profile" (indicating what that agent currently supports) with the "run-time profile" provided in the response (@profile in the document, profile param in media type, Link header, etc.). Mismatches could be handled by the agent as "stop", "warn", "ignore", etc. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Wed, Feb 23, 2011 at 15:59, Stuart Charlton <stuartcharlton@yahoo.com> wrote: <snip>
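The profile comparison mca sketches might look like this (Python; the element vocabulary and the mismatch policy are hypothetical illustrations):

```python
def compare_profiles(design_time, run_time):
    """Compare what the agent supports against what the response demands.

    design_time: set of hypermedia element names the agent can process.
    run_time: map of element name -> RFC 2119 keyword from the response.
    """
    issues = []
    for element, requirement in run_time.items():
        if element in design_time:
            continue
        if requirement == "MUST":
            issues.append(("stop", element))    # unsupported but required
        elif requirement == "SHOULD":
            issues.append(("warn", element))
        else:                                   # MAY, OPTIONAL, ...
            issues.append(("ignore", element))
    return issues

agent = {"link", "form"}
response = {"link": "MUST", "form": "MUST",
            "query-template": "SHOULD", "embed": "MAY"}
compare_profiles(agent, response)
# -> [("warn", "query-template"), ("ignore", "embed")]
```

An empty result means the agent's design-time profile fully covers the run-time profile; a "stop" entry is the case where a rigid agent needs developer intervention.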
comments inline --- On Wed, 2/23/11, wahbedahbe <andrew.wahbe@...> wrote: > The amount of runtime flexibility depends on the hypermedia controls that are available. For example, if there > were no links or forms in HTML, then the browser (or the user) would be bound to the interface specifics of > every service out there. Yes, I agree. > A developer can make up for that by interpreting the control themselves and coding the knowledge in the > client, but then as you say, they are turning a runtime binding into a design-time binding. Well, the question is, should domain-specific data be tightly coupled to a generic media type or not. Let's assume a generic media type, like a <form>. A representation may have an <input name="quoteSymbol"> that a user agent could bind against to its application. However, we're always going to have disagreement in data: between representations from different origin servers, between the representation and the user agent's application - not everyone will use the same data definitions, symbols, etc. The best we can hope for is standardized bits & pieces of highly common data, perhaps some highly common industry consortium data (e.g. ISO 20022 or EDI), and a straightforward way to handle the rest of the differences. This straightforward way of handling differences tends to imply a combination of: 1. an explicit data model in the media type, 2. data transformation advice (e.g. linking to an XSLT), 3. deeper semantic description from another representation (e.g. SKOS, OWL, etc.), 4. or a user has to manipulate the representation to conform to client application expectations. The latter case we might call a mashup developer today. > I believe that it is possible to build machine controls that provide much more run time flexibility than is > afforded machines by HTML. I always point to CCXML as an example of this. I suspect, however, that the more > powerful machine controls are, the more they must target specific types of machines. 
> I think the whole area
> needs more investigation though.

SCXML (and CCXML by extension) has an interesting take on this and may have a practical solution for future media types, though I wish SCXML had richer support for HTTP interactions. It partly gets around the data problem by describing an explicit data model and mappings to ECMAScript & XPath.

But even still, if we had an application that was concerned with integration, we would need some kind of configuration in the user agent to bridge the gap between the data (and events / state transitions!) in the representation and my internal application data. This configuration would occur AFTER we had retrieved the SCXML document. We could build some clever semantic extensions to ease this burden, but practically speaking, it can only go so far.

This is why I brought up the need for a design time / run time separation between retrieving the representation describing an unsafe operation (e.g. a stock quote order form), or a process (e.g. an SCXML document to order stocks), and the actual endpoints that will be repeatedly invoked at runtime (e.g. a live stock quote). We can't get completely away from the divergence of semantics in data & actions, though I bet we could close the gap from today's situation by quite a lot.

Cheers
Stu
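The post-retrieval "configuration in the user agent" Stu describes could be as small as an out-of-band field mapping, established after the representation arrives, that bridges the representation's field names to the client's internal data model. A hypothetical sketch (all field and mapping names are invented for illustration):

```python
# Hypothetical sketch: bridge field names in a retrieved representation
# to the client's internal data model via an out-of-band mapping.

def bind_fields(representation, mapping, internal):
    """Copy values from a representation into internal application data.

    mapping: {representation field name: internal field name}, supplied
    as user-agent configuration after the representation is retrieved.
    """
    for rep_field, internal_field in mapping.items():
        if rep_field in representation:
            internal[internal_field] = representation[rep_field]
    return internal

# e.g. the server's <input name="quoteSymbol"> vs. our internal "ticker":
order = bind_fields({"quoteSymbol": "IBM"}, {"quoteSymbol": "ticker"}, {})
```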
comments inline --- On Wed, 2/23/11, mike amundsen <mamund@...> wrote: > the idea of using RFC 5829 is interesting. Yeah, I mean why not, right? I know it was designed for content management, but representations that include forms really could use versioning like this too, so that rigid user agents can have time to adapt. They could bind against a specific version, whereas a more adaptive user agent could always bind against latest-version. > on a similar track, iv'e been toying w/ using the RFC 2119 > words > (MUST, MAY, etc.) as a way to decorate hypermedia elements > in a > "profile" that is consumable by agents. in theory (LOL) > agents could > compare their own "design-time profile" (indicating what > that agent > currently supports) with the "run-time profile" provided in > the > response (@profile in the document, profile param in media > type, Link > header, etc.). Mismatches could be handled by the > agent as "stop", > "warn", "ignore", etc. Cool, reminds me of Telnet option negotiation: http://www.faqs.org/rfcs/rfc1143.html Cheers Stu
> Cool, reminds me of Telnet option negotiation: http://www.faqs.org/rfcs/rfc1143.html ha! forgot about those. thanks. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Wed, Feb 23, 2011 at 21:17, Stuart Charlton <stuartcharlton@...> wrote: > comments inline > > --- On Wed, 2/23/11, mike amundsen <mamund@...> wrote: > >> the idea of using RFC 5829 is interesting. > > Yeah, I mean why not, right? I know it was designed for content management, but representations that include forms really could use versioning like this too, so that rigid user agents can have time to adapt. They could bind against a specific version, whereas a more adaptive user agent could always bind against latest-version. > >> on a similar track, iv'e been toying w/ using the RFC 2119 >> words >> (MUST, MAY, etc.) as a way to decorate hypermedia elements >> in a >> "profile" that is consumable by agents. in theory (LOL) >> agents could >> compare their own "design-time profile" (indicating what >> that agent >> currently supports) with the "run-time profile" provided in >> the >> response (@profile in the document, profile param in media >> type, Link >> header, etc.). Mismatches could be handled by the >> agent as "stop", >> "warn", "ignore", etc. > > > Cool, reminds me of Telnet option negotiation: http://www.faqs.org/rfcs/rfc1143.html > > Cheers > Stu > > > > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
2011/2/22 Eric J. Bowman <eric@...>
> "It's the Architecture, Stupid!"
>
> Any ideas on what to call the opposite of this award? I have a first
> recipient in mind; I'll be spending some time this weekend checking out
> their architecture and highlighting its RESTful points...
>
> "Al Jazeera reported Web traffic to its site increased by 2,500 percent
> between Jan. 28 and Jan. 31, much of it from the United States."
>
> http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2011/02/20/INHD1HO6NG.DTL
>
> ...because, obviously, their architecture exhibits certain desirable
> properties we're targeting as REST developers.

How d'you know that? :) Just to know *what* they are implementing

> All the more impressive,
> considering those new visitors were mostly after video content. Is
> there even a term for something an order of magnitude greater than a
> mere slashdotting, in both scope and duration?
>
> I'd rather teach REST through positive reinforcement, by highlighting
> the rare site which doesn't collapse in a heap of smoking ruins when
> subjected to such massive, sustained traffic increases. I wish the
> linked article had cited a reference.

A really good point. +1

> -Eric

--
Nadalin Alessandro
www.odino.org
www.twitter.com/_odino_
Hi all,

I've just released a new Web API framework and I was hoping to solicit some constructive feedback on the project.

The full documentation is available here http://django-rest-framework.org, but as a quick overview...

* The default HTML media type emitter renders a really nice Django admin-styled API. Eg: http://api.django-rest-framework.org
* Encourages good REST-over-HTTP design - you'd have to really try hard to build an API with the framework that wasn't self-describing and well-connected.
* Architecturally I think it's a pretty solid effort - it tries to go with the Django grain wherever possible. (Although I guess that aspect might be outside the scope of this particular forum.)

The browse-able API is definitely the aspect I'm most pleased with, it supports full GET/PUT/POST/DELETE on resources, and works nicely both for generic resources, and for resources that are tied to forms and/or models. For example:

http://api.django-rest-framework.org/object-store/ (Generic resource)
http://api.django-rest-framework.org/pygments/ (Resource tied to a Django form)
http://api.django-rest-framework.org/blog-post/ (Resource tied to a Django model)

The 0.1 release went out last week, so obviously I still have lots of work I'd like to do. HEAD/OPTIONS/TRACE/PATCH, work on authentication & permissions, encouraging hyper-linked resources over nested resource representations, good caching support and a stack of others all spring to mind.

Some areas I'd appreciate input on would be...
* What do you think I've gotten right or wrong in the design?
* What are your thoughts on the admin-style browse-able API, which other frameworks do something similar and how does it compare?
* What do you think I should consider as priorities in further releases?
* How can I further encourage good REST-over-HTTP design with the framework?

Any and all thoughts would be most welcome!

Regards,

Tom
Any reason the response includes an Allow: GET and Vary for Allow? Is the server varying the representation by Allow? On Mar 2, 2011, at 4:18 AM, tomchristie0 wrote: > > > Hi all, > > I've just released a new Web API framework and I was hoping to solicit some constructive feedback on the project. > > The full documentation is available here http://django-rest-framework.org, but as a quick overview... > > * The default HTML media type emitter renders a really nice Django admin-styled API. Eg: http://api.django-rest-framework.org > * Encourages good REST-over-HTTP design - you'd have to really try hard to build an API with the framework that wasn't self-describing and well-connected. > * Architecturally I think it's a pretty solid effort - it tries to go with the Django grain wherever possible. (Although I guess that aspect might be outside the scope of this particular forum.) > > The browse-able API is definitely the aspect I'm most pleased with, it supports full GET/PUT/POST/DELETE on resources, and works nicely both for generic resources, and for resources that are tied to forms and/or models. For example: > > http://api.django-rest-framework.org/object-store/ (Generic resource) > http://api.django-rest-framework.org/pygments/ (Resource tied to a Django form) > http://api.django-rest-framework.org/blog-post/ (Resource tied to a Django model) > > The 0.1 release went out last week, so obviously I still have lots of work I'd like to do. HEAD/OPTIONS/TRACE/PATCH, work on authentication & permissions, encouraging hyper-linked resources over nested resource representations, good caching support and a stack of others all spring to mind. > > Some areas I'd appreciate input on would be... > > * What do you think I've gotten right or wrong in the design? > * What are your thoughts on the admin-style browse-able API, which other frameworks do something similar and how does it compare? > * What do you think I should consider as priorities in further releases? 
> * How can I further encourage good REST-over-HTTP design with the framework? > > Any and all thoughts would be most welcome! > > Regards, > > Tom > > >
Thanks for pointing that out. It's meant to be an Accept in the Vary header, not Allow. I've fixed the (glaring) bug now, and it'll get pushed to the example server and rolled into a fixed release in due course. On 3 March 2011 05:45, Subbu Allamaraju <subbu@...> wrote: > Any reason the response includes an Allow: GET and Vary for Allow? Is the > server varying the representation by Allow? > > On Mar 2, 2011, at 4:18 AM, tomchristie0 wrote: > > > > > > > Hi all, > > > > I've just released a new Web API framework and I was hoping to solicit > some constructive feedback on the project. > > > > The full documentation is available here > http://django-rest-framework.org, but as a quick overview... > > > > * The default HTML media type emitter renders a really nice Django > admin-styled API. Eg: http://api.django-rest-framework.org > > * Encourages good REST-over-HTTP design - you'd have to really try hard > to build an API with the framework that wasn't self-describing and > well-connected. > > * Architecturally I think it's a pretty solid effort - it tries to go > with the Django grain wherever possible. (Although I guess that aspect > might be outside the scope of this particular forum.) > > > > The browse-able API is definitely the aspect I'm most pleased with, it > supports full GET/PUT/POST/DELETE on resources, and works nicely both for > generic resources, and for resources that are tied to forms and/or models. > For example: > > > > http://api.django-rest-framework.org/object-store/ (Generic resource) > > http://api.django-rest-framework.org/pygments/ (Resource tied to a > Django form) > > http://api.django-rest-framework.org/blog-post/ (Resource tied to a > Django model) > > > > The 0.1 release went out last week, so obviously I still have lots of > work I'd like to do. 
HEAD/OPTIONS/TRACE/PATCH, work on authentication & > permissions, encouraging hyper-linked resources over nested resource > representations, good caching support and a stack of others all spring to > mind. > > > > Some areas I'd appreciate input on would be... > > > > * What do you think I've gotten right or wrong in the design? > > * What are your thoughts on the admin-style browse-able API, which other > frameworks do something similar and how does it compare? > > * What do you think I should consider as priorities in further releases? > > * How can I further encourage good REST-over-HTTP design with the > framework? > > > > Any and all thoughts would be most welcome! > > > > Regards, > > > > Tom > > > > > > > >
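For reference, the corrected behaviour in Tom's fix: when the server selects a representation based on the request's Accept header, the response should carry `Vary: Accept` so caches know to key on that header (`Vary: Allow` is meaningless, since Allow is a response header). A hypothetical sketch of such negotiation, not the framework's actual code:

```python
# Hypothetical sketch of content negotiation that correctly emits
# "Vary: Accept": the chosen representation depends on the request's
# Accept header, so caches must vary on it. Renderer names are invented.

RENDERERS = {"application/json": "json", "text/html": "html"}

def negotiate(accept_header):
    """Pick a renderer plus the cache-relevant response headers."""
    for media_type in accept_header.split(","):
        media_type = media_type.split(";")[0].strip()  # drop q-values
        if media_type in RENDERERS:
            return RENDERERS[media_type], {"Content-Type": media_type,
                                           "Vary": "Accept"}
    # fall back to a default representation; the choice still depended
    # on Accept, so the Vary header stays.
    return "html", {"Content-Type": "text/html", "Vary": "Accept"}
```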
Hi,

Validating forms submitted by users and reporting back validation errors is a common requirement in HTTP web applications. POST-REDIRECT-GET is a commonly used pattern for this. But I recently saw an interesting piece of code, which forced me to think again about the basics.

The code I am looking at is wrapping HttpServletRequest and overriding 'getMethod' to change the HTTP verb from POST to GET, then doing request forwarding. So for the forwarded resource it's a GET request, even though, from the browser, it was a POST request.

I see two problems with this:
1. The response is still given to the user agent as part of a POST request.
2. For the web container, even if getMethod is overridden, it's a POST request. (I do not know whether, for forwarded requests, web containers strictly use the wrapped request object and do not use any of the internal information for that request.)

The advantage of this method is that you do not need to persist information between redirects, so you do not need a session or other persistence mechanism.

Has anyone else seen this kind of idiom used in J2EE web applications?

Thanks,
Unmesh
None of this has any bearing on what is seen at the protocol level. At the protocol level, a POST is a POST. From your description it seems that this idiom is based on an incorrect understanding of the protocol. Subbu On Mar 7, 2011, at 12:07 AM, Unmesh Joshi wrote: > Hi, > > Validating forms submitted by users and reporting back validations > errors is a common requirement in HTTP web applications. > POST-REDIRECT-GET is a commonly used pattern for this. But I recently > saw an interesting piece of code, which forced me to think again about > the basics. > > The code I am looking at, is actually overriding HttpServletRequest > and oerriding 'getMethod' to change HTTP verb from POST to GET. then > doing request forwarding. > So for the forwarded resource its a GET request, even if from browser, > its was a POST request. > I see two problems with this, > 1. The response is still given to user agent as part of POST request. > 2. For web container, even if getMethod is overridden, its a POST > request. (I do not know, if in the forwarded requests, web containers > strictly use the request objects thats wrapped and do not use any of > the internal information for that request) > > The advantage of this method, is that you do not need to persist > information between redirects, So you do not need session or other > persistence mechanism. > > Anyone else has seem this kind of idiom used in J2EE web applications? > > Thanks, > Unmesh > > > ------------------------------------ > > Yahoo! Groups Links > > >
On Mar 7, 2011, at 4:07 PM, Subbu Allamaraju wrote: > On Mar 7, 2011, at 12:07 AM, Unmesh Joshi wrote: >> Validating forms submitted by users and reporting back validations >> errors is a common requirement in HTTP web applications. >> POST-REDIRECT-GET is a commonly used pattern for this. But I recently >> saw an interesting piece of code, which forced me to think again about >> the basics. >> >> The code I am looking at, is actually overriding HttpServletRequest >> and oerriding 'getMethod' to change HTTP verb from POST to GET. then >> doing request forwarding. >> So for the forwarded resource its a GET request, even if from browser, >> its was a POST request. >> I see two problems with this, >> 1. The response is still given to user agent as part of POST request. >> 2. For web container, even if getMethod is overridden, its a POST >> request. (I do not know, if in the forwarded requests, web containers >> strictly use the request objects thats wrapped and do not use any of >> the internal information for that request) >> >> The advantage of this method, is that you do not need to persist >> information between redirects, So you do not need session or other >> persistence mechanism. >> >> Anyone else has seem this kind of idiom used in J2EE web applications? > None of this has any bearing on what is seen at the protocol level. At the protocol level, a POST is a POST. > > From your description it seems that this idiom is based on an incorrect understanding of the protocol. I don't see how that is responsive to the question. The server is changing a POST request inside the service handler to be a GET request so that the handler can do some funky chicken dance that has specific behavior within the current JDK. This is essentially the same thing that Apache httpd does when an internal content handler needs to perform an internal redirect to obtain some part of the content, such as with server-side includes being embedded in the POST response. 
REST plays no part in this because it is all behind the resource interface provided by the servlet engine. There are no constraints on POST, so changing the method to a safer one (like GET) is certainly not going to violate any of the client's expectations regarding their POST. It may be weird. It may be unreliable over time. But it is not inherently for or against RESTful interaction with the client. The reason for doing content handling in this manner is usually so that the internal request contains the same authentication information as the original request, thereby avoiding some security issues during request handling. ....Roy
On Mar 7, 2011, at 7:00 PM, Roy T. Fielding wrote: > I don't see how that is responsive to the question. The server is > changing a POST request inside the service handler to be a GET > request so that the handler can do some funky chicken dance that > has specific behavior within the current JDK. > > This is essentially the same thing that Apache httpd does when > an internal content handler needs to perform an internal redirect > to obtain some part of the content, such as with server-side > includes being embedded in the POST response. > Yes. However, as PRG is a protocol level pattern, my intent was to say that internal redirects/forwards won't help avoid state transfer through redirect. I should've been more clear. Subbu
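Since internal forwards never change what is on the wire, the protocol-level POST-REDIRECT-GET pattern discussed above necessarily involves a real 303 redirect and a second, separate GET, with the validation errors carried across the two requests by some persistence mechanism. A hypothetical sketch (the handler shapes and the session store are invented for illustration):

```python
# Hypothetical sketch of protocol-level POST-REDIRECT-GET, as opposed
# to an internal forward: the redirect is visible to the user agent.

SESSIONS = {}  # stand-in for whatever persistence carries the errors

def handle_post(session_id, form):
    """Validate the form; on failure, stash errors and redirect (303)."""
    errors = [] if form.get("email") else ["email is required"]
    if errors:
        SESSIONS[session_id] = errors
        return 303, {"Location": "/form"}   # user agent re-requests with GET
    return 303, {"Location": "/thanks"}

def handle_get_form(session_id):
    """Render the form, including any errors left by the previous POST."""
    errors = SESSIONS.pop(session_id, [])
    return 200, {"errors": errors}
```

The cost Unmesh identifies is visible here: the errors must survive between the POST and the subsequent GET, which is exactly the persistence the forwarding trick tries to avoid.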
Hi all,
I was wondering what's the best practice for telling a client it
should send an If-Match header within its requests.
If it GETs resource A, with an ETag, and then tries to update it with
a PUT without including If-Match with the ETag, I would respond
with a 409 status code and a directive ("You must include an If-Match
header in PUT requests") in the response's body.
Is that ok?
Thanks to everyone,
--
Nadalin Alessandro
www.odino.org
www.twitter.com/_odino_
I find 403 more appropriate when the precondition is missing, and 412 when the precondition does not match.
<promo>
Please see http://my.safaribooksonline.com/book/web-development/web-services/9780596809140/conditional-requests/recipe-how-to-implement-conditional-put for some examples.
</promo>
Subbu
On Mar 15, 2011, at 3:08 PM, Alessandro Nadalin wrote:
> Hi all,
>
> I was wondering what's the best practice for telling a client he
> should send an If-Match header within its requests.
>
> If it GETs resource A, with an Etag, and then tries to update it with
> a PUT without including the If-Match with the Etag, I would respond
> with a 409 status code and a directive ("You must include an If-Match
> header in PUT requests") in the response's body.
>
> Is that ok?
>
> Thanks to everyone,
>
> --
> Nadalin Alessandro
> www.odino.org
> www.twitter.com/_odino_
>
>
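Subbu's split between a missing precondition (the client never attempted a conditional request) and a failed one (the client's ETag is stale) might look like this on the server. A hypothetical sketch:

```python
# Hypothetical sketch of Subbu's suggestion for a conditional PUT:
# 403 when If-Match is absent entirely, 412 when it is present but
# does not match the current entity tag.

def check_put(if_match, current_etag):
    """Return the error status to send, or None to apply the PUT."""
    if if_match is None:
        return 403          # client made no conditional request at all
    if if_match != "*" and if_match != current_etag:
        return 412          # stale or wrong ETag: Precondition Failed
    return None             # precondition holds; proceed with the update
```

Note that `If-Match: *` succeeds whenever the resource currently exists, which bears on the later question in this thread of what counts as an "unconditional" request.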
Hi Subbu,
2011/3/16 Subbu Allamaraju <subbu@...>:
> I find 403 more appropriate when the precondition is missing, and 412 when the precondition does not match.
good points. So when should we use the 409 code? I thought that the
only case when we should use it was when trying to update a resource
out of date.
>
> <promo>
> Pl see http://my.safaribooksonline.com/book/web-development/web-services/9780596809140/conditional-requests/recipe-how-to-implement-conditional-put for some examples.
> </promo>
;-)
thanks,
>
> Subbu
>
> On Mar 15, 2011, at 3:08 PM, Alessandro Nadalin wrote:
>
>> Hi all,
>>
>> I was wondering what's the best practice for telling a client he
>> should send an If-Match header within its requests.
>>
>> If it GETs resource A, with an Etag, and then tries to update it with
>> a PUT without including the If-Match with the Etag, I would respond
>> with a 409 status code and a directive ("You must include an If-Match
>> header in PUT requests") in the response's body.
>>
>> Is that ok?
>>
>> Thanks to everyone,
>>
>> --
>> Nadalin Alessandro
>> www.odino.org
>> www.twitter.com/_odino_
>>
>>
>
>
--
Nadalin Alessandro
www.odino.org
www.twitter.com/_odino_
A 409 may be useful for exposing implementation-specific contention issues that wouldn't be apparent at the protocol level. For example, you may encounter a failed DB transaction due to excessive contention where the resource's Last-Modified or ETag values didn't change; in this case, 412 (Precondition Failed) would not be appropriate.
Jon
........
Jon Moore
Comcast Interactive Media
From: Alessandro Nadalin <alessandro.nadalin@...>
Date: Wed, 16 Mar 2011 17:37:01 +0100
To: Subbu Allamaraju <subbu@...>
Cc: <rest-discuss@yahoogroups.com>
Subject: Re: [rest-discuss] The proper status code for conflicts
Hi Subbu,
2011/3/16 Subbu Allamaraju <subbu@...>:
> I find 403 more appropriate when the precondition is missing, and 412 when the precondition does not match.
good points. So when should we use the 409 code? I thought that the
only case when we should use it was when trying to update a resource
out of date.
>
> <promo>
> Pl see http://my.safaribooksonline.com/book/web-development/web-services/9780596809140/conditional-requests/recipe-how-to-implement-conditional-put for some examples.
> </promo>
;-)
thanks,
>
> Subbu
>
> On Mar 15, 2011, at 3:08 PM, Alessandro Nadalin wrote:
>
>> Hi all,
>>
>> I was wondering what's the best practice for telling a client he
>> should send an If-Match header within its requests.
>>
>> If it GETs resource A, with an Etag, and then tries to update it with
>> a PUT without including the If-Match with the Etag, I would respond
>> with a 409 status code and a directive ("You must include an If-Match
>> header in PUT requests") in the response's body.
>>
>> Is that ok?
>>
>> Thanks to everyone,
>>
>> --
>> Nadalin Alessandro
>> www.odino.org
>> www.twitter.com/_odino_
>>
>>
>
>
--
Nadalin Alessandro
www.odino.org
www.twitter.com/_odino_
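Jon's case, where the precondition holds (so 412 would be wrong) but the update still fails on implementation-level contention, might be sketched as follows; the commit callable and the exception are hypothetical stand-ins for a real transactional store:

```python
# Hypothetical sketch: the ETag precondition can pass and the update
# can still fail on contention, which is where 409 Conflict fits.

class ContentionError(Exception):
    """Stand-in for a DB transaction aborted under excessive contention."""

def conditional_update(if_match, current_etag, commit):
    """Check the precondition, then attempt the transactional commit."""
    if if_match != current_etag:
        return 412                  # the precondition actually failed
    try:
        commit()
        return 200
    except ContentionError:
        return 409                  # ETag was fine; the transaction wasn't
```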
How do you distinguish between a client that forgot to include if-match from one that intentionally left it off? That's rhetorical, BTW; you shouldn't. Both messages are perfectly valid HTTP. If the client screwed up and sent the wrong one, that's its own fault. Mark.
On Wed, Mar 16, 2011 at 1:33 PM, Mark Baker <distobj@...> wrote: > > > > How do you distinguish between a client that forgot to include if-match from one that intentionally left it off? > > That's rhetorical, BTW; you shouldn't. Both messages are perfectly valid HTTP. If the client screwed up and sent the wrong one, that's its own fault. > > Mark. He doesn't have to distinguish. All of HTTP doesn't necessarily apply to all resources. So, if this operation requires If-Match, it doesn't much matter whether the intent was innocent or purposeful -- it's invalid and he wants to reject it. Regards, Will Hartung (willh@...)
Hi guys,

Here's a small design issue I'm not sure how to resolve. Whenever a problem happens when resolving a URI on the server, if that problem happens before the entity body starts being written back to the client, no problem: the server can generate an error in whichever way is acceptable for the client. But in the case where an entity is already being written out, with a 200 already sent, how do you notify the client that you had an error while writing it?

I could inject the content of the error in the body, but then my response will be considered authoritative. Should I just close the connection abruptly instead? Or is there something I've not thought of?
There is not anything that the server can do since the response line is already on the wire. The client has to be smart enough to detect the failure while it is parsing the body. For instance, a DOM-based parser for an XML representation might fail, whereas a stream-based parser may be able to make partial sense of it.

Think of other cases where a server said it is going to send 'n' bytes for the body but could only send 'n-x', or the application code failed to trigger an end for the response, which may lead to the last chunk (the zero-sized chunk) missing from the response. In these cases, instead of trying to put some error indicator in the body (which may not be possible in some media types), it is better to trigger a Connection: close and drop the connection on the server side so that the client can retry on a fresh connection - provided the method is idempotent.

Subbu

On Mar 17, 2011, at 1:12 PM, Sebastien Lambla wrote:

> Hi guys,
>
> Here's a small design issues I'm not sure how to resolve. Whenever a problem happen when resolving a URI on the server, if that problem happens before the entity body starts being written back to the client, no problem, the server can generate an error in whichever way is acceptable for the client. In the case where an entity is already being written down, with a 200 already sent, how do you notify the client that you had an error while writing it?
>
> I could inject the content of the error in the body, but then my response will be considered authoritative. Should I just close the connection abruptly instead? Or something I've not thought of?
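The client-side detection Subbu mentions amounts to checking the body actually read against the framing the response declared. A hypothetical sketch (assumes response header names have already been lowercased):

```python
# Hypothetical sketch: once the status line is on the wire, a server can
# only signal failure by truncating, so the client verifies the body it
# read against the declared framing.

def body_complete(headers, body):
    """True if the raw body matches the response's declared framing."""
    if "content-length" in headers:
        # server promised 'n' bytes; anything less means truncation
        return len(body) == int(headers["content-length"])
    if headers.get("transfer-encoding") == "chunked":
        # a raw chunked stream must end with the zero-sized last chunk
        return body.endswith(b"0\r\n\r\n")
    return True  # no framing info: body is delimited by connection close
```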
Hi,

Had a couple of quick doubts regarding REST API design. Consider a micro-blogging application like Twitter where there are two users, A and B, and A wants to follow B. I was wondering what is the right way to design it?

1. POST /B/followers (send A's userid in the body)
2. PUT /B/followers/A
3. POST /A/friends (send B's userid in the body)
4. PUT /A/friends/B
5. POST /relationships (send A's and B's userids in the body)

Q1. Are there any guidelines on when something like a relation can be elevated to be a separate resource by itself?

Q2. I also learnt that URIs should be completely opaque from the client's perspective. So does this even matter when we have proper media types and link relations?

Thanks,
Viswanath
> Q1. Are there any guidelines when something like a relation can be
> elevated to be a separate resource by itself?

I know of no guidelines in this area. The REST style does not address this directly. Subbu Allamaraju's book "RESTful Web Services Cookbook" has some handy recipes for identifying resources in a design [1].

> Q2. I also learnt that URI's should be completely opaque from the
> client's perspective. So does this even matter when we have proper
> media type and link relations?

Yes, URIs (from the client perspective) can be treated as opaque. Usually that means clients need some other way of identifying URIs, and the @rel is a good candidate for this. A while back, Roy Fielding wrote a blog post that sets out some of his "rules" for Web APIs[2]. It might help you approach your own design.

Finally, I recently started an experiment to define a micro-blog implementation that attempts to follow Fielding's suggested rules[3]. This implementation has no URIs, no resource names, etc. This may also give you some ideas.

[1] http://www.restful-webservices-cookbook.org/table-of-contents/
[2] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
[3] http://amundsen.com/hypermedia/profiles/

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010
http://rest-fest.googlecode.com

On Fri, Mar 18, 2011 at 05:38, Viswanath Durbha <viswanath.durbha@...> wrote:
> Hi,
>
> Had a couple of quick doubts regarding REST API design. Consider a
> micro-blogging application like twitter where there are two users A
> and B. A wants to follow B. I was wondering what is the right way to
> design it?
>
> 1. POST /B/followers (Send A's userid in the body)
> 2. PUT /B/followers/A
> 3. POST /A/friends (Send B's userid in the body)
> 4. PUT /A/friends/B
> 5. POST /relationships (Send A and B's userid in the body)
>
> Thanks,
> Viswanath
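Treating URIs as opaque in practice means the client never constructs "/A/friends" itself; it selects a link by its @rel from a retrieved representation. A hypothetical sketch (the representation shape is invented for illustration):

```python
# Hypothetical sketch: a client that navigates by link relation rather
# than by constructing URI paths.

def link_by_rel(representation, rel):
    """Return the href of the first link with the given @rel, or None."""
    for link in representation.get("links", []):
        if link.get("rel") == rel:
            return link.get("href")
    return None

# a retrieved representation of user A; the hrefs are opaque to us
user_a = {"links": [{"rel": "self", "href": "/A"},
                    {"rel": "friends", "href": "/A/friends"}]}
```

If the server later moves the friends collection, only the href in the representation changes; a rel-driven client needs no update.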
On Wed, Mar 16, 2011 at 3:40 PM, Will Hartung <willh@...> wrote: > On Wed, Mar 16, 2011 at 1:33 PM, Mark Baker <distobj@...> wrote: >> >> >> >> How do you distinguish between a client that forgot to include if-match from one that intentionally left it off? >> >> That's rhetorical, BTW; you shouldn't. Both messages are perfectly valid HTTP. If the client screwed up and sent the wrong one, that's its own fault. >> >> Mark. > > He doesn't have to distinguish. All of HTTP doesn't necessarily apply > to all resources. So, if this operation requires If-Match, it doesn't > much matter whether the intent was innocent or purposeful -- it's > invalid and he wants to reject it. It seems you missed his 2nd paragraph. That said, I personally consider it bad practice to reject unconditional requests. Mark.
On Sat, Mar 19, 2011 at 3:10 AM, Mark Baker <distobj@...> wrote: > > That said, I personally consider it bad practice to reject > unconditional requests. > > Note the ambiguous use of the word "unconditional". I would call "If-Match:*" an unconditional request too ;-) I disagree, but I see your point. If the server exposed an ETag in a modifiable resource, I would consider it bad form if the client doesn't keep track of the ETag and echo it back together with the request entity. The absence of an If-Match indicates that the client isn't keeping track of ETags and probably doesn't know what If-Match is. The presence of an If-Match indicates that the client _is_ keeping track of ETags, and would therefore send an If-Match:* to get the same effect. Hmm. You changed my mind, I now agree with you and now I have to go and fix some servers :-/ -- -mogsie-
I'm currently building a little REST interface to Twitter, so this is interesting to me.
> Had a couple of quick doubts regarding REST API design. Consider a
> micro-blogging application like twitter where there are two users A
> and B. A wants to follow B. I was wondering what is the right way to
> design it?
>
> 1. POST /B/followers (Send A's userid in the body)
> 2. PUT /B/followers/A
> 3. POST /A/friends (Send B's userid in the body)
> 4. PUT /A/friends/B
> 5. POST /relationships (Send A and B's userid in the body)
>
Presumably the situation is that a Twitter client, run by A, wants to tell a Twitter "service/API" to record A's desire to follow B. So this client has 'authority' over A stuff, but not anything else; not anything about B.
So it looks like "3. POST /A/friends (Send B's userid in the body)" is the closest to what you need. The A-driven client expresses an intention to create a new friendship with B, via a resource it has some authority over, at least logically. POST is most often used for create in "REST APIs". You don't PUT to a URL you made up yourself, as the server manages all that, and what data are you PUTting? "true"? This isn't a real resource - it's artificial! Finally, /relationships seems over-engineered or too vague to be implemented cleanly.
> Q2. I also learnt that URI's should be completely opaque from the
> client's perspective.
Yes: the client knows just that the URL '/A/friends' is its own collection of friends - it doesn't construct this URL, it GETted it from the server at some point, perhaps when caching A-related structures. Here's a hugely sketchy interaction:
GET /A
{ .. friends: "/A/friends", ..}
GET /A/friends
[ /M, /F, .. ]
POST /A/friends
/B
GET /A/friends
[ /B, /M, /F, .. ]
Notice that the /A/friends list is a list of actual (opaque!) URLs representing those users, and we POSTed B's URL, not B's userid - we presumably fetched /B before deciding they were worthy of our friendship.
This is better than "PUT /A/friends [ /B, /M, /F, .. ]", as it allows you later to parameterise your friendships - if Twitter supports that:
POST /A/friends
{ user: /B, relationship: brother, rating: 5, last-chat: Mon 15th, .. }
Created
Content-Location: /A/B-friendship
GET /A/friends
[ /A/B-friendship, /A/M-friendship, /A/F-friendship, .. ]
PUT /A/B-friendship
:
etc.
Cheers!
Duncan
So, what if you want to un-friend 'B' in the "POST /A/friends /B" approach?

You first need to understand the POST body as a declaration of your intention - here's a slight adjustment to clarify that:

GET /A/friends
[ /M, /F, .. ]

POST /A/friends
friend /B

GET /A/friends
[ /B, /M, /F, .. ]

Note the change to "friend /B".

Note also that this is declarative and idempotent, where the imperative "add /B" approach could potentially cause repeated /Bs if the POST were repeated.

You can now say:

POST /A/friends
unfriend /B

to declare that you intend B to be missing now from your friends list.

These examples are sketches, and the Media Type for the POST body should of course be widely recognised, which means the form type, or JSON, or whatever your collaborators are happy with. I don't know of any standards that can be re-used to do 'friend' and 'unfriend' - you may have to drop down to less semantic data-editing formats.

Again, just "PUT /A/friends [ /M, /F, .. ]" would work, but that's a pattern that really only applies to smaller lists that you have control over, rather than large lists or lists that are changing a lot.

Or do the alternative I suggested, of a new resource parameterising your friendship, which can be DELETEd.

Cheers!
Duncan
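The declarative/idempotent point above can be made concrete with a tiny server-side handler. This is a sketch in Python; the "friend <url>" / "unfriend <url>" body format is the hypothetical one from the post, not a registered media type, and the function name is mine:

```python
def apply_intent(friends, body):
    """Apply a declarative POST body ("friend <url>" or "unfriend <url>")
    to a set of friend URLs. Idempotent: replaying the same body leaves
    the set unchanged, unlike an imperative "add /B" that could create
    repeated /Bs on retry."""
    verb, _, target = body.strip().partition(" ")
    if verb == "friend":
        friends.add(target)        # re-adding an existing member is a no-op
    elif verb == "unfriend":
        friends.discard(target)    # removing an absent member is a no-op
    else:
        raise ValueError("unrecognised intent: %r" % verb)
    return friends
```

A client (or proxy) can therefore safely retry the POST after a timeout, which is exactly the property the imperative reading lacks.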
Thanks for the thoughtful and detailed responses.

I actually like the new resource that parameterizes the friendship. That way, as you said, I can just DELETE the resource and it will be idempotent as well.

By the way, the twitter REST API you're trying to build will have to use the official "REST" API of twitter at the backend, right? It feels a little odd to create an API on top of another API for the same purpose, as I'm also trying to do something like that.

On Mon, Mar 21, 2011 at 11:52 PM, Duncan <rest-discuss@cilux.org> wrote:
> So, what if you want to un-friend 'B' in the "POST /A/friends /B" approach? [...]
just a side note, you could always use activity streams too..

Viswanath Durbha wrote:
> Thanks for the thoughtful and detailed responses.
>
> I actually like the new resource that parameterizes the friendship. That
> way, as you said, I can just DELETE the resource and it will be idempotent
> as well. [...]
Hi,

I'm implementing some REST services with XML or JSON requests/responses. I would like to define the schemas of the services with WADL, but there is a problem: I don't know how to define a JSON structure in WADL. There's no problem with the XML format:

mediaType: Indicates the media type of the representation. Media ranges (e.g. text/*) are acceptable and indicate that any media type in the specified range is supported.

element: For XML-based representations, specifies the qualified name of the root element.

Does anybody know if I can define a JSON schema with WADL? Or is there an alternative to WADL that can define both JSON and XML services?

Thanks
Dear REST enthusiasts,

I am a PhD student at Multimedia Lab, where we develop multimedia analysis algorithms. One of our aims is to deploy them as web services, and of course, we want to do that in a RESTful way. That's why I'd love to have your opinion.

Simple example: a service which takes a photograph as input and outputs the detected faces. The input could be "http://example.org/images/people" (a JPEG image) and the output "one face in the rectangle 204,36,10,10 and another in the rectangle 38,56,12,12".

A very cool way would of course be to just GET http://example.org/images/people/faces, which then returns an RDF list. And even HTTP Link headers to http://example.org/images/people/faces/1 and http://example.org/images/people/faces/2. Or maybe HTTP Link headers to http://example.org/images/people#xywh=204,36,10,10 (see [1]) with a rel of "http://ontology.org/face".

Unfortunately, we assume http://example.org does not know how to detect faces. So, there is this face detection service at http://other.com/. How to invoke it? GETting http://other.org/detectfaces/http://images/people does not seem all that nice, since that is totally not resource oriented. We could first POST the image to http://other.org/images and then access the face detection service by GETting http://other.org/images/34/faces. However, this involves two calls to other.org.

Those were basically my ideas. Do you have any ideas that could shed a light on this? Thanks in advance!

[1] http://www.w3.org/TR/media-frags/

Cheers,
Ruben Verborgh

--
Ghent University - IBBT
Faculty of Engineering and Architecture
Department of Electronics and Information Systems (ELIS)
Multimedia Lab
Gaston Crommenlaan 8 bus 201, B-9050 Ledeberg-Ghent, Belgium
t: +32 9 33 14959  f: +32 9 33 14896  t secr: +32 9 33 14911
e: ruben.verborgh@...
URL: http://multimedialab.elis.ugent.be
On 04/08/2011 01:44 PM, ruben.verborgh wrote:
> We could first POST the image to http://other.org/images and then access
> the face detection service by GETting http://other.org/images/34/faces.
> However, this involves two calls to other.org.

This is a RESTful way to do it, assuming the POST returns 201 with a link to the .../34/faces (IMHO).

Marek
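The two-call flow Marek endorses (POST the image, get 201 with a link, then GET the derived faces resource) can be sketched with an in-memory stand-in for the server. The class name, URI layout, and detector callback below are illustrative assumptions, not anything from other.org:

```python
import itertools

class FaceService:
    """In-memory sketch of the two-call flow: POST an image to /images,
    receive 201 Created with a Location and a Link to the faces resource,
    then GET that resource for the detection result."""

    def __init__(self, detector):
        self._ids = itertools.count(1)
        self._images = {}
        self._detector = detector  # callable: image bytes -> list of rectangles

    def post_image(self, image_bytes):
        image_id = next(self._ids)
        self._images[image_id] = image_bytes
        # 201 Created, with a link the client can follow for the result
        return 201, {"Location": "/images/%d" % image_id,
                     "Link": '</images/%d/faces>; rel="faces"' % image_id}

    def get_faces(self, image_id):
        if image_id not in self._images:
            return 404, None
        return 200, self._detector(self._images[image_id])
```

If detection is slow, `post_image` could instead return 202 with a status link, as mike amundsen suggests later in the thread.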
I've been pondering whether or not it is OK to use PUT for a partial update of a resource. When I look at the HTTP spec, it says "The PUT method requests that the enclosed entity be stored under the supplied Request-URI." This seems a little loosey-goosey to me, as the entity is a representation and the Request-URI identifies a resource, so the thing I create (the resource) is logically different from any one particular representation of it.

To illustrate this, suppose my media type allows an optional create-timestamp field and my server supplies the value using its own clock if the resource does not already exist. I might want to allow PUT operations to do updates, but not allow them to include the create-timestamp. In fact, I might explicitly define in the media type that PUT requests will be rejected if they include the create-timestamp.

I don't think this violates any requirement for PUT that I can find. Specifically, I don't see anything that requires that if I PUT representation X to resource Y and follow it with a GET, I have to get back the same representation. In fact, I don't think anything requires that if I use PUT I have to support GET at all, let alone with the same media type, let alone with the same representation provided by PUT. All that I see required is that PUT is idempotent and that the enclosed entity be stored under the supplied Request-URI, which doesn't seem (to me) to imply that it must be all that is stored. I would think that each media type can define the semantics of PUT in terms of how resource state is affected.

If the above create-timestamp is OK, then why not PUT for wholesale partial updates? If I create a car resource with an engine within its state and then PUT a new engine, why can I not expect to store this engine under the original car resource, so that a subsequent GET returns the car?
Sam Ruby, in his blog http://intertwingly.net/blog/2008/02/15/Embrace-Extend-then-Innovate, wrote: "Having some servers interpret the removal of elements (such as content) as a modification, and others interpret the requests in such a way that elided elements are to be left alone is hardly uniform or self-descriptive." I think this is easily disposed of: it isn't the server that interprets what to do; it should be defined by the media type.

What do other people think of PUT for partial updates?
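Bryan's create-timestamp scenario can be sketched as a merge-style PUT handler. This is only an illustration of the reading he argues for (merge the submitted representation into resource state, rejecting server-controlled fields); the function name, status codes, and field names are my assumptions:

```python
def put_update(resource_state, representation,
               server_controlled=("create-timestamp",)):
    """Sketch of the PUT semantics argued for above: merge the submitted
    representation into the resource's state. The media type declares
    some fields server-controlled; a PUT supplying one is rejected."""
    if any(field in representation for field in server_controlled):
        return 400, resource_state       # media type forbids these fields
    new_state = dict(resource_state)
    new_state.update(representation)     # fields absent from the PUT survive
    return 200, new_state
```

Note the handler is still idempotent: repeating the same PUT yields the same state, which is all RFC 2616 strictly demands.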
Why not inline the image in a base64-encoded way? (I didn't read the whole thread; maybe I missed the justification.)

> -----Original Message-----
> From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Marek Potociar
> Sent: Tuesday, 12 April 2011 18:08
> Subject: Re: [rest-discuss] opinions needed — multimedia algorithms the REST way
>
> This is a RESTful way to do it, assuming the post returns 201 with a
> link to the .../34/faces (IMHO).
>
> Marek
I would use a representation body that supports a list of one or more files, either base64-encoded or as a fully-qualified link to the file (HTML FORMS does this nicely).

If you support both styles in a single state transfer, then even clients that do not support base64 encoding (or cannot access local disk resources due to rights restrictions) will be able to supply the fully-qualified link.

Then write a server that accepts the representation, processes the files (saves the base64 or navigates to the URI and saves that data), handles the recognition tasks, and generates one or more new resources from the uploaded data.

If the processing takes some time, the server can return 202 w/ a link the client can use to check on the progress of the server's work. If the processing is relatively quick, the server can return 201 with a location that points to the resulting resource created by the server. (If more than one resource is created, I'd also create a single "top-level" resource that represents a set of links to all the other resources that were created by the client request.)

BTW - There is nothing "REST-y" here; just basic HTTP stuff.

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010
http://rest-fest.googlecode.com

On Tue, Apr 12, 2011 at 15:43, Markus KARG <markus@headcrashing.eu> wrote:
> Why not inlining the image in a base64 encoded way? (didn't read the whole thread, maybe I missed the justification) [...]
Bryan:
Regarding partial updates over HTTP, I do the following:
1) When I want to replace an existing resource, I use PUT + ETag on
the URI of the existing resource.
2) When I want to amend/repair an existing resource, I use PATCH +
ETag on the URI of the existing resource.
PUT
In the case of #1, the entity body I transfer contains a
representation that is a "whole" representation of the existing
resource. It may not have all the fields in the existing resource
(e.g. the resource id, the creation date, or other server-generated
values) but it is usually pretty easy to see that this entity
represents a resource. The media types I use for these entities are
often application/x-www-form-urlencoded, but I sometimes use
application/xml, application/{my-media-type}+xml, or JSON
representations, etc.
PATCH
In the case of #2, the entity body I transfer contains a
representation of a "DIFF" format that has instructions on how to
partially modify the existing resource. It is, essentially, a bag of
instructions on how to handle a modification of select elements of the
resource. It doesn't look anything like the existing resource. I
currently use a simple XML format as the custom media type for this
work. I use this XML diff format without regard to the native (or
negotiated) media type used for responses for the target resource.
I've also used the PATCH w/ DIFF pattern to apply to a set of
resources on the target server (e.g. "replace discount rate on all
twelve instances of product X in the catalog", etc.).
In cases where the target server does not support the PATCH verb (but
_does_ support the application/diff+xml media type), the server will
support POST + checksum on a special URI (/products/;patch). This can
be less desirable cache-wise, but works just fine for most cases.
FINAL REMARKS
I keep these two separate (PUT=replace, PATCH=amend) primarily to make
it easier for devs to know what to expect for each case. When you want
to amend a few items for an existing resource, use PATCH and the DIFF
format. When you want to simply replace an existing resource, just use
PUT and the usual entity formats.
Hopefully this gives you some ideas on how to handle your cases.
mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2010
http://rest-fest.googlecode.com
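mca's "bag of instructions" PATCH body can be sketched as a pure function over resource state. The (op, field, value) tuple format below is my stand-in for illustration only, not his actual application/diff+xml media type:

```python
def apply_diff(resource, instructions):
    """Apply a PATCH-style bag of instructions to a resource's state,
    in the spirit of the DIFF format described above. Each instruction
    is an (op, field, value) tuple; unknown ops are rejected."""
    patched = dict(resource)                 # leave the original untouched
    for op, field, value in instructions:
        if op == "set":
            patched[field] = value
        elif op == "remove":
            patched.pop(field, None)
        else:
            raise ValueError("unknown op: %r" % op)
    return patched
```

Applied across a collection, the same function covers the "replace discount rate on all twelve instances of product X" case mca mentions.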
On Tue, Apr 12, 2011 at 14:30, Bryan Taylor <bryan_w_taylor@...> wrote:
> I've been pondering whether or not it is OK to use PUT for a partial update of a resource. [...]
This seems like a reasonable approach once you accept that you want to allow
PATCH. In my development organization, we've defined our uniform interface to be
the HTTP methods of RFC 2616, so PATCH isn't an option for me.
If you read the PATCH RFC, it starts out with the assertion "The PUT method is
already defined to overwrite a resource with a complete new body, and cannot be
reused to do partial changes." I'm not exactly sure why they believe the
"complete new body" part, and I don't think this even makes sense. The body of a
PUT is a representation, not a resource, so I don't know what it means to
overwrite a resource with a representation. Recall Fielding's definition of a
resource as a function that defines a time varying set of representations. I
don't see anything in section 9.6 of RFC 2616 that suggests that the body of the
PUT contains the "complete" resource state. It says "The PUT method requests
that the enclosed entity be stored under the supplied Request-URI." I note it
says "under" and not "as". I think PUT could be re-expressed as "take the
submitted representation and make the corresponding state of the resource
match".
Consider a resource that offers two representations, one in text and one in
SVG. The SVG is a drawing and includes some text as a caption within the SVG
drawing. The text representation is just the caption. Why can I not PUT a new
text representation and expect it to change the caption? This illustrates that
GET returns a representation in a single negotiated media type, which is the
value of projecting the resource onto that media type. I see no reason to
interpret PUT any differently than GET in this regard.
----- Original Message ----
From: mike amundsen <mamund@...>
To: Bryan Taylor <bryan_w_taylor@...>
Cc: rest-discuss@yahoogroups.com
Sent: Tue, April 12, 2011 5:58:22 PM
Subject: Re: [rest-discuss] PUT for partial update of an existing resource
[...]
Hi all,

Thanks for the replies. This domain is very new to me, so I'm eager to learn.

The reason I would not use a base64-encoded body is that I'd lose the resource-oriented way of doing things. Or am I wrong here? I imagine such a request would go to http://example.com/facedetection, which is not really a resource. (Problem? Not a problem?) Or unless we see the algorithms themselves as resources, and go for http://example.com/algorithms/facedetection. No?

I like the idea of 202, as some algorithms do indeed take some time.

Ruben

PS: Why do you consider this not "REST-y" but basic HTTP stuff? I thought REST was about using basic HTTP to do things :)

On 12 Apr 2011, at 22:19, mike amundsen wrote:
> I would use a representation body that supports a list of one or more
> files in either base64-encoded or a fully-qualified link to the file
> (HTML FORMS does this nicely). [...]
No.

http://trac.tools.ietf.org/wg/httpbis/trac/changeset/1158#file1

....Roy
On 13 Apr 2011, at 06:53 AM, Bryan Taylor <bryan_w_taylor@...> wrote:
>
> This seems like a reasonable approach once you accept that you want to allow
> PATCH. In my development organization, we've defined our uniform interface to be
> the HTTP methods of RFC 2616, 
Out of curiosity: why?
> so PATCH isn't an option for me.
Then use POST.
Jan
>
>
> If you read the PATCH RFC, it starts out with the assertion "The PUT method is
> already defined to overwrite a resource with a complete new body, and cannot be
> reused to do partial changes." I'm not exactly sure why they believe the
> "complete new body" part, and I don't think this even makes sense. The body of a
> PUT is a representation, not a resource, so I don't know what it means to
> overwrite a resource with a representation. Recall Fielding's definition of a
> resource as a function that defines a time varying set of representations. I
> don't see anything in section 9.6 of RFC 2616 that suggests that the body of the
> PUT contains the "complete" resource state. It says "The PUT method requests
> that the enclosed entity be stored under the supplied Request-URI." I note it
> says "under" and not "as". I think PUT could be re-expressed as "take the
> submitted representation and make the corresponding state of the resource
> match".
>
>
> Consider a resource that offers two representations, one in text and one in
> SVG. The SVG is drawing and includes some text as a caption within the SVG
> drawing. The text representation is just the caption. Why can I not PUT a new
> text representation and expect it to change the caption? This illustrates that
> GET returns a representation in a single negotiated media type, which is a the
> value of projecting the resource onto that media type. I see no reason to
> interpret PUT any differently than GET in this regard.
>
>
>
> ----- Original Message ----
> From: mike amundsen <mamund@...>
> To: Bryan Taylor <bryan_w_taylor@...>
> Cc: rest-discuss@yahoogroups.com
> Sent: Tue, April 12, 2011 5:58:22 PM
> Subject: Re: [rest-discuss] PUT for partial update of an existing resource
>
> Bryan:
>
> Regarding partial updates over HTTP, I do the following:
>
> 1) When I want to replace an existing resource, I use PUT + ETag on
> the URI of the existing resource.
> 2) When I want to amend/repair an existing resource, I use PATCH +
> ETag on the URI of the existing resource.
>
> PUT
> In the case of #1, the entity body I transfer contains a
> representation that is a "whole" representation of the existing
> resource. It may not have all the fields in the existing resource
> (e.g. the resource id, the creation date, or other server-generated
> values) but it is usually pretty easy to see that this entity
> represents a resource. The media types I use for these entities is
> often application/x-www-form-urlencoded, but I sometimes use
> application/xml, application/{my-media-type}+xml, or JSON
> representations, etc.
>
> PATCH
> In the case of #2, the entity body I transfer contains a
> representation of a "DIFF" format that has instructions on how to
> partially modify the existing resource. It is, essentially, a bag of
> instructions on how to handle a modification of select elements of the
> resource. It doesn't look anything like the existing resource. I
> currently use a simple XML format as the custom media type for this
> work. I use this XML diff format without regard to the native (or
> negotiated) media type used for responses for the target resource.
>
> I've also used the PATCH w/ DIFF pattern to apply to a set of
> resources on the target server (e.g. "replace discount rate on all
> twelve instances of product X in the catalog", etc.).
>
> In cases where the target server does not support the PATCH verb (but
> _does_ support the application/diff+xml media type), the server will
> support POST + checksum on a special URI (/products/;patch). This can
> be less desirable cache-wise, but works just fine for most cases.
>
> FINAL REMARKS
> I keep these two separate (PUT=replace, PATCH=amend) primarily to make
> it easier for devs to know what to expect for each case. When you want
> to amend a few items for an existing resource, use PATCH and the DIFF
> format. When you want to simply replace an existing resource, just use
> PUT and the usual entity formats.
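The PUT=replace / PATCH=amend split described above can be sketched in a few lines. This is an illustrative in-memory model; the resource, URI, and diff format are invented for the example and are not mca's actual XML diff media type:

```python
# Hypothetical in-memory sketch of PUT (replace) vs. PATCH (amend).

resources = {"/products/1": {"name": "Widget", "price": 10, "discount": 0}}

def put(uri, representation):
    """Replace: the entity is a 'whole' representation of the resource."""
    resources[uri] = dict(representation)

def patch(uri, diff):
    """Amend: the entity is a bag of instructions, not a look-alike of the resource."""
    for op, field, value in diff:
        if op == "set":
            resources[uri][field] = value
        elif op == "remove":
            resources[uri].pop(field, None)

put("/products/1", {"name": "Widget", "price": 12})   # whole replacement; 'discount' is gone
patch("/products/1", [("set", "discount", 3)])        # partial amendment; other fields survive
```

Note how the PUT drops the `discount` field entirely (replacement semantics), while the PATCH touches only the field its instruction names.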
>
> Hopefully this gives you some ideas on how to handle your cases.
>
> mca
> http://amundsen.com/blog/
> http://twitter.com@mamund
> http://mamund.com/foaf.rdf#me
>
>
> #RESTFest 2010
> http://rest-fest.googlecode.com
>
>
>
>
> On Tue, Apr 12, 2011 at 14:30, Bryan Taylor <bryan_w_taylor@...> wrote:
> >
> > I've been pondering whether or not it is OK to use PUT for a partial update of a
> > resource. When I look at the HTTP spec it says "The PUT method requests that the
> > enclosed entity be stored under the supplied Request-URI." This seems to be a
> > little loosey-goosey to me, as the entity is a representation and the Request-URI
> > identifies a resource and so the thing I create (the resource) is logically
> > different than one particular representation of it. To illustrate this, suppose
> > my media type allows an optional create-timestamp field and my server supplies
> > the value using its own clock if the resource does not already exist. I might
> > want to allow PUT operations to do updates, but not allow them to include the
> > create-timestamp. In fact, I might explicitly define in the media type that PUT
> > requests will be rejected if they include the create-timestamp.
> >
> > I don't think this violates any requirement for PUT that I can find.
> > Specifically, I don't think I see anything that requires that if I put
> > representation X to resource Y and follow it by a GET that I have to get back
> > the same representation. In fact, I don't think anything requires that if I use
> > PUT I have to support GET at all, let alone with the same media type, let alone
> > with the same representation provided by PUT. All that I see required is that
> > PUT is idempotent and that the enclosed entity be stored under the supplied
> > Request-URI, which doesn't seem (to me) to imply that it must be all that is
> > stored. I would think that each media type can define the semantics of PUT in
> > terms of how resource state is affected.
> >
> > If the above create-timestamp is OK, then why not PUT for wholesale partial
> > updates? If I create a car resource with an engine within its state and then I
> > PUT a new engine why can I not expect to store this engine under the original
> > car resource, so that a subsequent GET returns the car?
> >
> > Sam Ruby in his blog
> > http://intertwingly.net/blog/2008/02/15/Embrace-Extend-then-Innovate wrote
> > "Having some servers interpret the removal of elements (such as content) as a
> > modification, and others interpret the requests in such a way that elided
> > elements are to be left alone is hardly uniform or self-descriptive." I think
> > this is easily disposed of. It isn't the server that interprets what to do, it
> > should be defined by the media type.
> >
> >
> > What do other people think of PUT for partial updates?
> >
> >
> >
> > ------------------------------------
> >
> > Yahoo! Groups Links
> >
> >
> >
> >
>
>
>
On 04/12/2011 10:19 PM, mike amundsen wrote:
>
> BTW - There is nothing "REST-y" here; just basic HTTP stuff.

...using HTTP correctly is all about being REST-y :) After all, HTTP 1.1 was developed with the REST architectural style in mind.

Marek
On Wed, Apr 13, 2011 at 7:19 AM, algermissen1971 <algermissen1971@...> wrote:
>
> On 13 Apr, 2011, at 06:53 AM, Bryan Taylor <bryan_w_taylor@...> wrote:
>
>> This seems like a reasonable approach once you accept that you want to allow
>> PATCH. In my development organization, we've defined our uniform interface to be
>> the HTTP methods of RFC 2616,
>
> Out of curiosity: why?
> Keeping down barriers to adoption for clients?
>
>> so PATCH isn't an option for me.
>
> Then use POST.

Both of those options (PATCH and POST) are non-idempotent, and therefore not equivalent.

There's very little infrastructure that relies on this "full PUT" requirement. In contrast, there are real-world applications in the wild that implement partial PUT, and they seem to work ok.

What would be the issue(s) with respecifying PUT as simply covering non-safe, idempotent requests that aren't DELETE?

Cheers,
Mike
That doesn't really clarify, for me, the value of that (over?) specification: "A successful PUT of a given representation would suggest that a subsequent GET on that same target resource will result in an equivalent representation being returned in a 200 (OK) response." .. starts to hint at a reason, but then immediately invalidates itself with: "However, there is no guarantee that such a state change will be observable" What is the point in making something that's non-observable by definition visible in the system? Cheers, Mike On Wed, Apr 13, 2011 at 6:47 AM, Roy T. Fielding <fielding@...> wrote: > No. > > http://trac.tools.ietf.org/wg/httpbis/trac/changeset/1158#file1 > > ....Roy > > > ------------------------------------ > > Yahoo! Groups Links > > > >
Marek:

<snip>
>> BTW - There is nothing "REST-y" here; just basic HTTP stuff.
>
> ...using HTTP correctly is all about being REST-y :) After all, HTTP 1.1 was developed with REST architecture style in mind.
</snip>

There are lots of ways to use the HTTP transfer protocol correctly w/o adopting Fielding's REST architectural style; WebDAV is one standardized example. When the question is of the type "What's the best way to model this interaction...?" that does not _automatically_ rise to the level of a distributed network architectural issue.

Granted we talk quite a bit on this list about the HTTP protocol; nothing bad there. However, it is important to keep in mind that Fielding's style (like other architectural styles) is not only applicable to HTTP.

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me

#RESTFest 2010
http://rest-fest.googlecode.com

On Wed, Apr 13, 2011 at 05:13, Marek Potociar <marek.potociar@...> wrote:
> On 04/12/2011 10:19 PM, mike amundsen wrote:
>>
>> BTW - There is nothing "REST-y" here; just basic HTTP stuff.
>
> ...using HTTP correctly is all about being REST-y :) After all, HTTP 1.1 was developed with REST architecture style in mind.
>
> Marek
On 04/13/2011 02:42 PM, mike amundsen wrote:
> When the question is of the type "What's the best way to model this
> interaction...?" that does not _automatically_ rise to the level of a
> distributed network architectural issue.

The original question was about the RESTful way of modeling the interaction using HTTP, not about the best way to model such an interaction. For the latter we don't have enough information anyway IMHO.

> Granted we talk quite a bit on this list about the HTTP protocol;
> nothing bad there. However, it is important to keep in mind Fielding's
> style (like other architectural styles) is not only applicable to
> HTTP.

I cannot remember saying anything that would suggest that REST applies only to HTTP. I did say that HTTP 1.1 was designed with the REST architecture style in mind, but that's a reverse implication. I certainly agree with you here.

Marek
Hi,

I'm searching for a REST API description language that can validate both XML and JSON. One alternative (maybe the most widespread) is WADL. It can validate XML data with 'xsd' files referenced in the 'grammars' element, but there is a problem with JSON validation: how can it validate JSON data too? How can I validate a request to a WS that can be either XML or JSON? Is there any alternative better than WADL?

Thanks
> The reason I would not use a base64-encoded body, is that I'd lose the
> resource oriented way of doing things.
> Or am I wrong here?

In part. It depends on your business domain definition. If the photo cannot live standalone (i.e. it is a "real" child of some outer resource, like the photo on your driving licence cannot live without the licence card it is glued upon), then you can safely transfer it as base64 within the resource. But if you have lots of photos in a container and dynamically replace them amongst containing resources, then you must use references (URIs) instead. So, it is not about REST, but just about your business model. If you would virtually always load the photo in a second step after loading the driver's licence, then it is beneficial to avoid the additional GET and just transfer the photo in the same resource, as it is just a part of that resource and not a referenced "other thing".

Regards
Markus

> I imagine such a request should go to http://example.com/facedetection,
> which is not really a resource. (Problem? Not a problem?)
> Or unless we see the algorithms themselves as resources, and go for
> http://example.com/algorithms/facedetection. No?
>
> I like the idea of 202, as some algorithms do indeed take some time.
>
> Ruben
>
> PS Why do you consider this not "REST-y" but basic HTTP stuff? I
> thought REST was about using basic HTTP to do things :)
>
> On 12 Apr 2011, at 22:19, mike amundsen wrote:
>
> > I would use a representation body that supports a list of one or more
> > files in either base64-encoded form or a fully-qualified link to the file
> > (HTML FORMS does this nicely).
> >
> > If you support both styles in a single state transfer then even
> > clients that do not support base64-encoding (or cannot access local
> > disk resources due to rights restrictions) will be able to supply the
> > fully-qualified link.
> > Then write a server that accepts the representation, processes the
> > files (saves the base64 or navigates to the URI and saves that data),
> > handles the recognition tasks and generates one or more new resources
> > from the uploaded data.
> >
> > If the processing takes some time, the server can return 202 w/ a link
> > the client can use to check on the progress of the server's work. If
> > the processing is relatively quick, the server can return 201 with a
> > location that points to the resulting resource created by the server
> > (If more than one resource is created, I'd also create a single
> > "top-level" resource that represents a set of links to all the other
> > resources that were created by the client request).
> >
> > BTW - There is nothing "REST-y" here; just basic HTTP stuff.
> >
> > mca
> > http://amundsen.com/blog/
> > http://twitter.com@mamund
> > http://mamund.com/foaf.rdf#me
> >
> > #RESTFest 2010
> > http://rest-fest.googlecode.com
> >
> > On Tue, Apr 12, 2011 at 15:43, Markus KARG <markus@...> wrote:
> >> Why not inlining the image in a base64 encoded way? (didn't read the whole thread, maybe I missed the justification)
> >>
> >>> -----Original Message-----
> >>> From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Marek Potociar
> >>> Sent: Dienstag, 12. April 2011 18:08
> >>> To: ruben.verborgh
> >>> Cc: rest-discuss@yahoogroups.com
> >>> Subject: Re: [rest-discuss] opinions needed - multimedia algorithms the REST way
> >>>
> >>> On 04/08/2011 01:44 PM, ruben.verborgh wrote:
> >>>> We could first POST the image to http://other.org/images and then
> >>>> access the face detection service by GETting
> >>>> http://other.org/images/34/faces.
> >>>> However, this involves two calls to other.org.
> >>>
> >>> This is a RESTful way to do it, assuming the post returns 201 with a
> >>> link to the .../34/faces (IMHO).
> >>> Marek
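Markus's inline-versus-reference distinction above can be sketched in a few lines. All names, the placeholder image data, and the URI below are invented for illustration:

```python
# Sketch of the two representation styles: photo inlined as base64
# (a "real" child that travels with its parent) vs. photo linked by URI
# (a standalone resource, at the cost of an extra GET).
import base64
import json

photo_bytes = b"\x89PNG..."  # placeholder image data

# Inline: the photo is embedded in the licence representation itself.
inline_doc = json.dumps({
    "licence": "DL-1234",
    "photo": base64.b64encode(photo_bytes).decode("ascii"),
})

# Reference: the photo lives standalone and is merely linked.
linked_doc = json.dumps({
    "licence": "DL-1234",
    "photo": "http://example.com/photos/34",  # hypothetical URI
})
```

The choice between the two is the business-model question Markus describes: embed when the client will virtually always need the photo with the licence, link when photos are shared or swapped between containers.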
On Apr 13, 2011, at 3:19 AM, Mike Kelly wrote: > That doesn't really clarify, for me, the value of that (over?) specification: > > "A successful PUT of a given > representation would suggest that a subsequent GET on that same target > resource will result in an equivalent representation being returned in > a 200 (OK) response." > > .. starts to hint at a reason, but then immediately invalidates itself with: It is the definition of the method PUT. If you don't like it, choose a different method. > "However, there is no guarantee that such a state > change will be observable" > > What is the point in making something that's non-observable by > definition visible in the system? What is the point in building network-based systems, where such guarantees are impossible? ....Roy
Amen. On Apr 13, 2011, at 12:43 PM, Roy T. Fielding wrote: > On Apr 13, 2011, at 3:19 AM, Mike Kelly wrote: > >> That doesn't really clarify, for me, the value of that (over?) specification: >> >> "A successful PUT of a given >> representation would suggest that a subsequent GET on that same target >> resource will result in an equivalent representation being returned in >> a 200 (OK) response." >> >> .. starts to hint at a reason, but then immediately invalidates itself with: > > It is the definition of the method PUT. If you don't like it, > choose a different method. > >> "However, there is no guarantee that such a state >> change will be observable" >> >> What is the point in making something that's non-observable by >> definition visible in the system? > > What is the point in building network-based systems, where > such guarantees are impossible? > > ....Roy > > > > ------------------------------------ > > Yahoo! Groups Links > > >
On Wed, Apr 13, 2011 at 8:43 PM, Roy T. Fielding <fielding@...> wrote: > On Apr 13, 2011, at 3:19 AM, Mike Kelly wrote: > >> That doesn't really clarify, for me, the value of that (over?) specification: >> >> "A successful PUT of a given >> representation would suggest that a subsequent GET on that same target >> resource will result in an equivalent representation being returned in >> a 200 (OK) response." >> >> .. starts to hint at a reason, but then immediately invalidates itself with: > > It is the definition of the method PUT. If you don't like it, > choose a different method. > Which non-safe idempotent method is that? >> "However, there is no guarantee that such a state >> change will be observable" >> >> What is the point in making something that's non-observable by >> definition visible in the system? > > What is the point in building network-based systems, where > such guarantees are impossible? > Not a lot, but let's be clear; we're discussing one specific guarantee relating to the 'fullness' of a PUT representation - a property that's apparently impossible to observe on the network anyway, by its very definition. At least that's what I took by "no guarantee". So; if the fullness of a PUT representation is not observable, then what was the benefit of creating that restrictive definition? What infrastructure on the web is actually taking advantage of, or relying on, the fullness of PUT representations? Cheers, Mike
Mike Kelly wrote:
> On Wed, Apr 13, 2011 at 8:43 PM, Roy T. Fielding <fielding@...> wrote:
>> On Apr 13, 2011, at 3:19 AM, Mike Kelly wrote:
>>
>>> That doesn't really clarify, for me, the value of that (over?) specification:
>>>
>>> "A successful PUT of a given
>>> representation would suggest that a subsequent GET on that same target
>>> resource will result in an equivalent representation being returned in
>>> a 200 (OK) response."
>>>
>>> .. starts to hint at a reason, but then immediately invalidates itself with:
>>
>> It is the definition of the method PUT. If you don't like it,
>> choose a different method.
>
> Which non-safe idempotent method is that?
>
>>> "However, there is no guarantee that such a state
>>> change will be observable"
>>>
>>> What is the point in making something that's non-observable by
>>> definition visible in the system?
>>
>> What is the point in building network-based systems, where
>> such guarantees are impossible?
>
> Not a lot, but let's be clear; we're discussing one specific guarantee
> relating to the 'fullness' of a PUT representation - a property that's
> apparently impossible to observe on the network anyway, by its very
> definition. At least that's what I took by "no guarantee".
>
> So; if the fullness of a PUT representation is not observable, then
> what was the benefit of creating that restrictive definition? What
> infrastructure on the web is actually taking advantage of, or relying
> on, the fullness of PUT representations?

"no guarantee that such a state change will be observable" is *very* different to "impossible to observe".

The specification characterizes each method, requires the bare minimum to be an effective transfer protocol, and tries not to limit usage. I personally find it quite clear.

Can you think of a good reason why the specification would have to say "MUST guarantee that a state change is observable"? Is it to fit in with the notion that a resource has a state indicated by its representations, a detail which would of course be hidden by the uniform interface? Would every now-valid usage and implementation of HTTP still be valid if that were a MUST?
I don't follow why you conclude "no".

"The PUT method is used to request that the state of the target resource be created or replaced with the state defined by the representation enclosed in the request message payload."

I don't think "replaced with" adequately contemplates resources with multiple representations. More generally, if the resource state space is bigger than the representation state space, this doesn't speak to the issue. If the intention is to specifically forbid PUT on such resources, I suggest making this explicit. If not, I don't see how we can ever expect a single representation to fully replace the resource state. The server should be free to gap-fill this state, and the most obvious thing to do is to leave it alone. Indeed, the next sentence seems to imply you only require resetting resource state expressible in the representation.

"A successful PUT of a given representation would suggest that a subsequent GET on that same target resource will result in an equivalent representation being returned in a 200 (OK) response."

I gave an example of a resource that produced two representations, one in pure text, another as SVG with a drawing and a caption matching the text. If I use PUT of the text to change the caption while remembering the drawing, I'm doing a partial update via PUT that complies with the letter and spirit of this sentence, am I not?

Also, who defines "equivalent" here? If I as a media type author or server owner get to define this, I can milk interesting equivalence relations quite a long way.

--- In rest-discuss@yahoogroups.com, "Roy T. Fielding" <fielding@...> wrote:
>
> No.
>
> http://trac.tools.ietf.org/wg/httpbis/trac/changeset/1158#file1
>
> ....Roy
--- In rest-discuss@yahoogroups.com, Mike Kelly <mike@...> wrote:
> So; if the fullness of a PUT representation is not observable, then
> what was the benefit of creating that restrictive definition? What
> infrastructure on the web is actually taking advantage of, or relying
> on, the fullness of PUT representations?

I hope someone will answer this question.

If not PUT, then there is no idempotent method that allows partial updates. This is a clear gap. Given that some implementers already interpret PUT to allow partial updates, if there is value in a "full state overwrite" method, it seems better to add it under a different name and let PUT allow partial updates to validate the breadth of interpretations implemented in practice.

This could be done by adding a RESET_TO method that is required to be memoryless. A memoryless operation is one such that any sequence of operations ending in it results in the same resource state. Memoryless operations are idempotent, but this is a stronger condition.
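Bryan's memoryless-versus-idempotent distinction can be made concrete with a small sketch. RESET_TO is his hypothetical method; the functions below model state transitions, not real HTTP:

```python
# Memoryless operations are a strict subset of idempotent ones.

def reset_to(state, new_state):
    """Memoryless: the result ignores prior state entirely."""
    return dict(new_state)

def set_title(state, title):
    """Idempotent but not memoryless: other fields survive."""
    s = dict(state)
    s["title"] = title
    return s

a = {"title": "old", "body": "text"}
b = {"title": "old", "body": "other"}

# Memoryless: same outcome from any starting state.
assert reset_to(a, {"title": "new"}) == reset_to(b, {"title": "new"})
# Idempotent: repeating the operation changes nothing further...
assert set_title(set_title(a, "new"), "new") == set_title(a, "new")
# ...but not memoryless: different starting states give different results.
assert set_title(a, "new") != set_title(b, "new")
```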
On Thu, Apr 14, 2011 at 12:09 AM, bryan_w_taylor <bryan_w_taylor@...>wrote: > Given that some implementers already interpret PUT to allow partial > updates, if there is value in a "full state overwrite" method, it seems > better to add it under a different name and let PUT allow partial updates to > validate the breadth of interpretations implemented in practice. > Attempting to redefine previously published semantics is so 1984. Craig
On Apr 14, 2011, at 10:56 AM, Craig McClanahan wrote: > > > Attempting to redefine previously published semantics is so 1984. :-) Why not raise these suggestions in the HTTPbis WG mailing list in the first place? Jan > > Craig > > > >
On Apr 14, 2011, at 9:09 AM, bryan_w_taylor wrote:

> If not PUT, then there is no idempotent method that allows partial updates. This is a clear gap.

No, it is not a gap, it is inherent in the nature of the problem. A partial update can simply never be idempotent because the meaning of the request is a function of the state of the resource. If not, we would not be talking about a partial update but a full update of a sub-resource (aka setting a property). It would simply be better to make that property a resource in the first place and then use PUT.

IOW, instead of using PUT wrongly for

PUT /doc/1

"New Title"

Use

PUT /doc/1/title

"New Title"

See http://www.xent.com/pipermail/fork/2001-September/004712.html for an extremely elegant way to deal with this problem space. Doh, almost 10 years old and we are still talking about this stuff....

Jan
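A minimal sketch of the sub-resource approach Jan describes, assuming a hypothetical in-memory document store (the store, IDs, and field names are invented for the example):

```python
# Expose the property as its own resource: PUT /doc/{id}/title is a full
# replacement of the title resource, not a partial update of the document.

docs = {"1": {"title": "Old Title", "body": "Lorem ipsum"}}

def put_title(doc_id, new_title):
    """Handle PUT /doc/{id}/title."""
    docs[doc_id]["title"] = new_title

put_title("1", "New Title")
put_title("1", "New Title")  # idempotent: repeating it changes nothing further
```

Because the request fully replaces the state of the resource it is addressed to (the title), it keeps PUT semantics intact while still touching only one part of the parent document's state.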
On Thu, Apr 14, 2011 at 12:40 AM, Nathan <nathan@...> wrote:
> Mike Kelly wrote:
>>
>> Not a lot, but let's be clear; we're discussing one specific guarantee
>> relating to the 'fullness' of a PUT representation - a property that's
>> apparently impossible to observe on the network anyway, by its very
>> definition. At least that's what I took by "no guarantee".
>>
>> So; if the fullness of a PUT representation is not observable, then
>> what was the benefit of creating that restrictive definition? What
>> infrastructure on the web is actually taking advantage of, or relying
>> on, the fullness of PUT representations?
>
> "no guarantee that such a state change will be observable" is *very*
> different to "impossible to observe".

Theoretically. Please could you elaborate on how that 'clear' difference would be apparent on the web had the definition always been "impossible to observe".

> The specification characterizes each method, requires the bare minimum to be
> an effective transfer protocol, and tries not to limit usage. I personally
> find it quite clear.

Well, the contention here is that it is *not* actually the bare minimum, because it over-specifies PUT in a way which is not useful in practice, and (on paper) prevents something that is useful; i.e. partial idempotent updates.

> Can you think of a good reason why the specification would have to say "MUST
> guarantee that a state change is observable"?

No. The spec is clear on why the state change cannot be observable, and I agree with it. That is exactly why nothing would be lost by changing the definition to allow partial representations, because the ambiguity is already present in the interaction.

Cheers,
Mike
On Thu, Apr 14, 2011 at 10:15 AM, Jan Algermissen
<algermissen1971@...> wrote:
>
> IOW, instead of using PUT wrongly for
>
> PUT /doc/1
>
> "New Title"
>
>
> USe
>
> PUT /doc/1/title
>
> "New Title"
>
>
2 things:
1. That's subjective.. your solution compromises visibility of the
interaction with the /doc/1 resource, since its state is now changed
invisibly via PUT /doc/1/title. e.g. you just made cache invalidation
more difficult.
2. Why is it non-idempotent for the client to PUT { title: "new title"
} to /doc/1 ? That request will always apply the same state
transition, even if the resultant state (which is not observable,
anyway) varies over time.
Cheers,
Mike
On 14 Apr, 2011,at 11:54 AM, Mike Kelly <mike@...> wrote:
On Thu, Apr 14, 2011 at 10:15 AM, Jan Algermissen
<algermissen1971@...> wrote:
>
> IOW, instead of using PUT wrongly for
>
> PUT /doc/1
>
> "New Title"
>
>
> USe
>
> PUT /doc/1/title
>
> "New Title"
>
>
2 things:
1. That's subjective.. your solution compromises visibility of the
interaction with the /doc/1 resource, since its state is now changed
invisibly via PUT /doc/1/title. e.g. you just made cache invalidation
more difficult.
Hmm, no. Just use Content-Location in the response:
PUT /doc/1/title
"New Title"
200 Ok
Content-Location: /doc/1
<doc><title>New Title</title></doc>
(I keep being extremely fascinated by HTTP, I must say)
2. Why is it non-idempotent for the client to PUT { title: "new title"
} to /doc/1 ?
I am saying that this is not a partial update, but a request whose target resource is not the one in the URI.
Jan
That request will always apply the same state
transition, even if the resultant state (which is not observable,
anyway) varies over time.
Cheers,
Mike
------------------------------------
Yahoo! Groups Links
On Thu, Apr 14, 2011 at 10:15 AM, Jan Algermissen <algermissen1971@...> wrote: > > On Apr 14, 2011, at 10:56 AM, Craig McClanahan wrote: > >> >> >> Attempting to redefine previously published semantics is so 1984. > > :-) > > Why not raise these suggestions in the HTTPbis WG mailing list in the first place? > fwiw I find this is quite an interesting, and practical, exploration of stuff to consider when playing with the uniform interface and layered constraints. Cheers, Mike
On Thu, Apr 14, 2011 at 11:04 AM, algermissen1971 <algermissen1971@...> wrote: > > > On 14 Apr, 2011,at 11:54 AM, Mike Kelly <mike@...> wrote: > > On Thu, Apr 14, 2011 at 10:15 AM, Jan Algermissen > <algermissen1971@...> wrote: >> >> IOW, instead of using PUT wrongly for >> >> PUT /doc/1 >> >> "New Title" >> >> >> USe >> >> PUT /doc/1/title >> >> "New Title" >> >> > > 2 things: > > 1. That's subjective.. your solution compromises visibility of the > interaction with the /doc/1 resource, since it's state is now changed > invisibly via PUT /doc/1/title. e.g. you just made cache invalidation > more difficult. > > > Hmm, no. Just use Content-Locatioon in the response: > PUT /doc/1/title > "New Title" > 200 Ok > Content-Location: /doc/1 > <doc><title>New Title</title></doc> > > (I keep being extremely fascinated by HTTP, I must say) > Yes, not impossible - just more difficult. What do you do if there's some collection resource the document is embedded in that also needs invalidating? What does a client do in a situation where it wants to update several distinct 'parts' of the composite resource at once - multiple requests? Is HTTP well suited to the kind of fine-grain interactions that approach encourages? How grainy should you go? etc. etc. Might be simpler in a lot of cases to just do partial PUT. There's no reason your _hypermedia_ could not have the ability to enforce full PUT interactions where necessary in your application, but I still cannot see a solid reason to have HTTP attempting to enforce this across the whole of the web. Cheers, Mike
We've had the conversation on cache invalidation many times. There is no answer there. If you change a resource, and parts of that resource are in a collection somewhere, the http interface doesn't provide for invalidating the "associated" collections. That makes the title example on par with collections, same problem, no solution provided by http. The closest you'll get is the proposal by Mark (I think) to extend caching for the reverse proxy case, where you control said reverse proxy. There's content there http://www.mnot.net/blog/Caching/ That still won't solve any intermediary, so I think it's safe to say at this stage that this is a "problem" that has always existed, won't be resolved, and no amount of restructuring of PUT will solve it, so I'd argue that there is no argument for or against title resources that scales in complexity beyond the local optimization of using Content-Location. -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Mike Kelly Sent: 14 April 2011 11:42 To: algermissen1971 Cc: bryan_w_taylor; rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Re: PUT for partial update of an existing resource On Thu, Apr 14, 2011 at 11:04 AM, algermissen1971 <algermissen1971@...> wrote: > > > On 14 Apr, 2011,at 11:54 AM, Mike Kelly <mike@mykanjo.co.uk> wrote: > > On Thu, Apr 14, 2011 at 10:15 AM, Jan Algermissen > <algermissen1971@...> wrote: >> >> IOW, instead of using PUT wrongly for >> >> PUT /doc/1 >> >> "New Title" >> >> >> USe >> >> PUT /doc/1/title >> >> "New Title" >> >> > > 2 things: > > 1. That's subjective.. your solution compromises visibility of the > interaction with the /doc/1 resource, since it's state is now changed > invisibly via PUT /doc/1/title. e.g. you just made cache invalidation > more difficult. > > > Hmm, no. 
Just use Content-Locatioon in the response: > PUT /doc/1/title > "New Title" > 200 Ok > Content-Location: /doc/1 > <doc><title>New Title</title></doc> > > (I keep being extremely fascinated by HTTP, I must say) > Yes, not impossible - just more difficult. What do you do if there's some collection resource the document is embedded in that also needs invalidating? What does a client do in a situation where it wants to update several distinct 'parts' of the composite resource at once - multiple requests? Is HTTP well suited to the kind of fine-grain interactions that approach encourages? How grainy should you go? etc. etc. Might be simpler in a lot of cases to just do partial PUT. There's no reason your _hypermedia_ could not have the ability to enforce full PUT interactions where necessary in your application, but I still cannot see a solid reason to have HTTP attempting to enforce this across the whole of the web. Cheers, Mike ------------------------------------ Yahoo! Groups Links
On Thu, Apr 14, 2011 at 1:48 PM, Sebastien Lambla <seb@...> wrote: > We've had the conversation on cache invalidation many times. There is no answer there. If you change a resource, and parts of that resource are in a collection somewhere, the http interface doesn't provide for invalidating the "associated" collections. > > That makes the title example on par with collections, same problem, no solution provided by http. > > The closest you'll get is the proposal by Mark (I think) to extend caching for the reverse proxy case, where you control said reverse proxy. There's content there http://www.mnot.net/blog/Caching/ > > That still won't solve any intermediary, so I think it's safe to say at this stage that this is a "problem" that has always existed, won't be resolved, and no amount of restructuring of PUT will solve it, so I'd argue that there is no argument for or against title resources that scales in complexity beyond the local optimization of using Content-Location. > I know that, I worked on a solution to this problem in parallel to Mark's own work on LCI. Actually, the HTTP interface does provide a very limited way of dealing with that invalidation problem - as Jan pointed out you can use CL and Location headers.. the point here being that it is _limited_ and therefore the more convoluted you make your 'dependency graph' with this super-composite approach, the less likely you are going to be able to deal with it in a standard/visible way, and the more you will have to lean on extra mechanisms such as the one Mark and I came up with - at a cost to visibility. Cheers, Mike
--- In rest-discuss@yahoogroups.com, Craig McClanahan <craigmcc@...> wrote: > Attempting to redefine previously published semantics is so 1984. Tell that to Roy. He's the one proposing the change. I don't think there is anything wrong with what he is doing. If an existing definition is ambiguous and there are multiple ways it has been interpreted that are different, I think it's completely reasonable to clarify and adopt new language in the definition that is consistent with a pre-existing interpretation.
On 14.04.2011 09:09, bryan_w_taylor wrote: > > > --- In rest-discuss@yahoogroups.com > <mailto:rest-discuss%40yahoogroups.com>, Mike Kelly <mike@...> wrote: > > > So; if the fullness of a PUT representation is not observable, then > > what was the benefit of creating that restrictive definition? What > > infrastructure on the web is actually taking advantage of, or relying > > on, the fullness of PUT representations? > > I hope someone will answer this question. > > If not PUT, then there is no idempotent method that allows partial > updates. This is a clear gap. Given that some implementers already > ... You can make PATCH idempotent by adding "If-Match". Best regards, Julian
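Julian's "add If-Match" suggestion can be made concrete with a small sketch. The Resource class below is a toy server model invented for this example (no real HTTP library is involved): an unconditional PATCH replays destructively, while the same PATCH carrying the original ETag fails with 412 on retry instead of double-applying.

```python
# Sketch of why If-Match makes a PATCH safely repeatable.
# Resource, etag scheme, and the append operation are all illustrative.

import hashlib

class Resource:
    def __init__(self, state):
        self.state = state

    @property
    def etag(self):
        # any deterministic validator over current state will do here
        return hashlib.sha1(repr(self.state).encode()).hexdigest()

    def patch(self, op, if_match=None):
        if if_match is not None and if_match != self.etag:
            return 412          # Precondition Failed: state moved on
        op(self.state)          # apply the partial update
        return 200

def append_tag(state):
    state["tags"].append("rest")

# Unconditional PATCH: replaying it changes state again -> not idempotent.
doc = Resource({"tags": []})
doc.patch(append_tag)
doc.patch(append_tag)
# doc.state["tags"] is now ["rest", "rest"]

# Conditional PATCH: the retry is rejected instead of double-applied.
doc2 = Resource({"tags": []})
tag = doc2.etag
assert doc2.patch(append_tag, if_match=tag) == 200
assert doc2.patch(append_tag, if_match=tag) == 412
# doc2.state["tags"] is still ["rest"]
```

Strictly speaking this makes the request *safely repeatable* rather than idempotent in the mathematical sense, but the observable effect for the client is the same: retrying cannot apply the change twice.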
On 13.04.2011 12:02, Mike Kelly wrote: > On Wed, Apr 13, 2011 at 7:19 AM, algermissen1971 > <algermissen1971@... <mailto:algermissen1971%40mac.com>> wrote: > > > > > > On 13 Apr, 2011,at 06:53 AM, Bryan Taylor <bryan_w_taylor@... > <mailto:bryan_w_taylor%40yahoo.com>> wrote: > > > > > > This seems like a reasonable approach once you accept that you want > to allow > > PATCH. In my development organization, we've defined our uniform > interface > > to be > > the HTTP methods of RFC 2616, > > > > > > Out of curiosity: why? > > > > Keeping down barriers to adoption for clients? Out of curiosity: which clients? > > so PATCH isn't an option for me. > > > > > > Then use POST. > > > > Both of those are options (PATCH or POST) are non-idempotent, and > therefore not equivalent. Add If-Match. > There's very little infrastructure that relies on this "full PUT" > requirement. In contrast, there are real world applications in the > wild that implement partial PUT, and they seem to work ok. > > What would be the issue(s) to respecify PUT as simply non-safe, > idempotent requests that aren't DELETE? Not sure I understand. Either you care about what the spec says, and then you shouldn't use PUT for partial updates (yes, according to 2616). Or you don't, in which case you can do whatever will work in practice, which depends on clients, servers, and intermediaries. If you control them, or they work for you the way you expect them to do, great. Best regards, Julian
On Thu, Apr 14, 2011 at 2:53 PM, Julian Reschke <julian.reschke@...> wrote: > On 13.04.2011 12:02, Mike Kelly wrote: >> >> On Wed, Apr 13, 2011 at 7:19 AM, algermissen1971 >> <algermissen1971@... <mailto:algermissen1971%40mac.com>> wrote: >> > >> > >> > On 13 Apr, 2011,at 06:53 AM, Bryan Taylor <bryan_w_taylor@... >> <mailto:bryan_w_taylor%40yahoo.com>> wrote: >> > >> > >> > This seems like a reasonable approach once you accept that you want >> to allow >> > PATCH. In my development organization, we've defined our uniform >> interface >> > to be >> > the HTTP methods of RFC 2616, >> > >> > >> > Out of curiosity: why? >> > >> >> Keeping down barriers to adoption for clients? > > Out of curiosity: which clients? > >> > so PATCH isn't an option for me. >> > >> > >> > Then use POST. >> > >> >> Both of those are options (PATCH or POST) are non-idempotent, and >> therefore not equivalent. > > Add If-Match. > >> There's very little infrastructure that relies on this "full PUT" >> requirement. In contrast, there are real world applications in the >> wild that implement partial PUT, and they seem to work ok. >> >> What would be the issue(s) to respecify PUT as simply non-safe, >> idempotent requests that aren't DELETE? > > Not sure I understand. > > Either you care about what the spec says, and then you shouldn't use PUT for > partial updates (yes, according to 2616). > > Or you don't, in which case you can do whatever will work in practice, which > depends on clients, servers, and intermediaries. If you control them, or > they work for you the way you expect them to do, great. > I'm looking for a reason (besides 'that's how it is specified') as to _why_ all PUT representations must be complete and cannot be partial. At the moment it's a bit like we're in a nice park but there's some strange guy who's drawn a line down the center and stood in the middle with a sign that says "You may only use this half of the park -->". 
So you go up to him and enquire as to why you are only allowed to use half of a perfectly good park. He responds, saying "Because this [pointing at words] is written on the sign". You follow up "Ok.. why is that written on the sign?", and he responds "Because you may only use half of the park." - and round and round you go... THE END Cheers, Mike
> > If not PUT, then there is no idempotent method that allows partial > updates. > Because the semantics of a partial update are not idempotent. PATCH has partial-update semantics; if you wish to constrain its operation to be idempotent, then use If-Match, as Julian points out -- don't overload PUT and expect anyone to deduce "partial update" by looking at the method name, because you've broken the self-descriptiveness constraint. -Eric
Mike Kelly wrote: > > ...when playing with the uniform interface... > Not what this is; as Craig points out, what's being played with here is semantics, in an effort to lend REST-cred where it isn't warranted -- that overloading PUT to also mean partial update violates self-descriptiveness, is a horse that's been beaten to death here. -Eric
Mike Kelly wrote: > > Might be simpler in a lot of cases to just do partial PUT. There's no > reason your _hypermedia_ could not have the ability to enforce full > PUT interactions where necessary in your application... > But there's every reason. A Uniform Interface means that method semantics don't vary by media type. A media type may restrict PUT to always have the semantics of either creation or replacement, but it is not within the scope of media type definitions to change the generic semantics of PUT to encompass partial update, when those semantics are not assigned to PUT by HTTP. Varying method semantics by media type is never self-descriptive. -Eric
Mike Kelly wrote: > > I'm looking for a reason (besides 'that's how it is specified') as to > _why_ all PUT representations must be complete and cannot be partial. > But that *is* the reason. HTTP assigns different semantics to different methods; this is the crux of self-descriptive messaging. So the semantics of full replacement need to be assigned to one method (PUT), while the semantics of partial replacement need to be assigned to some other method (PATCH). POST isn't used for retrieval in REST, because retrieval semantics are limited to GET. So of course you can't go assigning whatever semantics you desire to PUT, because doing so results in a protocol that doesn't meet the self-descriptive messaging constraint. -Eric
> > Sorry, humour me; are you implying that overloading PUT to also mean > partial update does or does not violate self-descriptiveness? > Does. -Eric
"bryan_w_taylor" wrote: > > Tell that to Roy. He's the one proposing the change. > Roy is clarifying PUT, not proposing its semantics be changed to encompass partial update. Sender intent, folks! Sure, servers receiving a PUT may only honor the request partially, but the sender intent of PUT is never partial update, only creation or replacement, as defined by HTTP. The only way to discern from the message headers that the sender intent is partial update, is for the method to be PATCH, which is the only method that's self-descriptive of that sender intent. -Eric
Ok. What does the self-descriptive fullness of a PUT representation actually bring to the table? i.e. what intermediary mechanisms rely _specifically_ on PUT requests being complete representations? Cheers, Mike On Thu, Apr 14, 2011 at 4:16 PM, Eric J. Bowman <eric@...> wrote: >> >> Sorry, humour me; are you implying that overloading PUT to also mean >> partial update does or does not violate self-descriptiveness? >> > > Does. > > -Eric >
Mike Kelly wrote: > > What does the self-descriptive fullness of a PUT representation > actually bring to the table? > Self-descriptiveness is at the heart of REST, so you're asking what REST brings to the table. > > i.e. what intermediary mechanisms rely _specifically_ on PUT requests > being complete representations? > The point is that you can't know. All you can do is assign different semantics to different methods, i.e. create a network-based API such that anyone may optimize the performance of their system by optimizing within the model provided by the uniform interface. -Eric
"bryan_w_taylor" wrote: > > Given that some implementers already interpret PUT to allow partial > updates, if there is value in a "full state overwrite" method, it > seems better to add it under a different name and let PUT allow > partial updates to validate the breadth of interpretations > implemented in practice. > By that logic, GET should have been redefined to also sometimes mean DELETE, and a new really-mean-GET-this-time method created. Before Google Web Accelerator came along and deleted everyone's blogs, forums and wikis, using GET to DELETE was quite common. The REST solution is self-descriptive messaging; in HTTP this means PUT has create/replace semantics and PATCH has partial-update semantics (or, if you can't/won't use PATCH for some reason, use POST), never using GET for unsafe interactions, etc. -Eric
On Thu, Apr 14, 2011 at 5:01 PM, Eric J. Bowman <eric@...> wrote: > Mike Kelly wrote: >> >> What does the self-descriptive fullness of a PUT representation >> actually bring to the table? >> > > Self-descriptiveness is at the heart of REST, so you're asking what > REST brings to the table. No >> >> i.e. what intermediary mechanisms rely _specifically_ on PUT requests >> being complete representations? >> > > The point is that you can't know. All you can do is assign different > semantics to different methods, i.e. create a network-based API such > that anyone may optimize the performance of their system by optimizing > within the model provided by the uniform interface. > You've lost me there... You're saying that the distinction of PUT requests as a complete representation cannot be changed in the name of self-descriptiveness, and yet you can't even provide one example of how that could be put to use? That doesn't sound right, somehow. Cheers, Mike
Mike Kelly wrote: > > You're saying that the distinction of PUT requests as a complete > representation cannot be changed in the name of self-descriptiveness, > and yet you can't even provide one example of how that could be put to > use? > I could, but I don't have to, because the alternative is *not* being able to make use of that distinction. See WebSockets for an example of a protocol which is not self-descriptive of sender intent, and cannot be scaled/optimized around clear semantic distinction between request methods -- that's the opposite of REST's goals as expressed in HTTP. -Eric
On Thu, Apr 14, 2011 at 5:29 PM, Eric J. Bowman <eric@...> wrote: > Mike Kelly wrote: >> >> You're saying that the distinction of PUT requests as a complete >> representation cannot be changed in the name of self-descriptiveness, >> and yet you can't even provide one example of how that could be put to >> use? >> > > I could, but I don't have to, because the alternative is *not* being > able to make use of that distinction. That distinction is of the completeness of a PUT request. The alternative is to remove that requirement and instead have the simple semantic of "non-safe and idempotent", which is still self-descriptive in its own way. So the question is how much descriptiveness has been lost, and the answer implied by the lack of examples is "basically nothing". If the semantics of PUT happened to also include "client smells like flowers but eats baked beans; the request is dubious.", and we were discussing the removal of that semantic, would you seriously contest it on the basis that it 'removes self-descriptiveness'? Cheers, Mike
--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: > > Mike Kelly wrote: > > > > What does the self-descriptive fullness of a PUT representation > > actually bring to the table? > > > > Self-descriptiveness is at the heart of REST, so you're asking what > REST brings to the table. REST can work just fine with a different uniform interface that allows an idempotent partial update operation. This is a question about what HTTP is, not REST. There are two competing interpretations of what HTTP PUT is. Whichever wins out, either would be self-descriptive.
Mike Kelly wrote: > > That distinction is of the completeness of a PUT request... still > self-descriptive in its own way... > Now you're playing with semantics, not the interface. Whether an origin server treats the PUT as "complete" or not, is an implementation detail hidden behind the uniform interface. If the sender intent is a partial update, then depending on the origin server interpreting PUT in that fashion makes your API library-based, not network-based; the shared understanding between your client and your server is specific to your system, and is not the understanding shared by the network-at-large... IOW, the interface is not uniform. The alternative is to clearly label the sender intent of partial update by using the PATCH method, *that* is self-descriptive messaging. If you don't want to do REST, fine, just don't call it REST or try to convince me that nobody will ever need to distinguish replacement from partial-update in HTTP systems, so overloading PUT is just-as-good-as REST because *you* don't need to distinguish between replacement and partial-update in *your* systems. -Eric
On Apr 14, 2011, at 7:03 PM, bryan_w_taylor wrote: > > REST can work just fine with a different uniform interface that allows an idempotent partial update operation. This is a question about what HTTP is, not REST. There are two competing interpretations of what HTTP PUT is. Whichever wins out, either would be self-descriptive. Let's try it this way: What would the definition of a method be that does partial update and is idempotent? Keep in mind that you cannot make the definition depend on a media type. Hence you can only specify that the server has to apply the entity body as a partial update *according to the semantics of the media type used*. How can you guarantee that this is always idempotent? (IOW, you cannot be more specific for partial updates than PATCH already is.) Jan
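Jan's point can be made concrete with two hypothetical patch formats, both invented for illustration: one with set-to-value semantics (idempotent) and one with add-to-value semantics (not). The method definition cannot know which kind a given media type defines, so it cannot promise idempotency generically:

```python
# Two invented "patch document" semantics, showing that idempotency of a
# partial update is a property of the media type, not of the method.

def apply_replace(state, patch):
    # set-to-value semantics: applying twice == applying once
    new = dict(state)
    new.update(patch)
    return new

def apply_increment(state, patch):
    # add-to-value semantics: applying twice != applying once
    new = dict(state)
    for key, delta in patch.items():
        new[key] = new.get(key, 0) + delta
    return new

s = {"count": 1}

once = apply_replace(s, {"count": 5})
twice = apply_replace(once, {"count": 5})
assert once == twice == {"count": 5}        # idempotent format

once = apply_increment(s, {"count": 5})
twice = apply_increment(once, {"count": 5})
assert once == {"count": 6}
assert twice == {"count": 11}               # not idempotent
```

Both functions are legitimate implementations of "apply the entity body as a partial update according to the semantics of the media type used", which is exactly why PATCH cannot be defined as idempotent.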
On Thu, Apr 14, 2011 at 1:24 PM, Jan Algermissen <algermissen1971@...> wrote: > > Keep in mind that you cannot make the definition depend on a media type. > Hence you can only specify that the server has to apply the entity body as a > partial update *according to the semantics of the media type used*. How can > you guarantee that this is always idempotent? > > So would it be fair to say that sending a "full replacement" is more like guidance to ensure that the request is idempotent, rather than an independent constraint of its own? If I designed a media type in such a way that I could send partial updates that were guaranteed to be idempotent (note: I have no idea if this is possible) then I could ignore the "guidance"? Darrel
I believe this community is capable of holding itself to a higher standard than "because the spec says so", or "that's the way we have always done it", or even "we might need it one day". It is good that the spec has been expanded to explicitly state that partial updates are not allowed with PUT. However, I think it would be valuable for the community to understand why that constraint exists. I find it particularly difficult to see how there can be benefits derived from the knowledge that a complete replacement is being performed, in light of the fact that, in reality, it rarely happens. In most cases there are some elements of the resource that are controlled completely by the origin server. It has been stated on this list before that those elements that are not under the control of the user-agent do not need to be included to satisfy the "complete replacement" semantics. However, I would really like to understand the benefits that we can gain from compliance with the "full replacement" semantics. What I don't want is to be faced with explaining how PUT works to someone who is new to REST and have to answer "I don't know why, but the spec says we must do it this way".
--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: > Because the semantics of a partial update are not idempotent. That's false. A function f:X->X is idempotent if f(f(x)) = f(x). If X=AxB and f sets the 2nd element to b0, so f(a,b) = (a, b0), then f(f(a,b)) = f(a,b0) = (a, b0) = f(a,b), so f is idempotent. And f is clearly a "partial update": it updates the second element and not the first.
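Bryan's function, made executable (a minimal sketch; b0 is an arbitrary constant):

```python
# f sets the second element of the pair to a constant b0, leaving the
# first alone: a partial update that is nonetheless idempotent.

B0 = "b0"

def f(pair):
    a, _b = pair
    return (a, B0)

x = ("a", "b")
assert f(x) == ("a", B0)
assert f(f(x)) == f(x)      # idempotent: f(f(x)) = f(x)
```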
On Apr 14, 2011, at 10:44 AM, Darrel Miller wrote: > > I believe this community is capable of holding itself to a higher standard than "because the spec says so", or "that's the way we have always done it", or even "we might need it one day". Actually it ought be because the spec says so. The point of protocols like HTTP is interoperability. We can change rules based on what we think is better or worse, but every time we do so, we are weakening interoperability. Subbu
I'm not suggesting that we ignore the spec. In fact I completely agree that because it is in the spec we should respect it. However, the question is why does the constraint exist? "because it is in the spec" is circular reasoning. Darrel
--- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: > By that logic, GET should have been redefined to also sometimes mean > DELETE, and a new really-mean-GET-this-time method created. Before > Google Web Accelerator came along and deleted everyone's blogs, forums > and wikis, using GET to DELETE was quite common. The REST solution is > self-descriptive messaging, in HTTP this means PUT has create/replace > semantics and PATCH as replacement semantics (or, if you can't/won't > use PATCH for some reason, use POST), never using GET for unsafe > interactions, etc. I really don't follow this. Nobody interprets GET to mean DELETE. I am saying you can read the spec for PUT and reasonably interpret the words and conclude that a partial update is allowed. If somebody is using GET for unsafe acts they are ignoring the spec.
"bryan_w_taylor" wrote: > > > By that logic, GET should have been redefined to also sometimes mean > > DELETE, and a new really-mean-GET-this-time method created. Before > > Google Web Accelerator came along and deleted everyone's blogs, > > forums and wikis, using GET to DELETE was quite common. The REST > > solution is self-descriptive messaging, in HTTP this means PUT has > > create/replace semantics and PATCH as replacement semantics (or, if > > you can't/won't use PATCH for some reason, use POST), never using > > GET for unsafe interactions, etc. > > I really don't follow this. Nobody interprets GET to mean DELETE. I > am saying you can read the spec for PUT and reasonably interpret the > words and conclude that a partial update is allowed. If somebody is > using GET for unsafe acts they are ignoring the spec. > Well, I've never followed the logic of optimizing PUT to save a few bytes -- I PUT atom:id's all the time, even though the server ignores them, because validating the payload using standard libraries requires atom:id. The main point of REST is to optimize the hell out of GET, because that's by far the bulk of request traffic. If adhering to REST costs me a few bytes on PUT for the sake of self-descriptiveness and visibility of my API overall, big deal. In terms of my PUT traffic tagged as application/atom+xml, it's exactly that, not an invalid subset. If I omit atom:id, then the correct media type would be application/xml, but my goals for self-descriptiveness and visibility are higher than that -- using a valid Atom payload on PUT self-descriptively screams at the world that my API is Atom Protocol (FWIW, which is serendipitous re-use). Overloading PUT doesn't help others to understand your code -- it actively works against it, because you're going against spec. Trust me, before GWA, many major CMS products were using GET to delete stuff regardless of what the spec said. There's always unseen danger in ignoring interoperability concerns (i.e. 
spec compliance) for short-term ease of implementation. It's like Rummy said, it's all about knowing your unknowns, vs. not knowing your unknowns. So, if there's some intermediary out there configured to validate payloads before allowing PUT/PATCH, I have no worries. If I assume I'll never have this problem and overload PUT, I wind up with a real mess on my hands if a client wants to implement exactly that security measure on their intranet firewall, and my work is found wanting for not being up to spec. Like so many were by overloading GET for deletion. > > I am saying you can read the spec for PUT and reasonably interpret the > words and conclude that a partial update is allowed. > Exactly. The server can discard atom:id, but messaging is self-descriptive of sender intent, not server processing. The sender PUT a valid Atom document to an interface which, on GET, returns that same Atom document (without allowing client-side control of atom:id). What the server does or doesn't do with atom:id is an implementation detail hidden behind the interface, and such details aren't exposed on the wire, so there's no reason atom:id must be omitted from PUT payloads if the server intends to ignore it. My messaging over the network exactly describes sender intent, which is to replace the content of the resource, with that content formatted as an Atom Entry document -- not to partially-update everything but the atom:id of some representation of the resource. I've discussed before how, if I'm only changing the categories of an atom:entry, PATCH is used with application/atomcat+xml to indicate that only the atom:category tags are overwritten. The messaging is self-descriptive, in that a payload of one media type is used to partially update a resource which responds to GET with another media type, reflecting the sender intent of partially updating the content of the resource. 
You can figure it out by driving the hypertext interface while monitoring the HTTP headers, like any good REST app. So a lot of pragmatic REST development involves following the HTTP spec, instead of making semantic arguments looking for loopholes in the wording. The purpose of PUT is not partial update; that's always been the job of PATCH, because HTTP (mostly) implements RESTful self-descriptive messaging by assigning different sender intent to different methods. The arguments I've put forth aren't "because the specs say so," I've been trying to explain why the specs are written the way they are -- because they're REST applied to protocol design. -Eric
--- In rest-discuss@yahoogroups.com, Subbu Allamaraju <subbu@...> wrote: > Actually it ought be because the spec says so. The point of protocols like HTTP is interoperability. We can change rules based on what we think is better or worse, but every time we do so, we are weakening interoperability. I actually agree that it should be the language of the spec that controls what happens. Not lore about it, but the actual spec. Many people are asserting that the spec outlaws partial updates. I'm reading the spec and think it allows a certain form of partial update. I'm not really arguing about what the spec should be, I'm arguing about what actually follows from it and disputing the conclusions others reach. Consider a resource defined by a multi-column row in a database, say it's Joe's record in a person table. Consider two subsets A and B of column values within Joe's record, say A includes the PK, his name, and his address, and B includes the PK, his name, and Joe's job information. At this company, media types for addresses and job descriptions already exist, and no one can prevent me from using these to represent Joe's data. This server chooses not to define any derived resources -- i.e. no refinements. Adding refinements is arguably good design, but it is certainly not required design. Like it or not, this server doesn't support it and you access all information about Joe using Joe's URI, http://example.com/Joe and the server uses content negotiation that accepts application/address+xml or application/job+xml as media types. The server implements PUT by updating or inserting a row as needed with the column values specified; unspecified columns get their default values. For existing rows, PUT modifies all columns and only those columns depicted in each media type (A for address, B for job). Doing this results in an operation that is idempotent, as repeated PUT of address or job data causes it to be set, and after it's set doing it again doesn't change any column value. 
Both operations are also what I call invertible, meaning that after a PUT, the resource state could be used to reconstruct an equivalent representation to the last PUT. I believe that the definition of PUT should be an idempotent, invertible write operation. This is similar to, but slightly different from, what Roy is getting at in his draft changes to PUT, with one big difference: he actually expects GET to invert PUT. I don't like that because I should be free to do PUT of application/address+xml and application/job+xml but to only support whole-row GET via application/person+xml. Note that access controls might motivate me to split up the write operations this way differently from the reads.
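The server Bryan describes can be roughed out in a few lines. The media type names are his; the column sets and helper are assumptions for this sketch. Each PUT overwrites all and only the columns its media type depicts, which is what makes it idempotent:

```python
# Illustrative model of Bryan's server: one row per person, two media
# types that each "depict" a subset of columns, and a PUT that overwrites
# exactly (and only) the columns its media type depicts.

COLUMNS = {
    "application/address+xml": {"name", "address"},
    "application/job+xml":     {"name", "job"},
}

DEFAULTS = {"name": None, "address": None, "job": None}

table = {}  # pk -> row dict

def put(pk, media_type, values):
    depicted = COLUMNS[media_type]
    assert set(values) <= depicted          # payload may only carry depicted columns
    row = table.setdefault(pk, dict(DEFAULTS))
    for col in depicted:                    # all, and only, the depicted columns
        row[col] = values.get(col)

put("joe", "application/address+xml", {"name": "Joe", "address": "1 Main St"})
put("joe", "application/job+xml",     {"name": "Joe", "job": "plumber"})
# Repeating a PUT changes nothing -> idempotent:
put("joe", "application/job+xml",     {"name": "Joe", "job": "plumber"})

assert table["joe"] == {"name": "Joe", "address": "1 Main St", "job": "plumber"}
```

The address PUT never touches the job column and vice versa, so each operation is both a partial update (of the row) and a full replacement (of the columns its media type depicts); that ambiguity is precisely what the thread is arguing over.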
On Fri, Apr 15, 2011 at 12:20 AM, bryan_w_taylor <bryan_w_taylor@...> wrote: > I really don't follow this. Nobody interprets GET to mean DELETE. I am saying you can read the spec for PUT and reasonably interpret the words and conclude that a partial update is allowed. If somebody is using GET for unsafe acts they are ignoring the spec. No, that's not true. It's *not* a violation of the spec for a server to take an action that falls outside what is described in the HTTP spec. That's because that's an implementation detail and has nothing to do with the *interface*. What *is* a violation is for the server (or any other component) to interpret that the client requested this behaviour. Said another way, it's perfectly reasonable for a server to perform a partial update when receiving a PUT request, but it's *not* ok for the client to send a PUT request with this same expectation. Mark.
Hi guys, was just wondering that. I personally think it's more HTTP-loving than RESTful; maybe I'm missing something but I don't see any hypermedia tenet in Couch. -- Nadalin Alessandro www.odino.org www.twitter.com/_odino_
On 19 Apr, 2011,at 12:06 PM, Alessandro Nadalin <alessandro.nadalin@gmail.com> wrote: > Hi guys, > > was just wondering that. > > I personally think it's more HTTP-loving than RESTful; maybe I'm > missing something but I don't see any hypermedia tenet in Couch. My take on it (but maybe outdated by now). http://algermissen.blogspot.com/2010/02/classifying-couchdb-api.html Jan > > > -- > Nadalin Alessandro > www.odino.org > www.twitter.com/_odino_
On Tue, Apr 19, 2011 at 12:35 PM, algermissen1971 <algermissen1971@...> wrote: > > > On 19 Apr, 2011,at 12:06 PM, Alessandro Nadalin > <alessandro.nadalin@...> wrote: > > Hi guys, > > was just wondering that. > > I personally think it's more HTTP-loving than RESTful; maybe I'm > missing something but I don't see any hypermedia tenet in Couch. > Hi Jan, > > My take on it (but maybe outdated by now). > > http://algermissen.blogspot.com/2010/02/classifying-couchdb-apihtml the correct link: http://algermissen.blogspot.com/2010/02/classifying-couchdb-apihtml You share some thoughts I personally have too: a DB probably doesn't need to be hypermedia-driven, but Couch shows some benefits of REST/HTTP-loving services ( but they probably should not call it RESTful :-P ) > > > Jan > > > > > > -- > Nadalin Alessandro > www.odino.org > www.twitter.com/_odino_ -- Nadalin Alessandro www.odino.org www.twitter.com/_odino_
Alessandro Nadalin wrote: > On Tue, Apr 19, 2011 at 12:35 PM, algermissen1971 >> http://algermissen.blogspot.com/2010/02/classifying-couchdb-apihtml > the correct link: > http://algermissen.blogspot.com/2010/02/classifying-couchdb-apihtml either a . is getting taken out, or this'll work: http://algermissen.blogspot.com/2010/02/classifying-couchdb-api.html
On Tue, Apr 19, 2011 at 6:35 AM, algermissen1971 <algermissen1971@...> wrote: > My take on it (but maybe outdated by now). > Ah, interesting. I've been using it recently and had pretty much the same thoughts, especially regarding the non-use of hypermedia (and, related, its lack of hierarchy). It reminded me of a twitter exchange I had with Jim Webber where I criticized Amazon S3 for not using hypermedia, and he disagreed, claiming, IIRC, that it didn't need it since it was targeted at the developer. I didn't respond to Jim, but never felt entirely comfortable with his conclusion. The thing is, the Web SUCKS... not in the bad sense of that word, but in the sense that it tends to *pull* stuff into it due to network effects, including stuff you didn't expect (even stuff you stick an HTTP interface in front of). Case in point; http://couchapp.org/ I haven't used the application model described there, but I expect that without CouchDB using hypermedia, writing apps with it is probably a lot trickier than it would otherwise be. P.S. I wonder if CouchDB would have used hypermedia if JSON natively supported URLs? Mark.
IMO, CouchDB is HTTP-ful, but not REST-ful. SQL is neither HTTP-ful nor REST-ful and that's fine, too. Different tools for different problems|environments|etc. I wonder what a hypermedia database would look/act like. Not just a database w/ links in responses, but one that _automatically_ produces the hypermedia affordances appropriate for each response. In fact, I wonder if this is desirable|possible. Right now my head is in a space where the hypermedia controls are the real "design work" tied to a particular set of use cases - not something that is native to the data being passed about. But that's just me (and where I am today). Not long ago I saw a Ted Nelson video[1] where he talks about ZigZag data[2]. It's interesting, but I don't yet think (from what I've seen there) that ZigZag is a hypermedia database. Still, a true hypermedia database is an intriguing idea. [1] http://www.youtube.com/watch?v=WEj9vqVvHPc [2] http://www.xanadu.com/zigzag/ mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Tue, Apr 19, 2011 at 09:49, Mark Baker <distobj@...> wrote: > On Tue, Apr 19, 2011 at 6:35 AM, algermissen1971 > <algermissen1971@...> wrote: >> My take on it (but maybe outdated by now). >> > > Ah, interesting. I've been using it recently and had pretty much the > same thoughts esp regarding the non-use of hypermedia (and related, > it's lack of hierarchy). It reminded me of a twitter exchange I had > with Jim Webber where I criticized Amazon S3 for not using hypermedia, > and he disagreed, claiming, IIRC, that it didn't need it since it was > targeted at the developer. > > I didn't respond to Jim, but never felt entirely comfortable with his > conclusion. The thing is, the Web SUCKS... not in the bad sense of > that word, but in the sense that it tends to *pull* stuff into it due > to network effects, including stuff you didn't expect (even stuff you > stick an HTTP interface in front of). 
> > Case in point; http://couchapp.org/ > > I haven't used the application model described there, but I expect > that without CouchDB using hypermedia, writing apps with it is > probably a lot trickier than it would otherwise be. > > P.S. I wonder if CouchDB would have used hypermedia if JSON natively > supported URLs? > > Mark. > > > ------------------------------------ > > Yahoo! Groups Links > > > >
mike amundsen wrote: > > I wonder what a hypermedia database would look/act like. Not just a > database w/ links in responses, but one that _automatically_ produces > the hypermedia affordances appropriate for each response. > I've done this using eXist XML DB, coding in XQuery. Or do you mean something more? -Eric
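For what it's worth, the "automatically produced affordances" idea can be sketched mechanically. The following is a hypothetical wrapper (all names, paths, and rel values are illustrative, not from any real database) that derives link affordances purely from where a record lives in the store, rather than from per-use-case design work:

```python
def with_affordances(collection, item_id, item):
    """Wrap a stored record with link affordances derived mechanically
    from the record's location in the store (rel names are hypothetical,
    'edit' borrowed from AtomPub's edit link as an example)."""
    base = "/%s/%s" % (collection, item_id)
    return {
        "data": item,
        "links": [
            {"rel": "self", "href": base},
            {"rel": "collection", "href": "/" + collection},
            {"rel": "edit", "href": base},
        ],
    }

doc = with_affordances("books", "42", {"title": "RESTful Web Services"})
```

The open question in the thread is whether affordances derived this mechanically are ever the *right* ones, or whether they always need the use-case-driven design work mike describes.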
I think there's a range of grey to consider here. Replace semantics yes, but it doesn't mean that the exact body will be used upon issuing a GET. A typical example would be PUTing an entity with an identifier contained in the body. The server is well within its rights to apply the changes but preserve the original identifier if the client cannot update it. Same can be said of hypermedia controls within the entity. Replacing the resource state, yes, but not "make this entity the new entity body at that URI"; there is a difference, albeit slight. -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Julian Reschke Sent: 14 April 2011 14:53 To: Mike Kelly Cc: algermissen1971; Bryan Taylor; mike amundsen; rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] PUT for partial update of an existing resource On 13.04.2011 12:02, Mike Kelly wrote: > On Wed, Apr 13, 2011 at 7:19 AM, algermissen1971 > <algermissen1971@... <mailto:algermissen1971%40mac.com>> wrote: > > > > > > On 13 Apr, 2011, at 06:53 AM, Bryan Taylor <bryan_w_taylor@... > <mailto:bryan_w_taylor%40yahoo.com>> wrote: > > > > > > This seems like a reasonable approach once you accept that you want > to allow PATCH. In my development organization, we've defined our > uniform interface to be the HTTP methods of RFC 2616, > > > > Out of curiosity: why? > > > > Keeping down barriers to adoption for clients? Out of curiosity: which clients? > > so PATCH isn't an option for me. > > > > > > Then use POST. > > > > Both of those options (PATCH or POST) are non-idempotent, and > therefore not equivalent. Add If-Match. > There's very little infrastructure that relies on this "full PUT" > requirement. In contrast, there are real world applications in the > wild that implement partial PUT, and they seem to work ok. > > What would be the issue(s) to respecify PUT as simply non-safe, > idempotent requests that aren't DELETE? Not sure I understand. 
Either you care about what the spec says, and then you shouldn't use PUT for partial updates (yes, according to 2616). Or you don't, in which case you can do whatever will work in practice, which depends on clients, servers, and intermediaries. If you control them, or they work for you the way you expect them to do, great. Best regards, Julian
On 20.04.2011 07:49, Sebastien Lambla wrote: > I think there's a range of grey to consider here. Replace semantics yes, but it doesn't mean that the exact body will be used upon issuing a GET. > > A typical example would be PUTing an entity with an identifier contained in the body. The server is quite in its right to apply the changes but preserve the original identifier if the client cannot update it. Same can be said of hypermedia controls within the entity. > > Replacing the resource state yes, but not "make this entity the new entity body at that URI", there is a difference, albeit slight. Yes. We've got many examples for that: atom:id, XML-based stores, WebDAV stores backed by calendaring/address servers...
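Sebastien's example above (PUT honoured as replacement, except for fields the client is not allowed to change) might look like this on the server. This is a minimal dict-backed sketch; the field names and the `SERVER_CONTROLLED` set are illustrative, not from the thread:

```python
# Fields the server owns; a client's PUT may not change them (illustrative).
SERVER_CONTROLLED = {"id"}

def apply_put(stored, entity):
    """Replace the stored representation with the enclosed entity,
    but carry server-controlled fields over from the old state."""
    new_state = dict(entity)
    for field in SERVER_CONTROLLED:
        if field in stored:
            new_state[field] = stored[field]
    return new_state

stored = {"id": "abc", "title": "Old title"}
result = apply_put(stored, {"id": "something-else", "title": "New title"})
```

The client's attempt to rewrite `id` is ignored while the rest of the representation is replaced, which is exactly the "grey" Sebastien describes: replace semantics, but not byte-for-byte echo on the next GET.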
On 20 Apr, 2011, at 08:02 AM, Julian Reschke <julian.reschke@...> wrote: > On 20.04.2011 07:49, Sebastien Lambla wrote: > > I think there's a range of grey to consider here. Replace semantics yes, but it doesn't mean that the exact body will be used upon issuing a GET
Given that HTTP PATCH[1] has been a released IETF standard for over a year now, and it specifically addresses the partial update use case, why are we still wasting both bandwidth and intellectual bits talking about using PUT for partial updates? Craig [1] http://tools.ietf.org/html/rfc5789 On Tue, Apr 19, 2011 at 11:16 PM, algermissen1971 <algermissen1971@...>wrote: > > > On 20 Apr, 2011,at 08:02 AM, Julian Reschke <julian.reschke@...> wrote: > > On 20.04.2011 07:49, Sebastien Lambla wrote: >> > I think there's a range of grey to consider here. Replace semantics yes, >> but it doesn't mean that the exact body will be used upon issuing a GET >> >
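Craig's distinction can be made concrete with a dict-backed sketch (illustrative, not from the thread): PUT replaces the stored representation wholesale, while PATCH (RFC 5789) conveys a partial modification. The merge-style patch used here is just one possible patch format; RFC 5789 deliberately leaves the format to the media type of the request.

```python
def handle_put(store, uri, entity):
    """Full replacement: the enclosed entity becomes the new state."""
    store[uri] = dict(entity)

def handle_patch(store, uri, patch):
    """Partial modification (RFC 5789); merge format is illustrative."""
    store[uri].update(patch)

store = {"/foo": {"a": 1, "b": 2}}
handle_patch(store, "/foo", {"b": 3})
after_patch = dict(store["/foo"])   # field "a" survives
handle_put(store, "/foo", {"b": 3})
after_put = dict(store["/foo"])     # field "a" is gone
```

The same request body produces different final states, which is the whole argument for giving the partial case its own method rather than overloading PUT.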
On 20 Apr, 2011, at 09:06 AM, Craig McClanahan <craigmcc@...> wrote: > > > Given that HTTP PATCH[1] has been a released IETF standard for over a year now, and it specifically addresses the partial update use case, why are we still wasting both bandwidth and intellectual bits talking about using PUT for partial updates? Because people were asking for idempotent partial updates (which IMHO are conceptually impossible). Jan > > Craig > > [1] http://tools.ietf.org/html/rfc5789 > > On Tue, Apr 19, 2011 at 11:16 PM, algermissen1971 <algermissen1971@...> wrote: > > > > > On 20 Apr, 2011, at 08:02 AM, Julian Reschke <julian.reschke@...> wrote: > > On 20.04.2011 07:49, Sebastien Lambla wrote: > > I think there's a range of grey to consider here. Replace semantics yes, but it doesn't mean that the exact body will be used upon issuing a GET > > > > >
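The idempotence property driving this sub-thread can be sketched with dict-backed handlers (names illustrative; this shows what idempotence means here, not a position on Jan's stronger claim): a full PUT can be repeated without changing the outcome, while an increment-style POST cannot.

```python
def put(store, uri, entity):
    """Idempotent: N identical requests leave the same state as one."""
    store[uri] = dict(entity)

def post_increment(store, uri):
    """Not idempotent: every repeat moves the state again."""
    store[uri]["count"] += 1

s = {"/counter": {"count": 0}}
put(s, "/counter", {"count": 5})
put(s, "/counter", {"count": 5})    # replay is harmless
post_increment(s, "/counter")
post_increment(s, "/counter")       # replay is not
```

This is why a lost or retried full PUT is safe to resend, whereas a retried non-idempotent partial update needs extra machinery (e.g. If-Match, as suggested earlier in the thread) to be safe.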
On Tue, Apr 19, 2011 at 4:53 PM, mike amundsen <mamund@...> wrote: > IMO, CouchDB is HTTP-ful, but not REST-ful. SQL is neither HTTP-ful > nor REST-ful and that's fine. too. Different tools for different > problems|environments|etc. > > I wonder what a hypermedia database would look/act like. Not just a > database w/ links in responses, but one that _automatically_ produces > the hypermedia affordances appropriate for each response. In fact, I > wonder if this is desirable|possible. Right now my head is in a space > where the hypermedia controls are the real "design work" tied to a > particular set of use cases - not something that is native to the data > being passed about. But that's just me (and where I am today). Hi Mike, I really do agree with you: probably, even if we find a DB with these capabilities, we'll still need to write a lot of declarative configuration to make it work with your DAP. > > Not long ago I saw a Ted Nelson video[1] where he talks about Zigzag > data[2]. It's interesting, but I don't yet think (from what I've seen > there) that Zizag is a hypermedia database. > > Still, a true hypermedia database is an intriguing idea. > > [1] http://www.youtube.com/watch?v=WEj9vqVvHPc > [2] http://www.xanadu.com/zigzag/ > > mca > http://amundsen.com/blog/ > http://twitter.com@mamund > http://mamund.com/foaf.rdf#me > > > #RESTFest 2010 > http://rest-fest.googlecode.com > > > > > On Tue, Apr 19, 2011 at 09:49, Mark Baker <distobj@...> wrote: >> On Tue, Apr 19, 2011 at 6:35 AM, algermissen1971 >> <algermissen1971@...> wrote: >>> My take on it (but maybe outdated by now). >>> >> >> Ah, interesting. I've been using it recently and had pretty much the >> same thoughts esp regarding the non-use of hypermedia (and related, >> it's lack of hierarchy). It reminded me of a twitter exchange I had >> with Jim Webber where I criticized Amazon S3 for not using hypermedia, >> and he disagreed, claiming, IIRC, that it didn't need it since it was >> targeted at the developer. 
>> >> I didn't respond to Jim, but never felt entirely comfortable with his >> conclusion. The thing is, the Web SUCKS... not in the bad sense of >> that word, but in the sense that it tends to *pull* stuff into it due >> to network effects, including stuff you didn't expect (even stuff you >> stick an HTTP interface in front of). >> >> Case in point; http://couchapp.org/ >> >> I haven't used the application model described there, but I expect >> that without CouchDB using hypermedia, writing apps with it is >> probably a lot trickier than it would otherwise be. >> >> P.S. I wonder if CouchDB would have used hypermedia if JSON natively >> supported URLs? >> >> Mark. >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> >> > -- Nadalin Alessandro www.odino.org www.twitter.com/_odino_
Hi guys, I was wondering if a DAP is, simply, making explicit the workflow a consumer might follow during a service's consumption. I mean, something like slides #5 and #6 of Ian Robinson's slides: http://dl.dropbox.com/u/2877247/Domain%20Application%20Protocols%20-%20JFokus.pdf -- Nadalin Alessandro www.odino.org www.twitter.com/_odino_
On Apr 20, 2011, at 3:15 PM, Alessandro Nadalin wrote: > Hi guys, > > I was wondering if a DAP is, simply, expliciting the workflow a > consumer might follow during service's consumption. Personally, I have never liked the term (sorry Ian :-) because it (at least to me) sounds as if there was an agreement between client and server regarding the application that they engage in. IOW, the application flow *might* follow some pre-supposed state machine, but it also *might not*. Server and client simply cannot agree on this at design time - or it's not REST. However, - and this is what Ian is IMHO trying to capture with his term 'DAP' - media types do not fall from the sky at random but are designed on the basis of some set of interactions the designers think about when they create the media type. Clearly, when you design a media type for procurement you will have some basic interactions in mind. Jim Webber, IIRC, has called this 'canonical application' which I personally think makes it more explicit that there are also many, many other applications that can be created on the basis of a given media type. I have also thought of 'canonical use case'. Does that make sense? Jan > I mean, something like slide #5 and #6 of Ian Robinson's slides: > http://dl.dropbox.com/u/2877247/Domain%20Application%20Protocols%20-%20JFokus.pdf > > -- > Nadalin Alessandro > www.odino.org > www.twitter.com/_odino_ > > > ------------------------------------ > > Yahoo! Groups Links > > >
On Wed, Apr 20, 2011 at 7:36 PM, Jan Algermissen <algermissen1971@...> wrote: > > On Apr 20, 2011, at 3:15 PM, Alessandro Nadalin wrote: > >> Hi guys, >> >> I was wondering if a DAP is, simply, expliciting the workflow a >> consumer might follow during service's consumption. > > Personally, I have never liked the term (sorry Ian :-) because it (at least to me) sounds as if there was an agreement between client and server regarding the application that they engage in. IOW, the application flow *might* follow some pre-supposed state machine, but it also *might not*. Server and client simply cannot agree on this at design time - or it's not REST. > Which REST constraint is broken by pre-agreeing on application flow at design time? Doesn't that imply that AtomPub is not RESTful? I'm not sure about that. Parallels between applications for human-directed clients and machine-directed clients break down here. The reason that html apps work with very little pre-supposition is because humans are capable of levels of awareness, adaptation and intuition that are (currently) far beyond anything a machine can manage. I think it's too easy to underestimate how much that human capability affords change in application flow, and to over-estimate the capability of machines. Agreeing on application flow at design time, if the clients are machine-directed, is actually a Good Idea because the server will be under no illusion as to what changes it can enact, and it avoids the costs associated with trying to define application flows in more 'dynamic' terms. > However, - and this is what Ian is IMHO trying to capture with his term 'DAP' - media types do not fall from the sky at random but are designed on the basis of some set of interactions the designers think about when they create the media type. Clearly, when you design a media type for procurement you will have some basic interactions in mind. 
> Actually you can design media types that are completely application agnostic, and when using them define your DAPs purely in terms of entry points and link relations. Providing a more generic interface from an app agnostic media type seems like it might encourage healthier re-use in the system. Cheers, Mike
On Apr 20, 2011, at 9:24 PM, Mike Kelly wrote: > On Wed, Apr 20, 2011 at 7:36 PM, Jan Algermissen > <algermissen1971@...> wrote: >> >> On Apr 20, 2011, at 3:15 PM, Alessandro Nadalin wrote: >> >>> Hi guys, >>> >>> I was wondering if a DAP is, simply, expliciting the workflow a >>> consumer might follow during service's consumption. >> >> Personally, I have never liked the term (sorry Ian :-) because it (at least to me) sounds as if there was an agreement between client and server regarding the application that they engage in. IOW, the application flow *might* follow some pre-supposed state machine, but it also *might not*. Server and client simply cannot agree on this at design time - or it's not REST. >> > > Which REST constraint is broken by pre-agreeing on application flow at > design time? The hypermedia constraint. > Doesn't that imply that AtomPub is not RESTful? I'm not > sure about that. In some aspects AtomPub is not RESTful. For example, it defines that GET requests on collections return Atom feed documents. Design time knowledge like this is breaking the hypermedia constraint. The fix for this would probably be as easy as making explicit that other possibilities exist and that client should be prepared to deal with that. OTH, you could argue that this is implicit through the use of HTTP anyway. > > Parallels between applications for human-directed clients and > machine-directed clients break down here. The reason that html apps > work with very little pre-supposition is because humans are capable of > levels of awareness, adaptation and intuition that are (currently) far > beyond anything a machine can manage. I think it's too easy to > underestimate how much that human capability affords change in > application flow, and to over-estimate the capability of machines. I have come to think that it is misleading to make that distinction at all. User agents are software components that are programmed to work on behalf of some user's use case. 
In some use cases the user will perform many user action steps in that use case (e.g. ordering a book on Amazon); in other use cases the user will merely start the process and review the results (e.g. starting a user agent that compares book prices at various sites and stores the result in a database). In any case the user agent simply does what it has been programmed to do, and in any case the application that realizes the use case will end up in an 'error' state when the user's intent cannot be fulfilled. What REST brings to the table is that the 'error' state is a state we must expect and deal with. REST makes that possibility of failure explicit by acknowledging the very nature of networked and decentralized systems. (Instead of hiding it behind the false promise of control implied by other architectural styles) (Sorry, got carried away there :-) > > Agreeing on application flow at design time, if the clients are > machine-directed, is actually a Good Idea because the server will be > under no illusion as to what changes it can enact, and it avoids the > costs associated with trying to define application flows in more > 'dynamic' terms. Hmm, maybe. But it is simply not REST then. > >> However, - and this is what Ian is IMHO trying to capture with his term 'DAP' - media types do not fall from the sky at random but are designed on the basis of some set of interactions the designers think about when they create the media type. Clearly, when you design a media type for procurement you will have some basic interactions in mind. >> > > Actually you can design media types that are completely application > agnostic, and when using them define your DAPs purely in terms of > entry points and link relations. Providing a more generic interface > from an app agnostic media type seems like it might encourage > healthier re-use in the system. This I doubt (I think it just moves the problem elsewhere). But I am open to be convinced otherwise. Jan > > Cheers, > Mike
On Wed, Apr 20, 2011 at 1:32 PM, Jan Algermissen <algermissen1971@...> wrote: > > > > > Agreeing on application flow at design time, if the clients are > > machine-directed, is actually a Good Idea because the server will be > > under no illusion as to what changes it can enact, and it avoids the > > costs associated with trying to define application flows in more > > 'dynamic' terms. > > Hmm, maybe. But it is simply not REST then. > The quality of the client does not affect the RESTyness of an application. You can have a full boat REST application and stupid clients that choose (for whatever reason) to ignore it and tromp on their way doing whatever they do. Whether it's a hard coded client shoving requests at the server, or simply a user that cannot navigate the interface (doesn't see the links, doesn't understand them, whatever, wasn't trained to click the X button, was trained to click the Y button -- and the Y button is no longer there). Motivation to create a more flexible and adaptable client is tied to the difficulty of maintaining that client, the consequences of failure, and the velocity of change of the service that client is using. If someone writes a perl script against an interface that's been stable for 5 years, for something that can withstand failure IF the service changes, that has no effect on the capabilities and robustness of the service itself. Clients can simply choose how they document their procedures against the service. How many tutorials and such have you seen that are little more than a bunch of screenshots with blanks to fill in and buttons to press circled in red, yet with effectively NO explanation as to WHY the blanks are being filled, where that information comes from, what that information is for, etc.? And when the interface changes, when a new step is added to the wizard, the tutorial completely fails, along with the poor new user trying to follow it. 
That's a client error (in this case the tutorial documentation and the ignorance of the user), not the application's weakness. So, write clients as you like. It's the REST application's promise to provide the proper information for a client to make better decisions, but it can't force the clients to use that information appropriately; rather, all it can do is try and protect itself when they don't. Regards, Will Hartung (willh@...)
On Wed, Apr 20, 2011 at 9:32 PM, Jan Algermissen <algermissen1971@...> wrote: > > On Apr 20, 2011, at 9:24 PM, Mike Kelly wrote: > >> On Wed, Apr 20, 2011 at 7:36 PM, Jan Algermissen >> <algermissen1971@...> wrote: >>> >>> On Apr 20, 2011, at 3:15 PM, Alessandro Nadalin wrote: >>> >>>> Hi guys, >>>> >>>> I was wondering if a DAP is, simply, expliciting the workflow a >>>> consumer might follow during service's consumption. >>> >>> Personally, I have never liked the term (sorry Ian :-) because it (at least to me) sounds as if there was an agreement between client and server regarding the application that they engage in. IOW, the application flow *might* follow some pre-supposed state machine, but it also *might not*. Server and client simply cannot agree on this at design time - or it's not REST. >>> >> >> Which REST constraint is broken by pre-agreeing on application flow at >> design time? > > The hypermedia constraint. How? >> Doesn't that imply that AtomPub is not RESTful? I'm not >> sure about that. > > In some aspects AtomPub is not RESTful. For example, it defines that GET requests on collections return Atom feed documents. Design time knowledge like this is breaking the hypermedia constraint. > Sure, I meant in general. Presumably the protocol itself is all design time knowledge. Are you saying that the AtomPub protocol is not RESTful? > >> >> Parallels between applications for human-directed clients and >> machine-directed clients break down here. The reason that html apps >> work with very little pre-supposition is because humans are capable of >> levels of awareness, adaptation and intuition that are (currently) far >> beyond anything a machine can manage. I think it's too easy to >> underestimate how much that human capability affords change in >> application flow, and to over-estimate the capability of machines. > > I have come to think that it is misleading to make that distinction at all. 
User agents are software components that are programmed to work on behalf of some user's use case. In some use cases the user will perform many user action steps in that use case (e.g. ordering a book on Amazon) in other use cases the user will merely start the process and review the results (e.g starting a user agent that compares book prices at various sites an stores the result in a database). If that was really the case then why are we talking about 'design time'? Where's the distinction? >> >>> However, - and this is what Ian is IMHO trying to capture with his term 'DAP' - media types do not fall from the sky at random but are designed on the basis of some set of interactions the designers think about when they create the media type. Clearly, when you design a media type for procurement you will have some basic interactions in mind. >>> >> >> Actually you can design media types that are completely application >> agnostic, and when using them define your DAPs purely in terms of >> entry points and link relations. Providing a more generic interface >> from an app agnostic media type seems like it might encourage >> healthier re-use in the system. > > This I doubt (I think it just moves the problem elsewhere). But I am open to be convinced otherwise. > I have a generic media type that I'm confident could be used to express an equivalent of AtomPub purely in terms of rels and entry points. Be interested to understand where your skepticism lies, maybe better to discuss that on mike's hypermedia mailing list? Cheers, Mike
Will Hartung wrote: > On Wed, Apr 20, 2011 at 1:32 PM, Jan Algermissen <algermissen1971@...> wrote: > >>> Agreeing on application flow at design time, if the clients are >>> machine-directed, is actually a Good Idea because the server will be >>> under no illusion as to what changes it can enact, and it avoids the >>> costs associated with trying to define application flows in more >>> 'dynamic' terms. >> Hmm, maybe. But it is simply not REST then. >> > > The quality of the client does not affect the RESTyness of an application. A RESTful server side application? Why, I've never seen such a thing!
On Apr 21, 2011, at 3:26 PM, Nathan wrote: > > A RESTful server side application? why I've never seen such a thing! Keep in mind that client and server are part of 'the application'. Thinking of 'a server side application' is a REST-unrelated use of the term 'application'. Jan
--- In rest-discuss@yahoogroups.com, Craig McClanahan <craigmcc@...> wrote: > > Given that HTTP PATCH[1] has been a released IETF standard for over a year > now, and it specifically addresses the partial update use case, why are we > still wasting both bandwidth and intellectual bits talking about using PUT > for partial updates? > > Craig My question that started the thread was whether or not it is OK to use PUT for partial updates. The fact that another verb can be used for partial updates is irrelevant. That was already true with POST anyway. When you ask if something is legal per the spec, you don't get to add additional constraints on the design of solutions by requiring them to solve the problem a way you like. A good spec should unambiguously define what is and isn't compliant. If somebody comes along and implements a design that uses these verbs in a way nobody thought of, the only question is whether you can find black letters in the text of the spec that you can circle in a red pen to defend the assertion that "you didn't comply with this". Lore is that PUT cannot be a partial update, but no one has answered my challenge to point to the text of the spec to support this lore. Nothing in RFC 2616 backs up the contention that the server must forget all previous state; in fact, to the contrary, the spec says "HTTP/1.1 does not define how a PUT method affects the state of an origin server". It's required to be idempotent and to support the vague notion that "the enclosed entity be stored under the supplied Request-URI." To my reading, partial updates can satisfy all the constraints that are actually documented in the spec. It very well may be that the community that produced HTTP 1.1 did not have partial updates in mind when they wrote the spec. This is irrelevant to the question of whether the text of the spec forbids it or not. As to why not use PATCH, I don't object if others don't want to use it. 
I choose not to for a variety of reasons, which can be summarized as: it's too exotic for me to expect of my clients. A client which supports only a subset of the methods explicitly given by RFC 2616 will not understand PATCH, and I believe this is true of most of my clients. I also think I can solve the problem without it, by using POST, using PUT with refinements, or using PUT with partial updates.
Hi guys, I'm testing some browsers' behaviour with preconditions. I'm GETting /foo, which is a webpage in text/html with an ETag (1234) in the HTTP headers, with a form which has an action pointing to the same URL. So I would expect the browser to send the If-Match header when I submit the form, but that doesn't happen (I can't use PUT for obvious reasons; hope HTML5 will integrate it [thanks mike]). Is this the expected behavior? How can I fix the "lost update" problem, in a web application, at the protocol level? -- Nadalin Alessandro www.odino.org www.twitter.com/_odino_
This is expected behavior for common Web browsers; they do not return the ETag on POST. I think the W3C Amaya browser is an exception to this rule, but it's been a while since I worked with that one. I've used two ways to resolve this "missing ETag" problem when sending data to servers. The easiest approach is to include the ETag as an argument on the URI supplied in the POST form and have the server use that argument to do the concurrency check: <form method="post" action="/resource?etag=q1w2e3r4t5"> ... </form> An alternate POST FORM solution could be to use a hidden field in the FORM to hold the concurrency value: <form method="post" action="/resource"> ...<input type="hidden" name="etag" value="q1w2e3r4t5" /> </form> Another option is to abandon the Web browser's POST form pattern and use XmlHttpRequest via Javascript to capture the ETag and execute the method directly. By using Javascript you can also use the PUT or PATCH methods if that is more appropriate (semantically) for the operation you wish to perform. I use the first option when I am in an environment where javascript is not allowed or undesirable. I use the second option when "Ajax" style interactions with the server are expected. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2010 http://rest-fest.googlecode.com On Sun, Apr 24, 2011 at 15:55, Alessandro Nadalin <alessandro.nadalin@...> wrote: > Hi guys, > > I'm testing some browser's behaviour with preconditions. > > I'm GETting /foo, which is a webpage in text/html with an Etag (1234) > in the HTTP headers, with a form which has an action pointing to the > same URL. 
> > > -- > Nadalin Alessandro > www.odino.org > www.twitter.com/_odino_
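mike's hidden-field (or query-argument) workaround implies a matching check on the server. A minimal sketch of that side, assuming a dict-backed page and an md5-based validator (all names illustrative): the POST handler compares the etag the form carried against the resource's current one and rejects stale submissions with 412, mirroring what an If-Match header would have produced.

```python
import hashlib

def compute_etag(body):
    # Illustrative validator; any strong ETag scheme would do.
    return hashlib.md5(body.encode("utf-8")).hexdigest()

def handle_form_post(resource, form):
    """Apply the form only if its etag field still matches the resource,
    guarding against the lost-update problem."""
    if form.get("etag") != resource["etag"]:
        return 412  # Precondition Failed, as If-Match would yield
    resource["body"] = form["body"]
    resource["etag"] = compute_etag(form["body"])
    return 303  # See Other: redirect back to the updated page

page = {"body": "v1", "etag": compute_etag("v1")}
stale = handle_form_post(page, {"etag": "stale-etag", "body": "v2"})
fresh = handle_form_post(page, {"etag": page["etag"], "body": "v2"})
```

The stale submission is refused and the page is untouched; the fresh one (carrying the etag the browser was served at GET time) goes through and rotates the etag, so any other outstanding form becomes stale in turn.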
On Sun, Apr 24, 2011 at 10:20 PM, mike amundsen <mamund@...> wrote: > This is expected behavior for common Web browsers; they do not return > the ETag on POST. I think the W3C Amaya browser is an exception to > this rule, but it's been a while since I worked with that one. Do you think they have any reason for doing so? Sounds like a lack of functionality at the protocol level, to me. > > I've used two ways to resolve this "missing ETag" problem when sending > data to servers. > > The easiest approach is to include the ETag as an argument on the URI > supplied in the POST form and have the server use that argument to do > the concurrency check: > <form method="post" action="/resource?etag=q1w2e3r4t5"> > ... > </form> > > An alternate POST FORM solution could be to use a hidden field in the > FORM to hold the concurrency value: > <form method="post" action="/resource"> > ...<input type="hidden" name="etag" value="q1w2e3r4t5" /> > </form> > > Another option is to abandon the Web browser's POST form pattern and > use XmlHttpRequest via Javascript to capture the Etag and execute the > method directly. By using Javascript you can also use the PUT or > PATCH methods if that is more appropriate (semantically) for the > operatoin you wish to perform. > > I use the first option when I am in an environment where javascript is > not allowed or undesirable. I use the second option when "Ajax" style > interactions with the server are expected. > > mca > http://amundsen.com/blog/ > http://twitter.com@mamund > http://mamund.com/foaf.rdf#me > > > #RESTFest 2010 > http://rest-fest.googlecode.com > > > > > On Sun, Apr 24, 2011 at 15:55, Alessandro Nadalin > <alessandro.nadalin@...> wrote: >> Hi guys, >> >> I'm testing some browser's behaviour with preconditions. >> >> I'm GETting /foo, which is a webpage in text/html with an Etag (1234) >> in the HTTP headers, with a form which has an action pointing to the >> same URL. 
>> So I would expect the browser to send the If-Match header when I >> submit the form, but that doesn't happens ( I can't use PUT for >> obvious reasons, hope HTML5 will integrate it [ thanks mike ] ). >> >> Is this the expected behavior? How can I fix the "lost update" >> problem, in a web application, at the protocol level? >> >> >> -- >> Nadalin Alessandro >> www.odino.org >> www.twitter.com/_odino_ > -- Nadalin Alessandro www.odino.org www.twitter.com/_odino_
Bryan, it strikes me that the very part of the specification you're referring to as the ambiguity is the root of the clarity with regard to whether the PUT verb can accommodate a subset of the information associated with the representation of an existing addressable resource: 'The PUT method requests that the enclosed entity be stored under the supplied Request-URI. If the Request-URI refers to an already existing resource, the enclosed entity SHOULD be considered as a modified version of the one residing on the origin server.' (Section 9.6, RFC 2616) '...the URI in a PUT request identifies the entity enclosed with the request' (Section 9.6, RFC 2616) I'll explain why I see it this way. Firstly, the reference to 'enclosed entity' is key in my opinion. From a data modelling perspective, entity refers to something atomic comprising one or more attributes. Whilst this is far from a strong argument in the 2616 context - falling back to a general survey of definitions, there is a general trend towards this definition of entity: 'An entity is something that exists separately from other things and has a clear identity of its own.' (example from http://en.wikipedia.org/wiki/Entity) So the reference to an enclosed entity within the PUT verb semantics leads me to a strong belief that PUT should convey an atomic representation which will then be regarded by the server as the 'replacement' for any prior representations. This alone means to me that partial PUT, or PUT conveying an arbitrary subset of attributes (however much I may wish that to be embraced by 2616), is not compliant with the albeit terse 2616 specification. In reality I too have a strong tendency to look to optimise what may be a heavy overhead in conveying large amounts of information for what may be a single attribute modification in my resource representation between my clients and server. 
That said, I regard my own willingness to optimise and add more intelligence to my own implementation of the PUT (modify) interaction as my own overlay or deviation from the base specification in RFC 2616. In such cases, the merits of adopting this approach have to consider the likelihood of breaking assumptions between my clients and intermediaries about what is being conveyed by PUT.

The most interesting clause is the one referring to:

'HTTP/1.1 does not define how a PUT method affects the state of an origin server.' (Section 9.6, RFC 2616)

In my opinion this clause does mean that, so long as you ensure that the PUT is compliant with the specification (i.e. atomic replacement), the server is open to doing funky stuff with previous representation versions, or even deriving the extent of the deltas between versions of the resource. However, that still does not open the door to conveying partial resource representations between the client and the server, and I don't believe conveying a partial representation actually satisfies these constraints, as per your statement about satisfying all constraints.

--- In rest-discuss@yahoogroups.com, "bryan_w_taylor" <bryan_w_taylor@...> wrote:
>
> --- In rest-discuss@yahoogroups.com, Craig McClanahan <craigmcc@> wrote:
> >
> > Given that HTTP PATCH[1] has been a released IETF standard for over a year
> > now, and it specifically addresses the partial update use case, why are we
> > still wasting both bandwidth and intellectual bits talking about using PUT
> > for partial updates?
> >
> > Craig
>
> My question that started the thread was whether or not it is OK to use PUT for partial updates. The fact that another verb can be used for partial updates is irrelevant. That was already true with POST anyway. When you ask if something is legal per the spec, you don't get to add additional constraints on the design of solutions by requiring them to solve the problem a way you like.
> A good spec should unambiguously define what is and isn't compliant. If somebody comes along and implements a design that uses these verbs in a way nobody thought of, the only question is whether you can find black letters in the text of the spec that you can circle in red pen to defend the assertion that "you didn't comply with this".
>
> Lore is that PUT cannot be a partial update, but no one has answered my challenge to point to the text of the spec to support this lore. Nothing in RFC 2616 backs up the contention that the server must forget all previous state; in fact, to the contrary, the spec says "HTTP/1.1 does not define how a PUT method affects the state of an origin server". It's required to be idempotent and to support the vague notion that "the enclosed entity be stored under the supplied Request-URI." To my reading, partial updates can satisfy all the constraints that are actually documented in the spec.
>
> It very well may be that the community that produced HTTP 1.1 did not have partial updates in mind when they wrote the spec. This is irrelevant to the question of whether the text of the spec forbids it or not.
>
> As to why not use PATCH: I don't object if others want to use it. I choose not to, for a variety of reasons, which can be summarized as: it's too exotic for me to expect of my clients. A client which supports only a subset of the methods explicitly given by RFC 2616 will not understand PATCH, and I believe this is true of most of my clients. I also think I can solve the problem without it, by using POST, using PUT with refinements, or using PUT with partial updates.
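The two readings being argued over can be made concrete. This is a minimal sketch against a hypothetical in-memory store (the store, URIs, and attribute names are invented for illustration): full-replacement PUT discards attributes the enclosed entity omits, while a merge-style partial update (what PATCH, RFC 5789, was registered for) preserves them.

```python
# Sketch of the two server-side interpretations under debate
# (hypothetical in-memory store; not from the thread).

store = {}

def put_full_replace(uri, entity):
    # Atomic-replacement reading of RFC 2616: the enclosed entity
    # becomes the whole stored representation.
    store[uri] = dict(entity)

def patch_merge(uri, delta):
    # Merge-style partial update: untouched attributes survive.
    store.setdefault(uri, {}).update(delta)

store["/orders/1"] = {"status": "open", "qty": 5}
put_full_replace("/orders/1", {"status": "closed"})
print(store["/orders/1"])   # {'status': 'closed'} -- 'qty' is gone

store["/orders/1"] = {"status": "open", "qty": 5}
patch_merge("/orders/1", {"status": "closed"})
print(store["/orders/1"])   # {'status': 'closed', 'qty': 5}
```

Note that both operations are idempotent, which is why idempotence alone cannot settle the argument; the disagreement is over whether the enclosed entity must be the whole representation.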
On Mon, 2011-04-25 at 10:28 +0200, Alessandro Nadalin wrote:
> On Sun, Apr 24, 2011 at 10:20 PM, mike amundsen <mamund@...> wrote:
> > This is expected behavior for common Web browsers; they do not return
> > the ETag on POST. I think the W3C Amaya browser is an exception to
> > this rule, but it's been a while since I worked with that one.
>
> Do you think they have any reason for doing so?
>
> Sounds like a lack of functionality at the protocol level, to me.

I see it as a lack of expressiveness in HTML forms, i.e. at the media type level. After all, you might well want control over whether the ETag is sent, and for POST, ETags often don't matter, as the semantics could be anything, so adding an If-Match might not be desirable. For PUT it matters much more, but HTML forms cannot create the types of body that most PUTs need, which I think is why PUT/DELETE support was dropped from HTML5.

Justin
On Tue, Apr 26, 2011 at 3:45 PM, Justin Cormack <justin@specialbusservice.com> wrote:
> On Mon, 2011-04-25 at 10:28 +0200, Alessandro Nadalin wrote:
> > On Sun, Apr 24, 2011 at 10:20 PM, mike amundsen <mamund@yahoo.com> wrote:
> > > This is expected behavior for common Web browsers; they do not return
> > > the ETag on POST. I think the W3C Amaya browser is an exception to
> > > this rule, but it's been a while since I worked with that one.
> >
> > Do you think they have any reason for doing so?
> >
> > Sounds like a lack of functionality at the protocol level, to me.
>
> I see it as a lack of expressiveness in HTML forms, i.e. at the media
> type level. After all you might well want control over whether the ETag
> is sent, and for POST ETags often don't matter as the semantics could be
> anything so adding an If-Match might not be desirable. For PUT it
> matters much more, but HTML forms cannot create the types of body that

Hi Justin, that's why I said that I should use POST, damn forms :)

> most PUTs need, which I think is why the PUT/DELETE support was dropped
> from HTML5.

Luckily, Mike was able to make the working group reconsider their decision:
http://www.w3.org/Bugs/Public/show_bug.cgi?id=10671

> Justin

--
Nadalin Alessandro
www.odino.org
www.twitter.com/_odino_
Hi, I'd like to return a single URL in an HTTP response. Something like: ----------------- 200 OK http://example.com/12345 ----------------- I'm wondering about the media type to use here. I've seen some use of "text/url" but it isn't actually registered in the IANA MIME media types registry. Using text/plain? Using another representation for the URL? Basically, I'm curious about how other people on the group would approach this (and why). Philippe
You could use a Link header (see http://tools.ietf.org/html/draft-nottingham-http-link-header-06); that draft has since been published as RFC 5988 (Web Linking), so it is an accepted standard.
Or use HTML and make it self-describing:
<html>
<body>
<p>
<a href="http://xxx" id="the-link">This is the link you are looking for</a>
</p>
</body>
</html>
Or combine both :-)
/Jørn
--- In rest-discuss@yahoogroups.com, "Philippe Mougin" <pmougin@...> wrote:
>
> Hi,
>
> I'd like to return a single URL in an HTTP response. Something like:
>
> -----------------
> 200 OK
>
> http://example.com/12345
> -----------------
>
> I'm wondering about the media type to use here.
>
> I've seen some use of "text/url" but it isn't actually registered in the IANA MIME media types registry.
>
> Using text/plain?
> Using another representation for the URL?
>
> Basically, I'm curious about how other people on the group would approach this (and why).
>
> Philippe
>
On Apr 29, 2011, at 11:05 AM, Philippe Mougin wrote:
> Hi,
>
> I'd like to return a single URL in an HTTP response. Something like:
>
> -----------------
> 200 OK
>
> http://example.com/12345
> -----------------
>
> I'm wondering about the media type to use here.

text/uri-list is what I use for that.

Jan

> I've seen some use of "text/url" but it isn't actually registered in the IANA MIME media types registry.
>
> Using text/plain?
> Using another representation for the URL?
>
> Basically, I'm curious about how other people on the group would approach this (and why).
>
> Philippe
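For reference, the text/uri-list format Jan mentions (RFC 2483) is trivially simple: one URI per line, with lines beginning '#' treated as comments. A minimal sketch of producing and consuming such a body (helper names are invented):

```python
# Sketch: emitting and parsing a text/uri-list body (RFC 2483),
# e.g. for a single-URL response served with
#   Content-Type: text/uri-list

def make_uri_list(uris, comment=None):
    # RFC 2483 uses CRLF line endings; '#' lines are comments.
    lines = (["# " + comment] if comment else []) + list(uris)
    return "\r\n".join(lines) + "\r\n"

def parse_uri_list(body):
    return [line.strip() for line in body.splitlines()
            if line.strip() and not line.lstrip().startswith("#")]

body = make_uri_list(["http://example.com/12345"], comment="created resource")
assert parse_uri_list(body) == ["http://example.com/12345"]
```

Compared with the HTML option, this is machine-friendly but carries no link semantics; combining it with a Link header gets both.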
Hi,
I am trying to write up a section of my thesis on SA-REST. As far as I
can see, SA-REST has similar goals to hRESTS, i.e. to mark up HTML Web API
descriptions with semantic metadata. I have a few q's that I am looking for
help with:
1. Where does SA-REST fit in? The service model [1] looks very similar to the
service model of hRESTS [2]. According to the diagram at the start of [2],
SA-REST sits on top of hRESTS - why then does SA-REST define a similar service
model?
2. According to [2], SA-REST supports faceted search. Where are "p-lang-binding"
and "data-format" [2] coming from? Why is there no mention of them in the W3C
Submission ? [3].
3. In its service model [1], there is no lifting/lowering. How is this done
when using SA-REST?
4. Are there tools such as SWEET [4] for annotating HTML with SA-REST?
5. I am struggling to find a nice simple HTML example (not in RDFa), such as the
hotel example [2].
Thanks,
Sean.
[1] SA-REST service model http://www.w3.org/Submission/SA-REST/#sec_7
[2] hRESTS: an HTML Microformat for Describing RWS
http://knoesis.wright.edu/research/srl/projects/hRESTs/#restfulExample
[3] SA-REST service model http://www.w3.org/Submission/SA-REST/
[4] SWEET http://coconut.tie.nl:8080/dashboard/#1304592891993

Hello Sean,

It has been some time since I've looked at SA-REST and hRESTS, but I quickly viewed the links you provided. On first impressions I thought that SA-REST can be viewed as a vocabulary to semantically describe a RESTful service, while hRESTS is a way to embed that vocabulary in an HTML file; however, this is not entirely correct. There is, as you pointed out, an overlap between the two, and neither of them is clearly or completely defined.

I think the overlap is because SA-REST came slightly before hRESTS, and now they are saying that SA-REST can be viewed as an extension to hRESTS that offers things like p-lang-binding and data-format, which strangely enough aren't mentioned in their W3C submission. So maybe it's something they dropped or added later; I guess dropped, according to the dates, but I'm not sure.

About SA-REST's lifting/lowering: they mention it in a paper older than the W3C submission, 'SA-REST: Semantically Interoperable and Easier-to-Use Services and Mashups' http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4376235, and in it there is a simple example of SA-REST (I suspect an old one). (If you can't access the paper, I can send the example to you.)

So it seems SA-REST is not established yet; they keep on changing it without stating how it changed.

I hope this is helpful.

Regards,
Areeb

--- In rest-discuss@yahoogroups.com, Sean Kennedy <seandkennedy@...> wrote:
>
> Hi,
> I am trying to write up a section of my thesis on SA-REST. As far as I
> can see SA-REST has similar goals to hRESTS i.e. markup HTML Web API
> descriptions with semantic metadata. I have a few q's that I am looking for
> help with:
>
> 1. Where does SA-REST fit in? The service model [1] looks very similar to the
> service model of hRESTS [2]. According to the diagram at the start of [2],
> SA-REST sits on top of hRESTS - why then does SA-REST define a similar service
> model?
>
> 2.
According to [2], SA-REST supports faceted search. Where are "p-lang-binding"
> and "data-format" [2] coming from? Why is there no mention of them in the W3C
> Submission? [3]
>
> 3. In its service model [1], there is no lifting/lowering. How is this done
> when using SA-REST?
>
> 4. Are there tools such as SWEET [4] for annotating HTML with SA-REST?
>
> 5. I am struggling to find a nice simple HTML example (not in RDFa), such as the
> hotel example [2].
>
> Thanks,
> Sean.
>
> [1] SA-REST service model http://www.w3.org/Submission/SA-REST/#sec_7
> [2] hRESTS: an HTML Microformat for Describing RWS
> http://knoesis.wright.edu/research/srl/projects/hRESTs/#restfulExample
> [3] SA-REST http://www.w3.org/Submission/SA-REST/
> [4] SWEET http://coconut.tie.nl:8080/dashboard/#1304592891993
Sean Kennedy wrote:
>
> I am trying to write up a section of my thesis on SA-REST.
>

Hopefully, explaining to the REST community what this is, and what it has to do with REST? ;-) What's the overall topic of your thesis?

>
> I have a few q's that I am looking for help with:
>

We've bandied about the issue of whether or not IDLs have any place in REST, many times. Since I don't see the point of an IDL when using the hypertext constraint, I don't see the point of IDL-as-microformat, either. I've also never seen the point of a machine-readable service document as an endpoint user-agents need to consult before taking action -- what results is some other architectural style (where the semantics of the URI mappings vary based on some hash table at an "entry point" URI, instead of remaining RESTfully static).

So I can't answer your questions, since I don't know how this "fits in" with REST, either. What hRESTS/SA-REST look like to me is kludged-in tooling support to more efficiently produce the HTTP APIs most folks *call* REST APIs these days. Remember, I don't judge APIs by whether they're RESTful, only by how well they're suited to their purposes, so I'm not scorning any project which may result in better APIs -- my *opinion* is that this approach may even lead some people _to_ REST's hypertext constraint, so it's probably a good thing, just mis-labeled.

We've also discussed machine-readability many times; there are those who prefer machine-targeted data types, and those who prefer RDFa. I see RDFa as a superior solution to microformats, for any purpose, and hRESTS is another example of why -- instead of a general-purpose parsing model, each microformat has its own unique parsing model, usually defined as XSLT -- as is the case with hRESTS/SA-REST, which GRDDL-maps its microformat to RDF, begging the question "why not just use RDFa?"

Interoperability is a concern; modular XHTML encompasses Xforms, which gives the ability to "describe" more HTTP-method-rich APIs, but those tokens collide with hRESTS -- which really shouldn't use class='label', because that collides with <label>, as well. The reason it's easier to create RDF vocabularies than it is to create markup languages (or even microformats) is that the vocabulary author doesn't have to worry about whether browsers' javascript forms modules reserve 'label' as a keyword, etc.

In a nutshell, I don't see how using hRESTS/SA-REST would result in the RESTful APIs I've done using Xforms/RDFa; although by solving what I (who don't use tooling for API development) consider a non-problem, I can see how I could've produced functionally equivalent HTTP APIs in a fraction of the time. Which seems to be the problem with any effort to mass-produce RESTful APIs: what's lost in translation is all the design-for-longevity goodness which distinguishes REST APIs from HTTP APIs.

The hRESTS/SA-REST approach intrigues me from an HTTP tinkerer perspective, supporting server-parsed server-configuration-on-the-fly. On one hand, this would philosophically violate separation of concerns; OTOH, long-term maintenance of Web systems based on static files may benefit from having fewer files to edit.

>
> 1. Where does SA-REST fit in? The service model [1] looks very
> similar to the service model of hRESTS [2]. According to the diagram
> at the start of [2], SA-REST sits on top of hRESTS - why then does
> SA-REST define a similar service model?
>

Perhaps you should ask this of the authors, since they're the same?

>
> 2. According to [2], SA-REST supports faceted search. Where are
> "p-lang-binding" and "data-format" [2] coming from? Why is there no
> mention of them in the W3C Submission? [3]
>

That is a good question, and if it's machine-readable, then where are those tokens like PHP and Java defined, and how is versioning accounted for? I understand the problem of mashing up services which use different formats, but I don't understand the rationale of solving this by making services searchable by data format (or any other "facet"), let alone what it has to do with REST, other than attaching that name to new IBM products which support this feature. Seems like tight coupling to me, hence my confusion on calling it REST...

Particularly when their example of a RESTful Web Service is a JSON-RPC endpoint.

>
> 3. In its service model [1], there is no lifting/lowering. How is
> this done when using SA-REST?
>

What *are* lifting and lowering?

-Eric
Hi Eric,
Thanks for the time you took on that detailed response - appreciate it. My
thesis is based on a mapping framework transforming XML Web Services (both SOAP
and POX) to RESTful HTTP format. This framework has two stages: a) a
configuration stage that sets up a mapping file and b) a run-time adapter that
transforms the messages based on the mapping file.
The advantages are that this framework enables the Web architecture (POST
can be replaced with GET in certain situations); the framework helps with
gradual migration from SOAP/POX to RESTful HTTP WS. It has constraints of
course, principally: URI limits for GET/DELETE, and SOAP/POX POSTs which map
logically to multiple RESTful URIs are left untouched (i.e. as a POST).
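The thread does not show the framework's actual mapping-file format, but the shape of what the configuration stage produces can be sketched. In this purely illustrative example, every name (operations, templates, the table itself) is invented:

```python
# Hypothetical sketch of the mapping a configuration stage might emit:
# SOAP/POX operation name -> (HTTP method, URI template). All names
# here are invented for illustration.

MAPPING = {
    "getOrder":    ("GET",    "/orders/{id}"),
    "deleteOrder": ("DELETE", "/orders/{id}"),
    "createOrder": ("POST",   "/orders"),
}

def translate(operation, params):
    # Run-time adapter step: resolve the operation to a concrete
    # HTTP method and URI.
    method, template = MAPPING[operation]
    return method, template.format(**params)

assert translate("getOrder", {"id": "42"}) == ("GET", "/orders/42")
```

The constraint Sean mentions shows up naturally here: an operation whose parameters will not fit a URI template, or which maps to multiple URIs, has no row in such a table and stays a POST.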
Where the Semantic Web comes into it is in the mapping file that informs the
run-time adapter. The first version had a manual setup where the user matched
the XML WS operations to RESTful HTTP verbs. The second/current version uses the
Semantic Web to automate this process. Currently, I use SAWSDL for the XML WS
side and hRESTS/MicroWSMO on the RESTful WS side. A tool called Core Dashboard
enables me to annotate both services [1]. Both sides point to the "conceptually"
higher semantic ontology layer (WSMO-Lite is the standard I use for this layer).
In my thesis I will have to cover the alternatives and originally SA-REST
appeared to be that. However, the W3C submission is different to earlier
publications. My understanding is that it is a choice between a microformat
option (hRESTS/MicroWSMO) and an RDFa option (RDFa/SA-REST). So if that is the
case, is my logic below correct:
1. It would appear that SA-REST is equivalent to (hRESTS + MicroWSMO). It has a
service model similar to hRESTS and using the SA-REST properties (domain-rel,
sem-rel and sem-class) can point to the semantic layer (as MicroWSMO does).
2. Then using RDFa, SA-REST has the ability to be serialised from XHTML as
RDF.
Thanks again,
Sean.
PS A lifting is an XSLT transformation that maps e.g. a SOAP message to the
"conceptually higher" semantic layer (an RDF file); a lowering is the opposite.
[1] Core Dashboard, http://coconut.tie.nl:8080/dashboard/#1304670463179
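The lifting defined in the PS can be illustrated with a toy example. Real pipelines do this with XSLT driven by SAWSDL annotations; this Python stand-in (element names, subject URIs, and the vocabulary namespace are all invented) just shows the direction of the mapping, from an XML payload up to RDF triples:

```python
# Toy illustration of "lifting": raising a (fake) SOAP payload to
# N-Triples. Names and vocabulary URIs are invented for illustration.

import xml.etree.ElementTree as ET

SOAP_FRAGMENT = (
    '<order xmlns="urn:example">'
    '<id>42</id><status>open</status></order>'
)

def lift(xml_text):
    root = ET.fromstring(xml_text)
    ns = "{urn:example}"
    subject = "<http://example.org/order/%s>" % root.find(ns + "id").text
    # One triple per child element, serialized as N-Triples.
    return ['%s <%s> "%s" .' %
            (subject,
             child.tag.replace(ns, "http://example.org/vocab#"),
             child.text)
            for child in root]

triples = lift(SOAP_FRAGMENT)
assert triples[0].startswith("<http://example.org/order/42>")
```

Lowering is the inverse: serializing triples from the semantic layer back down into the message syntax a service expects.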
________________________________
From: Eric J. Bowman <eric@...>
To: Sean Kennedy <seandkennedy@yahoo.co.uk>
Cc: Rest Discussion Group <rest-discuss@yahoogroups.com>
Sent: Fri, 6 May, 2011 3:57:13
Subject: Re: [rest-discuss] SA-REST
Sean Kennedy wrote:
>
> I am trying to write up a section of my thesis on SA-REST.
>
Hopefully, explaining to the REST community what this is, and what it
has to do with REST? ;-) What's the overall topic of your thesis?
>
> I have a few q's that I am looking for help with:
>
We've bandied-about the issue of whether or not IDLs have any place in
REST, many times. Since I don't see the point of an IDL when using the
hypertext constraint, I don't see the point of IDL-as-microformat,
either. I've also never seen the point of a machine-readable service
document as an endpoint user-agents need to consult before taking
action -- what results is some other architectural style (where the
semantics of the URI mappings vary based on some hash table at an
"entry point" URI, instead of remaining RESTfully static).
So I can't answer your questions, since I don't know how this "fits in"
with REST, either. What hRESTS/SA-REST look like to me, is kludged-in
tooling support to more efficiently produce the HTTP APIs most folks
*call* REST APIs these days. Remember, I don't judge APIs by whether
they're RESTful, only how well they're suited to their purposes, so I'm
not scorning any project which may result in better APIs -- my *opinion*
is that this approach may even lead some people _to_ REST's hypertext
constraint, so it's probably a good thing, just mis-labeled.
We've also discussed machine-readability many times; there are those
who prefer machine-targeted data types, and those who prefer RDFa. I
see RDFa as a superior solution to microformats, for any purpose, and
hRESTS is another example of why -- instead of a general-purpose parsing
model, each microformat has its own unique parsing model, usually
defined as XSLT -- as is the case with hRESTS/SA-REST, which GRDDL-maps
its microformat to RDF, begging the question "why not just use RDFa?"
Interoperability is a concern; modular XHTML encompasses Xforms, which
gives the ability to "describe" more HTTP-method-rich APIs, but those
tokens collide with hRESTS -- which really shouldn't use class='label'
because that collides with <label>, as well. The reason it's easier to
create RDF vocabularies than it is to create markup languages (or even
microformats), is the vocabulary author doesn't have to worry about if
browsers' javascript forms-modules reserve 'label' as a keyword, etc.
In a nutshell, I don't see how using hRESTS/SA-REST would result in the
RESTful APIs I've done using Xforms/RDFa; although by solving what I
who doesn't use tooling for API development considers a non-problem, I
can see how I could've produced functionally-equivalent HTTP APIs in a
fraction of the time. Which seems to be the problem with any effort to
mass-produce RESTful APIs, what's lost in translation is all the design-
for-longevity goodness which distinguishes REST APIs from HTTP APIs.
The hRESTS/SA-REST approach intrigues me from an HTTP tinkerer
perspective, supporting server-parsed server-configuration-on-the-fly.
On one hand, this would philosophically violate separation of concerns;
OTOH, long-term maintenance of Web systems based on static files may
benefit from having fewer files to edit.
>
> 1. Where does SA-REST fit in? The service model [1] looks very
> similar to the service model of hRESTS [2]. According to the diagram
> at the start of [2], SA-REST sits on top of hRESTS - why then does
> SA-REST define a similar service model?
>
Perhaps you should ask this of the authors, since they're the same?
>
> 2. According to [2], SA-REST supports faceted search. Where are
> "p-lang-binding" and "data-format" [2] coming from? Why is there no
> mention of them in the W3C Submission ? [3].
>
That is a good question, and if it's machine readable, then where are
those tokens like PHP and Java defined and how is versioning accounted
for? I understand the problem of mashing up services which use
different formats, but I don't understand the rationale of solving this
by making services searchable by data format (or any other "facet"),
let alone what it has to do with REST other than attaching that name to
new IBM products which support this feature. Seems like tight coupling
to me, hence my confusion on calling it REST...
Particularly when their example of a RESTful Web Service is a JSON-RPC
endpoint.
>
> 3. In its service model [1], there is no lifting/lowering. How is
> this done when using SA-REST?
>
What *are* lifting and lowering?
-Eric
In my mind, REST or ROA thinking is an inversion of SOA thinking, and that inversion is at the heart of people's difficulty with "getting" REST. So I've been trying to find ways to make that flip easier, or more understandable. Here's my latest blog post, which aims to do that: http://duncan-cragg.org/blog/post/mature-rest-easy/ Cheers! Duncan Cragg
Hi Eric,
Thanks for the time you took on that detailed response - appreciate it. My
thesis is based on a mapping framework transforming XML Web Services (both SOAP
and POX) to RESTful HTTP format. This framework has two stages: a) a
configuration stage that sets up a mapping file and b) a run-time adapter that
transforms the messages based on the mapping file.
The advantages are that this framework enables the Web architecture (POST
can be replaced with GET in certain situations); the framework helps with
gradual migration from SOAP/POX to RESTful HTTP WS. It has constraints of
course, principally: URI limits for GET/DELETE and SOAP/POX POSTs which map
logically to multiple RESTful URI's are left untouched (i.e. as a POST).
Where the Semantic Web comes into it is in the mapping file that informs the
run-time adapter. The first version had a manual setup where the user matched
the XML WS operations to RESTful HTTP verbs. The second/current version uses the
Semantic Web to automate this process. Currently, I use SAWSDL for the XML WS
side and hRESTS/MicroWSMO on the RESTful WS side. A tool called Core Dashboard
enables me to annotate both services [1]. Both sides point to the "conceptually"
higher semantic ontology layer (WSMO-Lite is the standard I use for this layer).
In my thesis I will have to cover the alternatives and originally SA-REST
appeared to be that. However, the W3C submission is different to earlier
publications. My information is that it is a microformat (hRESTS/MicroWSMO)
versus RDFa (RDFa/SA-REST) option. So if that is the case is my logic below
correct:
1. It would appear that SA-REST is equivalent to (hRESTS + MicroWSMO). It has a
service model similar to hRESTS and using the SA-REST properties (domain-rel,
sem-rel and sem-class) can point to the semantic layer (as MicroWSMO does).
2. Then using RDFa, SA-REST has the ability to be serialised from XHTML as
RDF.
Thanks again,
Sean.
PS A lifting is an XSLT transformation that maps e.g. a SOAP message to the
"conceptually higher" semantic layer (an RDF file); a lowering is the opposite.
[1] Core Dashboard, http://coconut.tie.nl:8080/dashboard/#1304670463179
________________________________
From: Eric J. Bowman <eric@...>
To: Sean Kennedy <seandkennedy@yahoo.co.uk>
Cc: Rest Discussion Group <rest-discuss@yahoogroups.com>
Sent: Fri, 6 May, 2011 3:57:13
Subject: Re: [rest-discuss] SA-REST
Sean Kennedy wrote:
>
> I am trying to write up a section of my thesis on SA-REST.
>
Hopefully, explaining to the REST community what this is, and what it
has to do with REST? ;-) What's the overall topic of your thesis?
>
> I have a few q's that I am looking for help with:
>
We've bandied-about the issue of whether or not IDLs have any place in
REST, many times. Since I don't see the point of an IDL when using the
hypertext constraint, I don't see the point of IDL-as-microformat,
either. I've also never seen the point of a machine-readable service
document as an endpoint user-agents need to consult before taking
action -- what results is some other architectural style (where the
semantics of the URI mappings vary based on some hash table at an
"entry point" URI, instead of remaining RESTfully static).
So I can't answer your questions, since I don't know how this "fits in"
with REST, either. What hRESTS/SA-REST look like to me, is kludged-in
tooling support to more efficiently produce the HTTP APIs most folks
*call* REST APIs these days. Remember, I don't judge APIs by whether
they're RESTful, only how well they're suited to their purposes, so I'm
not scorning any project which may result in better APIs -- my *opinion*
is that this approach may even lead some people _to_ REST's hypertext
constraint, so it's probably a good thing, just mis-labeled.
We've also discussed machine-readability many times; there are those
who prefer machine-targeted data types, and those who prefer RDFa. I
see RDFa as a superior solution to microformats, for any purpose, and
hRESTS is another example of why -- instead of a general-purpose parsing
model, each microformat has its own unique parsing model, usually
defined as XSLT -- as is the case with hRESTS/SA-REST, which GRDDL-maps
its microformat to RDF, begging the question "why not just use RDFa?"
Interoperability is a concern; modular XHTML encompasses Xforms, which
gives the ability to "describe" more HTTP-method-rich APIs, but those
tokens collide with hRESTS -- which really shouldn't use class='label'
because that collides with <label>, as well. The reason it's easier to
create RDF vocabularies than it is to create markup languages (or even
microformats), is the vocabulary author doesn't have to worry about if
browsers' javascript forms-modules reserve 'label' as a keyword, etc.
In a nutshell, I don't see how using hRESTS/SA-REST would result in the
RESTful APIs I've done using Xforms/RDFa; although by solving what I
who doesn't use tooling for API development considers a non-problem, I
can see how I could've produced functionally-equivalent HTTP APIs in a
fraction of the time. Which seems to be the problem with any effort to
mass-produce RESTful APIs, what's lost in translation is all the design-
for-longevity goodness which distinguishes REST APIs from HTTP APIs.
The hRESTS/SA-REST approach intrigues me from an HTTP tinkerer
perspective, supporting server-parsed server-configuration-on-the-fly.
On one hand, this would philosophically violate separation of concerns;
OTOH, long-term maintenance of Web systems based on static files may
benefit from having fewer files to edit.
>
> 1. Where does SA-REST fit in? The service model [1] looks very
> similar to the service model of hRESTS [2]. According to the diagram
> at the start of [2], SA-REST sits on top of hRESTS - why then does
> SA-REST define a similar service model?
>
Perhaps you should ask this of the authors, since they're the same?
>
> 2. According to [2], SA-REST supports faceted search. Where are
> "p-lang-binding" and "data-format" [2] coming from? Why is there no
> mention of them in the W3C Submission ? [3].
>
That is a good question, and if it's machine readable, then where are
those tokens like PHP and Java defined and how is versioning accounted
for? I understand the problem of mashing up services which use
different formats, but I don't understand the rationale of solving this
by making services searchable by data format (or any other "facet"),
let alone what it has to do with REST other than attaching that name to
new IBM products which support this feature. Seems like tight coupling
to me, hence my confusion on calling it REST...
Particularly when their example of a RESTful Web Service is a JSON-RPC
endpoint.
>
> 3. In its service model [1], there is no lifting/lowering. How is
> this done when using SA-REST?
>
What *are* lifting and lowering?
-Eric
Hi Eric,
Thanks for the time you took on that detailed response - appreciate it. My
thesis is based on a mapping framework transforming XML Web Services (both SOAP
and POX) to RESTful HTTP format. This framework has two stages: a) a
configuration stage that sets up a mapping file and b) a run-time adapter that
transforms the messages based on the mapping file.
The advantages are that this framework enables the Web architecture (POST
can be replaced with GET in certain situations); the framework helps with
gradual migration from SOAP/POX to RESTful HTTP WS. It has constraints of
course, principally: URI limits for GET/DELETE and SOAP/POX POSTs which map
logically to multiple RESTful URI's are left untouched (i.e. as a POST).
Where the Semantic Web comes into it is in the mapping file that informs the
run-time adapter. The first version had a manual setup where the user matched
the XML WS operations to RESTful HTTP verbs. The second/current version uses the
Semantic Web to automate this process. Currently, I use SAWSDL for the XML WS
side and hRESTS/MicroWSMO on the RESTful WS side. A tool called Core Dashboard
enables me to annotate both services [1]. Both sides point to the "conceptually"
higher semantic ontology layer (WSMO-Lite is the standard I use for this layer).
In my thesis I will have to cover the alternatives and originally SA-REST
appeared to be that. However, the W3C submission is different to earlier
publications. My information is that it is a microformat (hRESTS/MicroWSMO)
versus RDFa (RDFa/SA-REST) option. So if that is the case is my logic below
correct:
1. It would appear that SA-REST is equivalent to (hRESTS + MicroWSMO). It has a
service model similar to hRESTS and, using the SA-REST properties (domain-rel,
sem-rel and sem-class), can point to the semantic layer (as MicroWSMO does).
2. Then using RDFa, SA-REST has the ability to be serialised from XHTML as
RDF.
Thanks again,
Sean.
PS A lifting is an XSLT transformation that maps e.g. a SOAP message to the
"conceptually higher" semantic layer (an RDF file); a lowering is the opposite.
[1] Core Dashboard, http://coconut.tie.nl:8080/dashboard/#1304670463179
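As a hedged illustration of the lifting Sean describes (real lifting is an XSLT transformation against a semantic ontology such as WSMO-Lite; the toy SOAP payload, element names, and ontology URI below are all invented), a lowering would simply run the same mapping in reverse:

```python
# Sketch: "lift" a (toy, namespace-free) SOAP payload up to RDF-style
# subject/predicate/object triples. Purely illustrative, not Sean's XSLT.
import xml.etree.ElementTree as ET

soap = """<Envelope><Body>
  <getCustomer><id>42</id></getCustomer>
</Body></Envelope>"""

def lift(message):
    """Produce triples from the first operation element in the SOAP body."""
    body = ET.fromstring(message).find("Body")
    op = list(body)[0]
    subject = "http://example.org/ops/" + op.tag  # hypothetical ontology URI
    return [(subject, child.tag, child.text) for child in op]

print(lift(soap))  # [('http://example.org/ops/getCustomer', 'id', '42')]
```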
________________________________
From: Eric J. Bowman <eric@...>
To: Sean Kennedy <seandkennedy@yahoo.co.uk>
Cc: Rest Discussion Group <rest-discuss@yahoogroups.com>
Sent: Fri, 6 May, 2011 3:57:13
Subject: Re: [rest-discuss] SA-REST
It is often stated that a client just has to know one starting URI to get what the service has to offer. Still, is there some common way (or media type) to do this? Is WADL a bad idea (some say WADL is "unRESTful")? Or should one just use a bunch of links?

Regards,
Jakob
On 11 May, 2011, at 01:56 PM, Jakob Strauch <jakob.strauch@web.de> wrote:
> It is often stated that a client just has to know one starting URI to get what the service has to offer.

Right, one of many possible entry points. Every URI that makes sense for you to bookmark makes for a fine entry point.

> Still, is there some common way (or media type) to do this?

What do you mean? If your user agent understands the media types offered by the server, the user agent will be able to proceed through the application as it intends. There is no need for any special media type. However, you surely must know what service you want to interact with in order to configure the application (meaning: select the appropriate entry URI). If your intention is to buy a book, http://www.bahn.de is probably a bad choice. Note that this selection is not based on any technical service description but on knowledge about the nature of the service. This is what often confuses people, because they equate the description of the service kind/nature (e.g. a blog server) with its API description. Both are completely orthogonal issues - it is just that RPC-style connectors (e.g. WS-*) tend to give the impression that they are one and the same.

Thus, simply determine where to point your user agent and then let HTTP figure out the rest. There is no need whatsoever for service description in the sense of API descriptions.

> Is WADL a bad idea (some are stating, WADL is "unRESTful")?

Sure - it tells the client more than it should know. WADL causes just the coupling you actually want to eliminate when you are using REST.

> Or just use a bunch of links?

Bingo :-)

Jan
On 11 May, 2011, at 01:56 PM, Jakob Strauch <jakob.strauch@...> wrote:
> It is often stated that a client just has to know one starting URI to get what the service has to offer.
>
> Still, is there some common way (or media type) to do this?

You might be interested in how to use DNS for discovery: http://www.infoq.com/articles/rest-discovery-dns

Jan
Hi Jakob,

On Wed, May 11, 2011 at 1:56 PM, Jakob Strauch <jakob.strauch@web.de> wrote:
> It is often stated that a client just has to know one starting URI to get what the service has to offer.
>
> Still, is there some common way (or media type) to do this? Is WADL a bad idea (some are stating, WADL is "unRESTful")?
>
> Or just use a bunch of links?

The hypermedia tenet states that the client should have no prior knowledge about the whole structure of the service. So, yes, you should feed your consumers with semantic (rel="payment", for example) hypermedia controls.

--
Nadalin Alessandro
www.odino.org
www.twitter.com/_odino_
OK, I understand that (I think ;-)). But would a media type for "just a bunch of links" not be useful, even if it would result in a simplistic media type?
Right now, I'm able to define the links like:
<links>
<link rel="payment" .../>
<link rel="otherstuff" .../>
...
</links>
or like:
<service>
<toplevelressources>
<link rel="payment" .../>
<link rel="otherstuff" .../>
...
</toplevelressources>
</service>
or whatever...
Sure, you have to tell your clients what media types you're offering (domain-specific). But do I have to tell the clients (of maybe different applications/domains) how to interpret the URI lists above?
On Wed, May 11, 2011 at 2:55 PM, Jakob Strauch <jakob.strauch@...> wrote:
> OK, I understand that (I think ;-)). But would a media type for "just a bunch of links" not be useful, even if it would result in a simplistic media type?

Atom seems good for you ;-) http://www.ietf.org/rfc/rfc4287.txt

--
Nadalin Alessandro
www.odino.org
www.twitter.com/_odino_
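As a rough illustration of the Atom suggestion (a sketch, not from the thread; the feed content below is invented), the `<atom:link>` elements of RFC 4287 can carry exactly such a bunch of rel-qualified links, readable with any XML parser:

```python
# Read the rel/href pairs out of an Atom feed used as a service entry point.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom namespace per RFC 4287

feed = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Service entry point</title>
  <link rel="payment" href="http://example.org/payments"/>
  <link rel="self" href="http://example.org/"/>
</feed>"""

# Map each link relation to its target URI
links = {l.get("rel"): l.get("href")
         for l in ET.fromstring(feed).findall(ATOM + "link")}
print(links["payment"])  # -> http://example.org/payments
```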
Yes, you should tell the clients how to interpret the REST service. Just look at the Google API documentation; most of the Google APIs are REST services. It might give you a clue on how to use documentation to tell clients how to access your REST service.

On Wed, May 11, 2011 at 7:55 PM, Jakob Strauch <jakob.strauch@...> wrote:
> Sure, you have to tell your clients what media types you're offering (domain-specific). But do I have to tell the clients (of maybe different applications/domains) how to interpret the URI lists above?
On May 11, 2011, at 2:55 PM, Jakob Strauch wrote:
> But do I have to tell the clients (of maybe different applications/domains) how to interpret the URI lists above?

Yes, the media type tells them. That *is* the contract between client and server.

Link relation definitions, while not a media type, are conceptually similar. If the client understands them (IOW has been programmed to understand them), the client can meaningfully react to those links. Otherwise they are just links (which is not at all bad for something like a crawler).

Jan
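Jan's point, that a client is "programmed to understand" certain link relations and treats the rest as plain links, can be sketched as follows (a hypothetical Python client; the rel values, URIs, and document format are invented for illustration):

```python
# Split a "bunch of links" document into rels this client can act on
# and plain links it merely knows are links (crawler-style).
import xml.etree.ElementTree as ET

KNOWN_RELS = {"payment", "next"}  # rels this client has handlers for

def classify_links(document):
    """Return (actionable, plain) lists of (rel, href) pairs."""
    root = ET.fromstring(document)
    actionable, plain = [], []
    for link in root.iter("link"):
        rel, href = link.get("rel"), link.get("href")
        (actionable if rel in KNOWN_RELS else plain).append((rel, href))
    return actionable, plain

doc = """<links>
  <link rel="payment" href="/orders/1/payment"/>
  <link rel="otherstuff" href="/misc"/>
</links>"""
act, rest = classify_links(doc)
print(act)   # links the client reacts to
print(rest)  # just links, still followable by a crawler
```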
On May 11, 2011, at 3:18 PM, Reza Lesmana wrote:
> Yes, you should tell the clients how to interpret the rest service.
>
> Just look at the Google API documentation, most of Google API is a REST service.

Strictly speaking, no, because the Google APIs do not use specific global media types - Google owns the contract. However, it's close, given the ubiquity and relative 'friendly behavior' of Google.

Jan
Yes, I know that the media type is one part of the contract. Someone sent me a private reply which addresses what I meant:

----
Look for XRDS and XRD (http://www.oasis-open.org/committees/download.php/32686/xrd-1.0-wd01.html)

HTML is also pretty good (a list of anchors with "rel=..." attributes).

Maybe also LRDD (http://tools.ietf.org/html/draft-hammer-discovery-06)
-----

I think I would use conneg here to provide a human-readable version with simple HTML anchors and also, for instance, an XRD for describing the links to my resources. But HTML has poor support for URI Templates, and XRD seems to have been a working draft for 2 years...
On May 11, 2011, at 6:08 PM, Jakob Strauch wrote:
> I think I would use conneg here to provide a human-readable version with simple HTML anchors and also, for instance, an XRD for describing the links to my resources. But HTML has poor support for URI Templates, and XRD seems to have been a working draft for 2 years...

Why don't you just define a media type that covers the semantics your intended applications need? Why do you need a description format *and* then specify your specific semantics in addition to that?

Jan
<snip>
> But HTML has bad support for URI Templates and XRD seems to be a working draft for 2 years...

Then maybe URI templates are a bad idea... maybe we should stick with <form>s?

-Solomon
Sean,

Hope you took a look at Hypermedia and Model-Driven Development, presented at WS-REST 2011. The links for the paper and slides are here: http://ws-rest.org/2011/proc/a2-liskin.pdf and http://ws-rest.org/2011/proc/s2-liskin.pdf

I too am working on creating a similar tool. I'm thinking about how to go about implementing the client framework, which should work on any SOAP-to-RESTful service. In case you have come across any design for the same, please share. Thanks!

Best regards,
Saravan.
Hi Saravan,
Thanks for that. Some work that may be of interest to you is
http://dev.aol.com/rest_and_soap_sharing (thanks to Stefan Tilkov for that
link).
Regards,
Sean.
________________________________
From: Saravanakumaar Jeyabalan <jsarava@...>
To: Sean Kennedy <seandkennedy@yahoo.co.uk>; Eric J. Bowman
<eric@...>
Cc: Rest Discussion Group <rest-discuss@yahoogroups.com>
Sent: Fri, 13 May, 2011 2:08:58
Subject: Re: [rest-discuss] SA-REST
________________________________
From: Sean Kennedy <seandkennedy@...>
To: Eric J. Bowman <eric@...>
Cc: Rest Discussion Group <rest-discuss@yahoogroups.com>
Sent: Fri, 6 May, 2011 4:31:38 PM
Subject: Re: [rest-discuss] SA-REST
Hello all,
I am developing an API & a consumer for a Yellow Pages like system. I
have confusion about two use cases as described below:
1. GET or POST
User searches for vendors (plumbers) in a given area & sees a list of
vendors on the page. As a requirement, vendors get notified that a user
has found them for the search term, so that they can contact the customer.
The intention of the API user will be to get vendors, so it seems to be
a pure GET contender; but the unintended side effect is the creation of
these notifications. As per the specification, the HTTP verb should
indicate the intent of the user, but in this case there is pure object
creation. What should we use, POST or GET?
2. GET or PUT
Vendor requests the notifications on which he wants to act. I don't want
to show him notifications which he has already seen, so while delivering
notifications I mark them as delivered. Though the intention is pure
retrieval, the side effect is an object update. Also, an aspect of the
request is to get only unseen notifications, which guarantees that
consecutive similar requests won't be idempotent. What should I use, GET or PUT?
Thanks for reading a really long post, I would really appreciate any
guidance I can get on it.
--
Regards,
Aakash Dharmadhikari
C42 Engineering, http://c42.in/
On 16 May, 2011, at 08:09 PM, aakash dharmadhikari <aakashd@...> wrote:
> The intention of the API user will be to get vendors, so it seems to be a pure GET contender; but the unintended side effect is creation of these notifications. As per the specification, HTTP verb should indicate the intent of the user, but in this case there is a pure object creation. What should we use, POST or GET?

Hmm, that is a tricky question. GET would be it, because the side effect is an implementation detail. However, you seem to be reaching behind the user's back here, because you distribute her contact details to the vendors. Maybe a POST needs to be used, with a checkbox where the user can submit her agreement to the notification?

Jan
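Jan's checkbox idea amounts to making the notification an explicit, user-requested part of a POST rather than a hidden side effect of GET. A minimal sketch (the field and function names here are hypothetical, not from the thread):

```python
# Sketch of Jan's suggestion: carry the vendor notification as an
# explicit, user-requested effect in a POST body, instead of as a
# hidden side effect of a GET lookup.
def search_vendors(method, params):
    vendors = ["plumber A", "plumber B"]   # stand-in for the real area query
    if method == "POST" and params.get("notify_vendors") == "yes":
        # The user ticked the checkbox, so notifying vendors is
        # something she asked for, not done behind her back.
        notified = list(vendors)
    else:
        notified = []                      # plain GET: lookup only, no side effect
    return vendors, notified
```

With this split, a crawler or prefetcher issuing the GET form never triggers notifications.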
Point taken, Norman. The confusion about GET & POST was caused by the RFC itself.

Broaching a bigger problem, I see that the HTTP verbs give more importance to the user's intentions, while REST talks continuously about server state. When I use HTTP as a carrier for REST requests, I see these conflicts all the time. Is there a rule of thumb, as in: does REST always override HTTP, or the other way around?

About the business scenario: the user is aware of the site's functioning, and the vendor contacting him is a desired & explicitly communicated effect of the search.

Regards,
Aakash Dharmadhikari

On 17/05/11 7:16 PM, Norman Gray wrote:
> Aakash, hello.
>
> On 2011 May 16, at 20:09, aakash dharmadhikari wrote:
>
>> 1. GET or POST
>> [...] What should we use, POST or GET?
>
> That sounds like you should GET.
>
> RFC 2616 section 9.1.1 says of 'idempotent' methods such as GET:
>
>> The important distinction here is that the user did not request the
>> side-effects, so therefore cannot be held accountable for them.
>
> In other words, if your service has server-side side-effects, the HTTP spec says that's your problem, not the user's.
>
> Just by the way, if I as a customer were contacted by a merchant when all I'd done was look at their address, I think I'd react poorly. At length, and with gestures. But I presume you know your users.
>
> Best wishes,
>
> Norman
FWIW, a good "rule of thumb" for determining if GET is an appropriate method is to imagine the results of a search bot (e.g. Google, Yahoo, Bing, etc.) making a request to that link.

For example, a link on a page that looks like this:

GET /my-profile/?action=send-my-email-address-to-everyone

is something I would *not* want to allow the google-bot to execute.

This is true even if the link looked like this:

GET /my-profile/

and the backend process resulted in sending my email address to everyone.

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me

#RESTFest 2010
http://rest-fest.googlecode.com

On Tue, May 17, 2011 at 14:29, aakash dharmadhikari <aakashd@...> wrote:
> Point taken Norman. The confusion about the GET & POST was caused
> because of the RFC itself. [...]
Just twisting the scenario a little bit.

As a user I make a search request, which results in vendors knowing what I am interested in, & they respond with quotes for the same. I see those quotes on my screen in a small window, which updates itself every 15 seconds, & I can decide which vendor I want to talk to.

I hope this sounds less scary than all vendors getting my email address. But coming back to the point: is any side effect acceptable in a GET request?

What about a scenario where the system learns from my interactions with the application & because of that changes its response to suit my needs? Any two consecutive requests, though identical, won't be idempotent. Should such a request be GET or POST or PUT?

Regards,
Aakash Dharmadhikari

On 18/05/11 12:04 AM, mike amundsen wrote:
> FWIW, a good "rule of thumb" for determining if GET is an appropriate
> method is to imagine the results of a search bot (e.g. google, yahoo,
> bing, etc.) making a request to that link. [...]
Thanks a lot, Steven.

I have already changed the system to work like a Twitter timeline fetch, where the client provides the latest notification it has and asks for newer ones. This was crucial to solve another issue: multiple clients with the same credentials should get the same notifications, as I am not aware of which client the end user is in front of. In the earlier approach, if an open browser window had already fetched & marked a notification as seen, a mobile client, which the vendor is actually using, would never be able to fetch the same notification again.

But If-Modified-Since is a very clever use of the header; I will certainly try to use it if possible.

Regards,
Aakash Dharmadhikari

On 18/05/11 12:07 AM, Steven Cummings wrote:
> On Mon, May 16, 2011 at 1:09 PM, aakash dharmadhikari <aakashd@...> wrote:
>
>> 2. GET or PUT [...] What should I use, GET or PUT?
>
> On this one you could provide the client the ability to specify a date
> after which notifications would be new or "unread" to that client.
> This could be achieved either through query parameters to the
> notification list or If-Modified-Since (conditional GET) on the
> notification list as a whole resource. In the latter case the
> list-resource should return a Last-Modified header. This mechanism is
> intended for cases just like this, I think.
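Steven's conditional-GET mechanism can be sketched roughly as follows (a minimal in-memory illustration; the notification store and handler name are hypothetical):

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

# Hypothetical in-memory notification store: (text, created) pairs.
NOTIFICATIONS = [
    ("quote from plumber A", datetime(2011, 5, 17, 12, 0, tzinfo=timezone.utc)),
    ("quote from plumber B", datetime(2011, 5, 18, 9, 30, tzinfo=timezone.utc)),
]

def get_notifications(if_modified_since=None):
    """Handle GET on the notification list as a whole resource.

    If the client sends If-Modified-Since and nothing is newer, answer
    304 Not Modified; otherwise answer 200 with the full list and a
    Last-Modified header. The server never marks anything as "seen" --
    each client keeps its own cutoff, so multiple clients sharing
    credentials all see the same notifications.
    """
    last_mod = max(created for _, created in NOTIFICATIONS)
    if if_modified_since is not None:
        if last_mod <= parsedate_to_datetime(if_modified_since):
            return 304, {}, None
    body = [text for text, _ in NOTIFICATIONS]
    return 200, {"Last-Modified": format_datetime(last_mod)}, body

status, headers, body = get_notifications()
# Repeating the request with the returned Last-Modified yields 304:
# the GET changed no server state along the way.
status_again, _, _ = get_notifications(headers["Last-Modified"])
```

The GET stays safe: "which notifications are new to me" lives entirely in client state (the header it chooses to send), not in server-side "delivered" flags.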
Aakash:

I'm not clear on what you are asking here.

The only point I was making was to give you a "mental exercise" for making your own decision on what you want your implementation to do. I think you can use the example I gave to view your implementation from both the "user" and "server implementor" points of view. That, I think, is the approach you should take.

As for your example here, I can't really comment on what is "scary" or "expected" from the users of your implementation. I suspect we all have our own opinions of your description, but I doubt many of us have a proper "context" with which to provide you helpful advice on this particular example.

From the transfer protocol point of view, users are not "responsible" for any side effects occurring from a GET, as the protocol clearly states that GET should be treated as safe and idempotent. How you choose to implement your server is up to you. As long as it honors this small set of semantic expectations, you'll have met your responsibilities as a server implementer.

If it turns out most of your users are unhappy with the side effects of using GET on a particular page, they may come to decide your implementation is undesirable. But those users' opinions of your work will not be due to any prose they find in RFC 2616 <g>.

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me

#RESTFest 2010
http://rest-fest.googlecode.com

On Tue, May 17, 2011 at 14:42, aakash dharmadhikari <aakashd@gmail.com> wrote:
> Just twisting the scenario a little bit.
>
> As a user I make a search request, which results in vendors knowing what I
> am interested in & they respond with quotes for the same. [...]
I think in my failed attempt to address two issues, I created more confusion than clarity.

1. Unfortunately, due to the client's NDA, I can't explain the complete workflow, which caused the misunderstanding. From the overall mails I saw people digressing from the core problem due to the "email sharing" assumption. In order to get an unbiased opinion I wanted to pose a different use case. That's where the "non scary" part comes in.

2. I have gone through RFC 2616 & some resources for REST, just to find that HTTP talks about the user's intention & REST talks about server state. When we use both of them together, there is a lot of confusion about which prevails over the other.

3. About the last hypothetical scenario of search: should a fetch request with an unintentional side effect be treated as PUT or POST or PARTIAL?

Regards,
Aakash Dharmadhikari

On 18/05/11 12:23 AM, mike amundsen wrote:
> Aakash:
>
> I'm not clear on what you are asking here.
>
> The only point I was making was to give you a "mental exercise" for
> making your own decision on what you want your implementation to do. [...]
On 2011-05-18 05:00, aakash dharmadhikari wrote:
> 2. I have gone through RFC 2616 & some resources for REST, just to
> find that HTTP talks about user's intention & REST talks about server
> state. When we use both of them together, there is a lot of confusion
> about which prevails over the other.

Prevail when? This is the same thing. If it's the user's intention to change server state, then it should be a method other than GET or HEAD. If it isn't the user's intention to change server state, then it shouldn't be. There's only a conflict when something happens counter to the user's intention, which is a good working definition of "bug".
On Tue, May 17, 2011 at 9:00 PM, aakash dharmadhikari <aakashd@...> wrote:
> 2. I have gone through the RFC2616 & some resources for REST, just to
> find that HTTP talks about user's intention & REST talks about server
> state. When we use both of them together, there is a lot of confusion
> about which prevails over the other.

There is no "conflict" here. REST is REST. HTTP is HTTP. REST != HTTP. There are many unRESTful properties to HTTP and ways to use HTTP in an unRESTful manner. If you wish to build a REST system on top of HTTP, then you need to constrain your use of HTTP so that it stays within the boundaries of a REST architecture.

Also, there are several examples of resources that use GET but do not return the same payload each time:

GET /current_temperature
GET /random_number
GET /new_items

The premise is that you are not using GET to change state. If state changes on the server, then that's the server's problem. That's an issue for the developer of the service, but it doesn't change the contract for the client.

If GET /new_items returns a list of items, and then you call it again and get an empty list (because the server implementation feels that since it served the items up once, the items are no longer "new"), well, that's the server's problem, not the client's. You did GET /new_items, and you got all of the "new items". You didn't call GET /new_items to change the state of the items from "new" to "not new"; you called it to get "new items" using whatever criteria the server set for "new-ness", in contrast to whatever your definition of "new-ness" is. It's the server's resource, not yours. It could give you the same or a growing list all day long and reset abruptly at 12am.

Now, you can question the wisdom of the server implementation, but that's a different discussion.

Regards,

Will Hartung
(willh@...)
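Will's /new_items example can be sketched like this; the class and field names are hypothetical, purely to illustrate that the server-side state change sits outside the client's contract:

```python
# Hypothetical server behind GET /new_items: serving an item flips it
# from "new" to "seen" on the server side, but the client's contract
# is untouched -- it asked for "new items" and got them.
class NewItemsResource:
    def __init__(self, names):
        self._items = [{"name": n, "new": True} for n in names]

    def get(self):
        """Handle GET /new_items: return the currently-new item names."""
        new = [i["name"] for i in self._items if i["new"]]
        for i in self._items:
            i["new"] = False  # server-side bookkeeping, not client semantics
        return new

res = NewItemsResource(["a", "b"])
first = res.get()
second = res.get()  # empty: the server's definition of "new" moved on
```

The client never asked the server to reset the flags; "new-ness" is the server's own criterion, applied by the server's own rules.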
I have a URL like this: ws/savedCriteria that will return a different result depending on who is authenticated, which is not very RESTful. I would like to fix this in a backward compatible way. I'm not firm on the new URL - but say it's ws/users/{userid}/config/savedCriteria.
When an older client makes a request to ws/savedCriteria with an authenticated user = foouser, should I return a 301 with ws/users/foouser/config/savedCriteria? Every distinct user who requests ws/savedCriteria will be redirected to a different URL, so a "Moved Permanently" seems not quite right. However, 302 implies moved temporarily, which is also wrong. 303 looks technically correct since the RFC says, "The new URI is not a substitute reference for the originally requested resource. The 303 response MUST NOT be cached, but the response to the second (redirected) request might be cacheable." However, I have never seen 303 used this way.
Does anyone have any thoughts on this?
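For what it's worth, the 303 variant described above could be sketched like this (a hypothetical illustration only; the paths and the way the authenticated user is obtained are stand-ins):

```python
# Backward-compatible redirect sketch: an old-style request to
# ws/savedCriteria is answered with 303 See Other pointing at the
# user-specific URI. Per RFC 2616 the 303 response itself must not be
# cached, which is why sending each user a different Location is safe.
def handle_saved_criteria(authenticated_user):
    location = "ws/users/%s/config/savedCriteria" % authenticated_user
    return 303, {"Location": location, "Cache-Control": "no-store"}

status, headers = handle_saved_criteria("foouser")
```

A 301 would invite caches and clients to remember one user's URL for everyone, which is exactly the problem described.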
Thanks, Will.

In this approach, the GET request does not remain idempotent and it changes the state of the system (an unintentional side effect); but that's perfectly fine, as we are not only restricting certain HTTP rules but even twisting others to fit the REST way. The resource here is new_items, by whatever definition the server chooses to believe.

This is pretty much what I thought as well, except I was not sure if this was the right way to look at things. Thanks again.

Regards,
Aakash Dharmadhikari
http://c42.in/

On 18/05/11 11:24 PM, Will Hartung wrote:
> There is no "conflict" here. REST is REST. HTTP is HTTP. REST != HTTP.
> There are many, unRESTful properties to HTTP and ways to use HTTP in
> an unRESTful manner. [...]
Web pages use the same URL to serve user-specific content and don't cache.
Is that inherently non-RESTful?
-Solomon
On Wed, May 18, 2011 at 12:22 PM, jason_h_erickson
<jason@...>wrote:
>
> I have a URL like this: ws/savedCriteria that will return a different
> result depending on who is authenticated, which is not very RESTful. I would
> like to fix this in a backward compatible way. [...]
>
> Does anyone have any thoughts on this?
>
This is specific to HTTP, but this seems like a good forum for it.

I have a URI that is restricted to certain authenticated users. If an unauthenticated user attempts a GET, clearly it should respond with a 401. However, if an authenticated user attempts to GET, but that user is not permitted to access that resource, is that a 403? The spec says that "Authorization will not help and the request SHOULD NOT be repeated." If the user is already authenticated, it is true that retrying the request will not work; however, if the user tries to re-authenticate with different credentials, he would be allowed to GET the resource.

So is there a cut-and-dried answer? If not, is there a widely accepted convention?
On May 21, 2011, at 1:12 AM, Jason Erickson wrote: > > > This is specific to HTTP, but this seems like a good forum for it. > > I have a URI that is restricted to certain authenticated users. I an unauthenticated user attempted a GET, clearly it should respond with a 401. However, if an authenticated user attempts to GET, but that user is not permitted to access that resource, is that a 403? The spec says that "Authorization will not help and the request SHOULD NOT be repeated." If the user is already authenticated, it is true that retrying the request will not work, however, if the user tries to re-authenticate with different credentials, he would be allowed to GET the resource. > No, because access to the resource is forbidden. If the user is authenticated already but is not authorized to access the resource, send a 401 - which will trigger, for example a browser, to show the login dialog. After entering the right credentials for a login that is authorized for this resource, the server will respond 200. Jan > So is there a cut and dry answer? If not, is there a widely accepted convention? > > > >
So I have a full REST service that has GET/POST/PUT/DELETE services. They are ALL accessible from HTTP/HTTPS. I want to restrict the POST/PUT/DELETE to just HTTPS (obviously...). Using Java (websphere) I can do a "isSecure()" and discover if the request is HTTPS or not, but what response code do I return for a HTTP POST?
You could return 405 (Method Not Allowed), with 'Allow: GET,HEAD', as technically the HTTP and HTTPS "versions" of a URL are distinct resources. Jon ........ Jon Moore Comcast Interactive Media From: markthegrea <mjuchems@...<mailto:mjuchems@...>> Date: Tue, 24 May 2011 14:01:49 +0000 To: <rest-discuss@yahoogroups.com<mailto:rest-discuss@yahoogroups.com>> Subject: [rest-discuss] Single POST service accepts both http and https So I have a full REST service that has GET/POST/PUT/DELETE services. They are ALL accessible from HTTP/HTTPS. I want to restrict the POST/PUT/DELETE to just HTTPS (obviously...). Using Java (websphere) I can do a "isSecure()" and discover if the request is HTTPS or not, but what response code do I return for a HTTP POST?
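Jon's suggestion can be sketched like this (Python for illustration only; the question's service is Java/WebSphere, and `is_secure` stands in for the servlet `isSecure()` it mentions):

```python
# Sketch: reject unsafe methods over plain HTTP with 405, and advertise via
# the Allow header which methods ARE permitted on the http:// resource.

SAFE_METHODS = ("GET", "HEAD")

def check_method(method, is_secure):
    """Return (status, extra_headers) for a request before dispatching it."""
    if not is_secure and method not in SAFE_METHODS:
        # The http:// and https:// URLs are distinct resources; on the
        # http:// one, only the safe methods exist.
        return 405, {"Allow": ", ".join(SAFE_METHODS)}
    return 200, {}
```

An alternative design would be a 301 redirect from the http:// URL to its https:// counterpart, but 405 makes the "this method does not exist here" contract explicit.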
Hi,
Say you design a system serving HTTP URLs in the form:
http://example.com/documents/{documentId}
Say you want to generate such a URL to include it as a link in some representation you create.
Say that before being put in your URL, a documentId looks like, for example: "123:456" (quotes not included).
Question: do you percent encode the colon character of the documentId when generating the URL?
Notes: The colon character is part of the reserved characters in the general URI syntax. However, we don't always need to encode reserved characters. It depends on whether they have a special meaning for the actual scheme (here, http) in the actual component they are in. I'm not 100% sure whether this is actually the case in the given example... In doubt, I could percent encode it, but for readability I would rather not do it if it is not really needed.
Philippe
On Sat, May 21, 2011 at 07:55:05AM +0200, Jan Algermissen wrote: > > On May 21, 2011, at 1:12 AM, Jason Erickson wrote: > > > > > > > This is specific to HTTP, but this seems like a good forum for it. > > > > I have a URI that is restricted to certain authenticated users. I an unauthenticated user attempted a GET, clearly it should respond with a 401. However, if an authenticated user attempts to GET, but that user is not permitted to access that resource, is that a 403? The spec says that "Authorization will not help and the request SHOULD NOT be repeated." If the user is already authenticated, it is true that retrying the request will not work, however, if the user tries to re-authenticate with different credentials, he would be allowed to GET the resource. > > > > No, because access to the resource is forbidden. > > If the user is authenticated already but is not authorized to access the resource, send a 401 - which will trigger, for example a browser, to show the login dialog. > > After entering the right credentials for a login that is authorized for this resource, the server will respond 200. Hm, looking again at the HTTP spec, and in particular the definition of the 401 status code, I guess you could do it this way. But normally I do the following: * If a user is not yet authenticated, and attempts to access a protected resource, then the response is 401. * If a user attempts to authenticate, but provides bad or unrecognised credentials, then the response is 401 again. * If a user is authenticated, and attempts to access a resource which they do not have permission to access, send a 403 with an HTML entity explaining that they don't have permission. Usually, you don't want to trigger a login dialog in a browser unless the user got their username or password wrong. Triggering a login dialog as a way of saying "you don't have permission" is potentially confusing, I would have thought. 
Cheers, Alistair -- Alistair Miles Head of Epidemiological Informatics Centre for Genomics and Global Health <http://cggh.org> The Wellcome Trust Centre for Human Genetics Roosevelt Drive Oxford OX3 7BN United Kingdom Web: http://purl.org/net/aliman Email: alimanfoo@... Tel: +44 (0)1865 287669
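The convention Alistair describes reduces to a small decision function. A minimal sketch in Python (the credential store and ACL table are invented for illustration):

```python
# Sketch: 401 when the client is unauthenticated or presented bad
# credentials (a browser will re-prompt); 403 when the client is
# authenticated but simply not permitted (re-authenticating as the same
# user will never help).

def status_for(user, valid_users, acl, resource):
    if user is None or user not in valid_users:
        return 401  # not authenticated / bad credentials: challenge again
    if user not in acl.get(resource, set()):
        return 403  # authenticated but forbidden
    return 200

valid_users = {"alice", "bob"}
acl = {"/reports": {"alice"}}
```

The 403 response body is a good place for the human-readable "you don't have permission" explanation Alistair mentions.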
Hi,
Someone posted a REST question on a blog post at
http://www.jacobian.org/writing/rest-wankery-question/ which I quote:
> Consider a simple photo storage service as an API. Users can only interact with the API if they’ve got an account. Let’s say authorization happens over HTTP Basic.
> Given that, would you use URIs like /photos and /photos/{id} (as a photo list and photo detail resource, respectively)? What’s weird about those URIs is that my /photos is a different list of photos than your /photos — in other words, the resource represented depends on the information in the Authorization header.
> It seems like URIs like /people/{my-uid}/photos and /people/{my-uid}/photos/{photo-id} are more “pure.” But now that’s weird because only one single user ever has access to a given URI (e.g. only user #7 gets to access the entire space under /people/7). And the information in the URI is redundant with the information in the Authorization header.
> I guess the question comes down to whether HTTP headers “should” be allowed to determine the resource returned.
> So which would you use? Why?
I'd say I'd favour using what he calls the 'pure' approach, but I'm no
expert and wanted to cross check with you folks.
Thanks,
Sidu.
http://c42.in
Is the photo ID unique across all photo resources or just unique
within each user's photo resources?
Shaunak
On Wed, May 25, 2011 at 1:54 PM, Sidu Ponnappa <lorddaemon@...> wrote:
> Hi,
> Someone posted a REST question on a blog post at
> http://www.jacobian.org/writing/rest-wankery-question/ which I quote:
>
>> Consider a simple photo storage service as an API. Users can only interact with the API if they’ve got an account. Let’s say authorization happens over HTTP Basic.
>> Given that, would you use URIs like /photos and /photos/{id} (as a photo list and photo detail resource, respectively)? What’s weird about those URIs is that my /photos is a different list of photos than your /photos — in other words, the resource represented depends on the information in the Authorization header.
>> It seems like URIs like /people/{my-uid}/photos and /people/{my-uid}/photos/{photo-id} are more “pure.” But now that’s weird because only one single user ever has access to a given URI (e.g. only user #7 gets to access the entire space under /people/7). And the information in the URI is redundant with the information in the Authorization header.
>> I guess the question comes down to whether HTTP headers “should” be allowed to determine the resource returned.
>> So which would you use? Why?
>
> I'd say I'd favour using what he calls the 'pure' approach, but I'm no
> expert and wanted to cross check with you folks.
>
> Thanks,
> Sidu.
> http://c42.in
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
>
--
"Now the hardness of this world slowly grinds your dreams away /
Makin' a fool's joke out of the promises we make" --- Bruce
Springsteen, "Blood Brothers"
I'm facing something similar with my rather hastily designed first draft of my API. If I had it to do again, I would have (to use your example domain) /photos/{id} and have /people/{my-uid}/photos return a collection of links. That way, the URI /photos/{id} uniquely identifies a unique resource and /people/{my-uid}/photos uniquely identifies a view on those resources.
That's what I would do if I was starting over, but I can't say that it works well from personal experience.
> On Wed, May 25, 2011 at 1:54 PM, Sidu Ponnappa <lorddaemon@...> wrote:
>> Hi,
>> Someone posted a REST question on a blog post at
>> http://www.jacobian.org/writing/rest-wankery-question/ which I quote:
>>
>>> Consider a simple photo storage service as an API. Users can only interact with the API if they’ve got an account. Let’s say authorization happens over HTTP Basic.
>>> Given that, would you use URIs like /photos and /photos/{id} (as a photo list and photo detail resource, respectively)? What’s weird about those URIs is that my /photos is a different list of photos than your /photos — in other words, the resource represented depends on the information in the Authorization header.
>>> It seems like URIs like /people/{my-uid}/photos and /people/{my-uid}/photos/{photo-id} are more “pure.” But now that’s weird because only one single user ever has access to a given URI (e.g. only user #7 gets to access the entire space under /people/7). And the information in the URI is redundant with the information in the Authorization header.
>>> I guess the question comes down to whether HTTP headers “should” be allowed to determine the resource returned.
>>> So which would you use? Why?
>>
>> I'd say I'd favour using what he calls the 'pure' approach, but I'm no
>> expert and wanted to cross check with you folks.
>>
>> Thanks,
>> Sidu.
>> http://c42.in
>>
>>
>
>
>
> --
> "Now the hardness of this world slowly grinds your dreams away /
> Makin' a fool's joke out of the promises we make" --- Bruce
> Springsteen, "Blood Brothers"
>
>
On Wed, May 25, 2011 at 1:54 PM, Sidu Ponnappa <lorddaemon@...> wrote: > I'd say I'd favour using what he calls the 'pure' approach, but I'm no > expert and wanted to cross check with you folks. I'd favor it as well, simply because the URL is how the resource is identified. Authorization should be just that, authorization. GET /photos/1 If that's "my" photo, then I should be able to get it based on my credential. If someone else tries to get that photo, they would use their credential, and it would (ideally) be denied. Having URLs return different representations based on who is logged in is perfectly acceptable HTTP. We see that all the time with cookies and what not. But it pretty much eliminates a lot of the benefits, such as caching. I don't know if proxies cache the same URLs differently based on the authentication header or not. I doubt it. Better to have unique IDs for resources, and keep those separate from actual authorization. Regards, Will Hartung (willh@...)
Sidu:
a couple things to consider:
URIs
flickr uses URIs that include the account name
(http://www.flickr.com/photos/mikeamundsen/)
picasaweb uses URIs that do not include the account name
(https://picasaweb.google.com/home)
This does not address security; just choices on URI "design" for the app.
Security
flickr assumes an open security model to start (you don't need to be
logged in to view photos)
picasaweb assumes a closed security model to start (you must log in to
see any photos)
FWIW, note that the URI design is not "tied" to the security model
(e.g. you might assume the URI design that includes account name would
be the "closed" security model...)
Sharing
flickr supports sharing photos by just sharing links to those photos.
picasaweb supports sharing photos by crafting unique URIs that contain
access data as parameters of the URI
Note that "base" URI is the same, but added auth data is included in
the shared link. This illustrates that HTTP headers are not the only
way to handle authorization.
Caching
Headers are definitely useful in determining caching. HTTP/1.1 has the
Vary[1] header for just such purposes.
It is quite easy to craft responses that are "the same" whether the
user is authenticated or not. IOW, you can craft a response that is
both _personalized_ (based on the authentication info) and
shared-cache-able. The common way to do this is to return a
representation that contains a skeletal HTML framework (e.g. a few
DIVs, etc.) and a block of javascript that can determine the
personalization information from headers (auth, cookies, etc.) and
then send XmlHttpRequest calls to return user-specific content. It is
the script that determines the ultimate content seen by the user.
[1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.44
mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2011 - Aug 18-20
http://restfest.org
On Wed, May 25, 2011 at 16:54, Sidu Ponnappa <lorddaemon@...> wrote:
> Hi,
> Someone posted a REST question on a blog post at
> http://www.jacobian.org/writing/rest-wankery-question/ which I quote:
>
>> Consider a simple photo storage service as an API. Users can only interact with the API if they’ve got an account. Let’s say authorization happens over HTTP Basic.
>> Given that, would you use URIs like /photos and /photos/{id} (as a photo list and photo detail resource, respectively)? What’s weird about those URIs is that my /photos is a different list of photos than your /photos — in other words, the resource represented depends on the information in the Authorization header.
>> It seems like URIs like /people/{my-uid}/photos and /people/{my-uid}/photos/{photo-id} are more “pure.” But now that’s weird because only one single user ever has access to a given URI (e.g. only user #7 gets to access the entire space under /people/7). And the information in the URI is redundant with the information in the Authorization header.
>> I guess the question comes down to whether HTTP headers “should” be allowed to determine the resource returned.
>> So which would you use? Why?
>
> I'd say I'd favour using what he calls the 'pure' approach, but I'm no
> expert and wanted to cross check with you folks.
>
> Thanks,
> Sidu.
> http://c42.in
>
>
On Wed, May 25, 2011 at 2:20 PM, Will Hartung <willh@...> wrote: [snip] > Having URLs return different representations based on who is logged in > is perfectly acceptable HTTP. We see that all the time with cookies > and what not. But it pretty much eliminates a lot of the benefits, > such as caching. I don't know if proxies cache different the same URLs > based on the authentication header or not. I doubt it. > > Depends on your cache headers (and properly implemented caches, of course). If your responses include some HTTP header whose value depends on the authenticated user, you can include a "Vary" header to tell a cache to take the specified header into account when deciding whether an entry in the cache matches a subsequent request. The nitty gritty details are in Chapter 13 of the HTTP 1.1 spec < http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13>. Better to have unique IDs for resources, and keep those separate from > actual authorization. > > In practice, this is what I normally do. > Regards, > > Will Hartung > (willh@...) > > Craig
"Philippe Mougin" wrote: > > Question: do you percent encode the colon character of the documentId > when generating the URL? > REST doesn't care, beyond following RFC 3986. If the colon is always in the same place, then you could just leave it out of your URIs. -Eric
On 2011-05-21 01:12, Jason Erickson wrote: > This is specific to HTTP, but this seems like a good forum for it. > > > I have a URI that is restricted to certain authenticated users. I an > unauthenticated user attempted a GET, clearly it should respond with a > 401. However, if an /authenticated/ user attempts to GET, but that user > is not permitted to access that resource, is that a 403? The spec says > that "Authorization will not help and the request SHOULD NOT be > repeated." If the user is already authenticated, it is true that > retrying the request will not work, however, if the user tries to > re-authenticate with different credentials, he would be allowed to GET > the resource. > > So is there a cut and dry answer? If not, is there a widely accepted > convention? > ... See <http://trac.tools.ietf.org/wg/httpbis/trac/ticket/294> and <http://lists.w3.org/Archives/Public/ietf-http-wg/2011AprJun/0256.html>. Best regards, Julian
On 2011-05-25 16:11, Philippe Mougin wrote:
> Hi,
>
> Say you design a system serving HTTP URLs in the form:
>
> http://example.com/documents/{documentId}
>
> Say you want to generate such an URL to include it as a link in some
> representation you create.
>
> Say that before being put in your URL, a documentId looks like, for
> example: "123:456" (quotes not included).
>
> Question: do you percent encode the colon character of the documentId
> when generating the URL?
> ...
You don't have to, unless it's needed for disambiguation with the ":"
that delimits the scheme name.
See <http://greenbytes.de/tech/webdav/rfc3986.html#rfc.section.3.3>
Best regards, Julian
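Python's standard library makes Julian's answer concrete: both the percent-encoded and the literal form of the colon are valid inside a path segment, and they identify the same documentId.

```python
# RFC 3986 puts ':' in the reserved set, but a path segment may contain it
# literally; encoding it is therefore optional for this URI scheme/component.
from urllib.parse import quote, unquote

doc_id = "123:456"
encoded = quote(doc_id, safe="")   # percent-encode everything reserved
literal = quote(doc_id, safe=":")  # leave the colon as-is

# Both decode back to the same documentId.
assert unquote(encoded) == unquote(literal) == doc_id
```

The one caveat Julian notes: a colon in the *first* segment of a relative reference (no scheme, no leading "/") could be mistaken for a scheme delimiter, so a relative link like `123:456` would need `./123:456` or encoding.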
On Thu, May 26, 2011 at 12:35 AM, Julian Reschke <julian.reschke@...>wrote: > > > On 2011-05-21 01:12, Jason Erickson wrote: > > This is specific to HTTP, but this seems like a good forum for it. > > > > > > I have a URI that is restricted to certain authenticated users. I an > > unauthenticated user attempted a GET, clearly it should respond with a > > 401. However, if an /authenticated/ user attempts to GET, but that user > > is not permitted to access that resource, is that a 403? The spec says > > that "Authorization will not help and the request SHOULD NOT be > > repeated." If the user is already authenticated, it is true that > > retrying the request will not work, however, if the user tries to > > re-authenticate with different credentials, he would be allowed to GET > > the resource. > > > > So is there a cut and dry answer? If not, is there a widely accepted > > convention? > > ... > > Not necessarily cut and dried, but this works for me based on real app development experience. From the server perspective: * "I do not know who you are" --> 401 * "I know who you are but you are not allowed to do what you requested" --> 403 Craig > See <http://trac.tools.ietf.org/wg/httpbis/trac/ticket/294> and > <http://lists.w3.org/Archives/Public/ietf-http-wg/2011AprJun/0256.html>. > > Best regards, Julian > > >
On Wed, May 25, 2011 at 10:48 PM, Craig McClanahan <craigmcc@...>wrote: > > > On Wed, May 25, 2011 at 2:20 PM, Will Hartung <willh@...> wrote: > [snip] > >> Having URLs return different representations based on who is logged in >> is perfectly acceptable HTTP. We see that all the time with cookies >> and what not. But it pretty much eliminates a lot of the benefits, >> such as caching. I don't know if proxies cache different the same URLs >> based on the authentication header or not. I doubt it. >> >> Depends on your cache headers (and properly implemented caches, of > course). If your responses include some HTTP header whose value depends on > the authenticated user, you can include a "Vary" header to tell a cache to > take the specified header into account when deciding whether an entry in the > cache matches a subsequent request. > > The nitty gritty details are in Chapter 13 of the HTTP 1.1 spec < > http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13>. > The Vary mechanism works against the request headers, since they are intended to help intermediaries negotiate requests on behalf of an origin server, therefore varying against a header in the response won't work. This type of resource variation is bad practice, and will cause you no end of unnecessary pain. User specific resources are distinct by definition so treat them as such by giving them unique identifiers.. if doing this is a significant undertaking then your toolset is crap and needs changing. Cheers, Mike
Thanks for the inputs, folks. I'll link back to this conversation. Best, Sidu. http://c42.in On Thu, May 26, 2011 at 2:38 PM, Mike Kelly <mike@mykanjo.co.uk> wrote: > > > On Wed, May 25, 2011 at 10:48 PM, Craig McClanahan <craigmcc@...> > wrote: >> >> >> On Wed, May 25, 2011 at 2:20 PM, Will Hartung <willh@...> wrote: >> [snip] >>> >>> Having URLs return different representations based on who is logged in >>> is perfectly acceptable HTTP. We see that all the time with cookies >>> and what not. But it pretty much eliminates a lot of the benefits, >>> such as caching. I don't know if proxies cache different the same URLs >>> based on the authentication header or not. I doubt it. >>> >> Depends on your cache headers (and properly implemented caches, of >> course). If your responses include some HTTP header whose value depends on >> the authenticated user, you can include a "Vary" header to tell a cache to >> take the specified header into account when deciding whether an entry in the >> cache matches a subsequent request. >> The nitty gritty details are in Chapter 13 of the HTTP 1.1 spec >> <http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13>. > > The Vary mechanism works against the request headers, since they are > intended to help intermediaries negotiate requests on behalf of an origin > server, therefore varying against a header in the response won't work. > This type of resource variation is bad practice, and will cause you no end > of unnecessary pain. User specific resources are distinct by definition so > treat them as such by giving them unique identifiers.. if doing this is a > significant undertaking then your toolset is crap and needs changing. > Cheers, > Mike
Mike Kelly wrote: > > This type of resource variation is bad practice > I've never heard that before. > > and will cause you no end of unnecessary pain. > Not in my experience. Personalized representations don't require new resources, this is a design decision where what works for one application may not be the best for another application. You should avoid absolutist statements like: > > User specific resources are distinct by definition > No. They *could* be, but then again maybe not, and what's best depends on the application. You may prefer to design systems one way, fine, but let's not mislead folks into thinking there's only one way to skin this cat. -Eric
On Thu, May 26, 2011 at 8:48 PM, Eric J. Bowman <eric@...> wrote: > Mike Kelly wrote: >> >> and will cause you no end of unnecessary pain. >> > > Not in my experience. > > Personalized representations don't require new resources, Yes, in the same way that getting dressed doesn't actually require underpants that fit properly. > this is a > design decision where what works for one application may not be the > best for another application. > You can decide to wear underpants that are 4 sizes too small if you really want. I wouldn't recommend it though. >> >> User specific resources are distinct by definition >> > > No. They *could* be, but then again maybe not, and what's best depends > on the application. You may prefer to design systems one way, fine, > but let's not mislead folks into thinking there's only one way to skin > this cat. > Ok.. interested to hear about these circumstances in which your metaphorical uber-skimpies are the better choice, because at the moment it seems like a lot of potential discomfort all for a highly questionable aesthetic effect. Cheers, Mike
On May 25, 2011, at 2:20 PM, Will Hartung wrote: > Having URLs return different representations based on who is logged in > is perfectly acceptable HTTP. We see that all the time with cookies > and what not. But it pretty much eliminates a lot of the benefits, > such as caching. I don't know if proxies cache different the same URLs > based on the authentication header or not. I doubt it. Caches don't maintain an infinite number of variations, as such a practice usually leads to a very poor cache hit ratio. This is not a case of right vs. wrong; it is just inefficient from a cache operability point of view. Subbu
Hi Sidu,
As others have mentioned, I think there is a false assumption here...
On Thu, May 26, 2011 at 02:24:46AM +0530, Sidu Ponnappa wrote:
> Hi,
> Someone posted a REST question on a blog post at
> http://www.jacobian.org/writing/rest-wankery-question/ which I quote:
>
> > Consider a simple photo storage service as an API. Users can only interact
> > with the API if they’ve got an account. Let’s say authorization happens
> > over HTTP Basic. Given that, would you use URIs
> > like /photos and /photos/{id} (as a photo list and photo detail resource,
> > respectively)? What’s weird about those URIs is that my /photos is a
> > different list of photos than your /photos — in other words, the resource
> > represented depends on the information in the Authorization header. It seems
> > like URIs
> > like /people/{my-uid}/photos and /people/{my-uid}/photos/{photo-id} are more
> > “pure.” But now that’s weird because only one single user ever has access to
> > a given URI (e.g. only user #7 gets to access the entire space
> > under /people/7).
...this doesn't have to be true, of course. You could implement access control
using ACLs, for instance, and so you would not be restricting access based on
URLs.
How you design URIs and how you implement access control are two separate
considerations. Sometimes I find that restricting access based on the URL is
suitable, but often I find that's just too crude, and you have to move to a
different model of access control, e.g., ACLs.
Anyway, I'm sure the folks on this list would say, the structure of the URL is
irrelevant, it's the hypermedia (links, forms etc.) that get you there that are
important.
Cheers,
Alistair
> > And the information in the URI is redundant with the information in
> > the Authorization header. I guess the question comes down to
> > whether HTTP headers “should” be allowed to determine the resource returned.
> > So which would you use? Why?
>
> I'd say I'd favour using what he calls the 'pure' approach, but I'm no
> expert and wanted to cross check with you folks.
>
> Thanks,
> Sidu.
> http://c42.in
>
>
--
Alistair Miles
Head of Epidemiological Informatics
Centre for Genomics and Global Health <http://cggh.org>
The Wellcome Trust Centre for Human Genetics
Roosevelt Drive
Oxford
OX3 7BN
United Kingdom
Web: http://purl.org/net/aliman
Email: alimanfoo@...
Tel: +44 (0)1865 287669
Subbu Allamaraju wrote: > > Will Hartung wrote: > > > Having URLs return different representations based on who is logged > > in is perfectly acceptable HTTP. We see that all the time with > > cookies and what not. But it pretty much eliminates a lot of the > > benefits, such as caching. I don't know if proxies cache different > > the same URLs based on the authentication header or not. I doubt it. > > Caches don't maintain infinite number of variations as such a > practice usually leads to very poor cache hit ratio. This is not a > case of right vs wrong, it is just inefficient from cache operability > point of view. > It depends on the system. If the 'not logged in' use case accounts for a significant amount of traffic, the default response can be set to cache-control: public, while personalized responses (the 'logged in' use case) set cache-control: private. So Vary: Authorization should only result in one cached variant on public caches. I'm not seeing the caching downside to this approach, seeing as how I don't want to publicly cache personalized responses. -Eric
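Eric's public/private split amounts to a two-branch header policy. A minimal sketch (header values are illustrative choices, not from the thread):

```python
# Sketch: anonymous responses are publicly cacheable; personalized ones are
# marked private so shared caches never store them. Vary: Authorization
# keeps a shared cache from serving the anonymous variant decisions wrongly
# when credentials are present.

def caching_headers(authenticated):
    headers = {"Vary": "Authorization"}
    if authenticated:
        headers["Cache-Control"] = "private, max-age=0"
    else:
        headers["Cache-Control"] = "public, max-age=300"
    return headers
```

With this policy a public cache holds at most the single anonymous variant per URL, which is why the "poor hit ratio" concern doesn't bite here.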
Agreed. On Jun 3, 2011, at 12:52 PM, Eric J. Bowman wrote: > Subbu Allamaraju wrote: >> >> Will Hartung wrote: >> >>> Having URLs return different representations based on who is logged >>> in is perfectly acceptable HTTP. We see that all the time with >>> cookies and what not. But it pretty much eliminates a lot of the >>> benefits, such as caching. I don't know if proxies cache different >>> the same URLs based on the authentication header or not. I doubt it. >> >> Caches don't maintain infinite number of variations as such a >> practice usually leads to very poor cache hit ratio. This is not a >> case of right vs wrong, it is just inefficient from cache operability >> point of view. >> > > It depends on the system. If the 'not logged in' use case accounts for > a significant amount of traffic, the default response can be set to > cache-control: public, while personalized responses (the 'logged in' > use case) set cache-control: private. > > So Vary: Authorization should only result in one cached variant on > public caches. I'm not seeing the caching downside to this approach, > seeing as how I don't want to publicly cache personalized responses. > > -Eric
I am somewhat new to the REST architectural style, so please go easy on
me.
I am having difficulty understanding how a machine agent would
understand the content of my payload when using a generic media type. I
have been reading @mamund's latest presentation
http://www.amundsen.com/talks/#more-rest
<http://www.amundsen.com/talks/#more-rest> on how ROT is bad, and have
lived the painful problems associated with changing elements in schemas
in the past. Problem is that I don't know how to create a "common
understanding of the payloads passed between" client and server and my
mind tends to go to the "how can the server successfully export its
private objects in a way that clients can see and use them". Need help
understanding this concept further.
My current thought process (please correct):
I would like to return a list of entities from a search. Atom could
provide the list, but an agent will not understand my content payload
without some a priori knowledge.
Let's say I have an entity "Person" with 3 elements: "name", "address",
and "age" with associated data types String, String, Int. In the past,
I would have created a schema that described this entity and passed this
information inside a Collection element. But this seems to cause
problems down the line. If I change this concept to something like:
<items>
<item href="http://example.com/5612315623156">
<data name="name">Roy</data>
<data name="address">12 Foobar St</data>
<data name="age">31</data>
</item>
...
Given I would be using a generic media type ...
How does the agent know it is a Person type?
How does the agent know what "name", "address", or "age" mean?
Would I provide a namespace to the schema describing this entity? A
link relation? I don't want to create "Object-Based Media Type ROT" by
creating "application/myperson+xml". I feel like I am missing something
here. Need help!!
> > "how can the server successfully export its private objects in a way > that clients can see and use them". > In REST, you're not exporting objects, but generic object interfaces. Any user-agent that understands your object interfaces (using ubiquitous media types, i.e. forms) can manipulate your objects whether it understands them or not. Making the (machine) user understand what the markup represents is a separate problem from REST, I suggest RDFa. -Eric
Thanks for the link to his slides. I'm trying to improve my understanding of REST in exactly this area myself lately. From my reading, Amundsen is NOT using a generic type (such as application/xml) - he is using a vendor-specific but well-documented type: application/vnd.phactor+xml. You can see the documentation here: http://amundsen.com/media-types/phactor/ I don't know to what extent this is a real world media type or a sample of what a good one would look like, but it is definitely a vendor-specific media type. To me, his slides are advocating for a looser coupling of clients and servers by defining the media type in terms of HOW you are going to relay information, not WHAT information you are sending/receiving. So, in the case of the maze example, every maze element is just a list of links that you could follow to another maze element. Rather than define it in terms of <maze><connectedNodes><maze id='m1'/><maze id='m2'/><maze id='m3'/></connectedNodes></maze> he defines it as <maze><collection href="http://www.example/mazes"><link href="http://www.example.org/mazes/1" rel="maze" id="m1"/>...</collection></maze> Now, if you had a program that already understood the application/vnd.phactor+xml media type, it would be able to understand this quite well. Of course, if you want to do something particularly "mazy" with it, you would have to understand specifically the media type of application/vnd.amundsen.maze+xml, but there's no getting around that I think. That's the way I read it, but as I said, I'm trying to get my head around this stuff, too, so I'd be interested in others' thoughts on this just like you. On Jun 6, 2011, at 10:54 AM, albertwylde wrote: > I am somewhat new to the REST architectural style, so please go easy on me. > > I am having difficulty understanding how a machine agent would understand the content of my payload when using a generic media type. 
> I have been reading @mamund's latest presentation http://www.amundsen.com/talks/#more-rest on how ROT is bad, and have lived the painful problems associated with changing elements in schemas in the past. Problem is that I don't know how to create a "common understanding of the payloads passed between" client and server, and my mind tends to go to "how can the server successfully export its private objects in a way that clients can see and use them". Need help understanding this concept further.
>
> My current thought process (please correct):
>
> I would like to return a list of entities from a search. Atom could provide the list, but an agent will not understand my content payload without some a priori knowledge.
>
> Let's say I have an entity "Person" with 3 elements: "name", "address", and "age", with associated data types String, String, Int. In the past, I would have created a schema that described this entity and passed this information inside a Collection element. But this seems to cause problems down the line. If I change this concept to something like:
>
> <items>
>   <item href="http://example.com/5612315623156">
>     <data name="name">Roy</data>
>     <data name="address">12 Foobar St</data>
>     <data name="age">31</data>
>   </item>
>   ...
>
> Given I would be using a generic media type ...
> How does the agent know it is a Person type?
> How does the agent know what "name", "address", or "age" mean?
>
> Would I provide a namespace to the schema describing this entity? A link relation? I don't want to create "Object-Based Media Type ROT" by creating "application/myperson+xml". I feel like I am missing something here. Need help!!
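As a mechanical illustration of albertwylde's generic format: a client can parse the <items>/<data> payload into name/value pairs with no Person-specific code, which is exactly why the semantic question (how does it know this is a Person?) still needs an out-of-band answer such as a profile, namespace, or link relation. A sketch:

```python
import xml.etree.ElementTree as ET

DOC = """
<items>
  <item href="http://example.com/5612315623156">
    <data name="name">Roy</data>
    <data name="address">12 Foobar St</data>
    <data name="age">31</data>
  </item>
</items>
"""

def parse_items(xml_text):
    """Turn the generic items/data format into (href, fields) pairs.
    Nothing here knows this is a 'Person'; the meaning of the field
    names still requires out-of-band agreement."""
    root = ET.fromstring(xml_text)
    out = []
    for item in root.findall("item"):
        fields = {d.get("name"): d.text for d in item.findall("data")}
        out.append((item.get("href"), fields))
    return out

print(parse_items(DOC))
```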
At my company, we are trying to find RESTful solutions to replace business eventing techniques. I wanted to describe our approach and the thought process that got us there, and to get feedback from the group.

We've focused on using Atom and/or AtomPub for this, but I think the issues I'm about to describe would be the same if we rolled our own collection mechanism. Basically, we take some entity in our system, typically held in a table, and when there is an insert or update (we only do logical deletes) we copy the record to an event table, adding an event id as a new primary key, populated from an ascending sequence. We developed a media type to represent the entity, and we reuse it to represent entity events as well. We build our Atom feeds out of the entity events, so that our Atom entries publish changes to the underlying entities. This makes our Atom entries immutable and our collections append-only.

We realized pretty quickly that we needed a paging solution. Fortunately, RFC 5005 describes several strategies to use (paged feeds and archived feeds). Both are based on using semantic links with named relations to navigate through the pages. The paged strategy from RFC 5005 uses "next", "previous", "first", and "last" links, though it doesn't imply how the entries are ordered. The archived strategy uses a "subscription" document for the most recent items, plus archive documents navigated by "prev-archive", "next-archive", and "current", where "prev-archive" moves to older entries. AtomPub makes the collection itself a resource, and essentially calls for a paged feed whose "first" page holds the newest entries.

At first we tried to give clients a standardized URI template for the pages, but as we came to understand HATEOAS better, we realized the RESTful way was to have clients treat the links as opaque and rely on the link relations instead.
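The entity-event copying Bryan describes can be sketched with SQLite triggers; the table and column names here are illustrative assumptions, not his actual schema:

```python
import sqlite3

# Sketch of the entity-event pattern: every insert/update of an entity is
# copied into an append-only event table keyed by an ascending event_id.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE entity (id INTEGER PRIMARY KEY, name TEXT, status TEXT);
CREATE TABLE entity_events (
    event_id INTEGER PRIMARY KEY AUTOINCREMENT,  -- ascending sequence
    entity_id INTEGER, name TEXT, status TEXT
);
CREATE TRIGGER entity_ins AFTER INSERT ON entity BEGIN
    INSERT INTO entity_events (entity_id, name, status)
    VALUES (NEW.id, NEW.name, NEW.status);
END;
CREATE TRIGGER entity_upd AFTER UPDATE ON entity BEGIN
    INSERT INTO entity_events (entity_id, name, status)
    VALUES (NEW.id, NEW.name, NEW.status);
END;
""")
db.execute("INSERT INTO entity (id, name, status) VALUES (1, 'order-1', 'new')")
db.execute("UPDATE entity SET status = 'shipped' WHERE id = 1")
events = db.execute(
    "SELECT event_id, status FROM entity_events ORDER BY event_id").fetchall()
print(events)  # two immutable events for one entity
```

The feed entries would then be rendered from entity_events rows, never from the mutable entity table.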
We didn't need to give clients a formula for constructing page links, because we expect them to always unroll the linked list of pages by starting at the entry point and following link relations. I recall with fondness how this was a major "aha" moment for me, when I understood very clearly how much better this was than the SOAP-style solution.

However, our servers still had to come up with URI schemes that allow them to understand what was requested. People tried several mechanisms. The first try was the ?page=$pagenumber way. I think this basically fails (it works for labeling static archives only), because the meaning of "page=3" isn't stateless when your first page shows the most recent entries. There's a nice edge case where we view entries 1-100 on ?page=1 and, if exactly 100 entries arrive as we stare at it, then when we resolve ?page=2, surprise, we get the same entries again. Then people tried offset and pagesize parameters, but again the offset breaks when we do things the Atom way and order entries in descending edit-date order. The same pathological case shows how this fails too.

Finally, we settled on the "mark method" for paging, where we construct a "next" page by using a ?before=$oldest parameter, where $oldest is the unique key value for the oldest entry on the page we just showed. Similarly, "previous" pages use an ?after=$newest parameter, where $newest is the newest entry on the page. The "first" case is the collection URI with no parameters. To construct the entry set, we build SQL statements like:

rel="next" link: SELECT * FROM entity_events WHERE event_id < $oldest ORDER BY event_id DESC LIMIT $pagesize
rel="previous" link: SELECT * FROM entity_events WHERE event_id > $newest ORDER BY event_id LIMIT $pagesize
rel="first" link: SELECT * FROM entity_events ORDER BY event_id DESC LIMIT $pagesize

As long as event_id is indexed, this SQL is very efficient. Some DBs use different syntax instead of LIMIT.
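A runnable sketch of the mark-method queries, using SQLite and illustrative names; note the "next" query must scan in descending order so it picks the entries adjacent to $oldest rather than the globally oldest ones:

```python
import sqlite3

# Mark-method ("cursor") paging over the event table; pages are served
# newest-first.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE entity_events (event_id INTEGER PRIMARY KEY)")
db.executemany("INSERT INTO entity_events VALUES (?)",
               [(i,) for i in range(1, 26)])   # 25 events
PAGESIZE = 10

def first_page():
    """rel='first': the newest $pagesize entries."""
    rows = db.execute("SELECT event_id FROM entity_events "
                      "ORDER BY event_id DESC LIMIT ?", (PAGESIZE,))
    return [r[0] for r in rows]

def next_page(oldest):
    """rel='next' (?before=$oldest): scan DESC to get the entries
    just below the mark."""
    rows = db.execute("SELECT event_id FROM entity_events WHERE event_id < ? "
                      "ORDER BY event_id DESC LIMIT ?", (oldest, PAGESIZE))
    return [r[0] for r in rows]

def previous_page(newest):
    """rel='previous' (?after=$newest): scan ASC for the entries just
    above the mark, then present them newest-first."""
    rows = db.execute("SELECT event_id FROM entity_events WHERE event_id > ? "
                      "ORDER BY event_id ASC LIMIT ?", (newest, PAGESIZE))
    return sorted((r[0] for r in rows), reverse=True)

p1 = first_page()        # events 25..16
p2 = next_page(p1[-1])   # events 15..6
print(p1, p2)
```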
For each row, we might go ahead and pre-calculate the representation fragment, and then we can build pages super fast.

An interesting consequence of this mechanism: first->previous is actually useful, because events may arrive after the server sends the entry point representation. If none have, this will be an empty page with only the first link. This may seem strange compared to the traditional mechanism of polling the fixed "first" URI of the collection's entry point. That still works, with the same semantics. But repeated polling of previous links gives clients a way to get guaranteed in-order delivery. It complicates the server's caching picture somewhat, but it's O(1) to force the eviction and freshening of the URIs covering the last $pagesize entries.

By the way, if you don't support first->previous, a clever client can still poll first->next->previous until events arrive, the page adds a previous link, and its ETag changes. So you can't stop clients from trying this game.

With blogs, clients probably only care about "the latest", so polling the latest changes is reasonable. Often in business eventing situations, we need to watch all resource state transitions and react to certain ones, which might be skipped over if multiple updates happen in quick succession. I'm curious to hear what others have done. Feedback welcome.
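The guaranteed in-order delivery Bryan mentions can be sketched as a client unrolling rel="previous" links from its bookmark; pages are stubbed as dicts here rather than fetched over HTTP, and the URIs are illustrative:

```python
# A client that catches up by repeatedly following rel="previous" from
# its last-seen position; each page lists entries newest-first, and the
# chain ends at the empty "steady state" page.
PAGES = {
    "/events?after=0": {"entries": [3, 2, 1], "previous": "/events?after=3"},
    "/events?after=3": {"entries": [5, 4], "previous": "/events?after=5"},
    "/events?after=5": {"entries": [], "previous": None},  # steady state
}

def poll(bookmark, pages):
    """Follow previous links, returning unseen entries oldest-first."""
    seen = []
    href = bookmark
    while href is not None and href in pages:
        page = pages[href]
        seen.extend(reversed(page["entries"]))  # pages list newest-first
        href = page["previous"]
    return seen

print(poll("/events?after=0", PAGES))
```

However long the client was offline, it sees every event exactly once and in order, because the server's state lives entirely in the URIs.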
Jason/Albert:

I posted a blog entry today[1] that addresses some of your remarks (from my POV only, of course<g>). The tl;dr version is here:

- The point of the talk was to "jolt" attendees into thinking differently about common Web implementation details (e.g. type marshaling/object serialization).
- The slides themselves are "prompts" for the presentation and are missing quite a few nuances/details (sorry, no recording is available).
- Due to time constraints I left out example designs based on HTML (I only used XML and JSON examples in the slides).
- The bottom-line message of the talk was to create messages that include not just "what" (data), but also "how" (hypermedia).

I'd be happy to discuss this further either here or at a list I recently started that is focused on hypermedia [2]. You can email me directly, too, if you like.

[1] http://amundsen.com/blog/archives/1100
[2] https://groups.google.com/forum/#!forum/hypermedia-web

mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2011 - Aug 18-20 http://restfest.org

On Mon, Jun 6, 2011 at 15:47, Jason Erickson <jason@...> wrote:
> Thanks for the link to his slides. I'm trying to improve my understanding of REST in exactly this area myself lately.
> From my reading, Amundsen is NOT using a generic type (such as application/xml) - he is using a vendor specific but well documented type: application/vnd.phactor+xml. You can see the documentation here: http://amundsen.com/media-types/phactor/
> I don't know to what extent this is a real world media type or a sample of what a good one would look like, but it is definitely a vendor-specific media type. To me, his slides are advocating for a looser coupling of clients and servers by defining the media type in terms of HOW you are going to relay information, not WHAT information you are sending/receiving.
Bryan Taylor wrote:
>
> At my company, we are trying to find RESTful solutions to replace
> business eventing techniques. I wanted to describe our approach and
> the thought process to get there and get feedback from the group.
>
Bear in mind as I answer, I have different requirements; I acknowledge
up-front that what I have to say may not apply to your situation. But,
I do agree that dealing with pagination is a bitch and a half.
>
> We've focussed on using atom and/or atompub for this, but I think the
> issues I'm about to describe would be the same if we rolled our own
> collection mechanism.
>
Yes. In my case, the problem is synchronizing an HTML front-end's
pagination with the pagination of underlying Atom feeds, when presenting
in ascending order via HTML.
>
> However, our servers still had to come up with URI schemes that allow
> them to understand what was requested. People tried several
> mechanisms. The first try was the ?page=$pagenumber way. I think this
> basically fails (it works for labeling static archives only), because
> the meaning of "page=3" isn't stateless when your first page shows
> the most recent entries.
>
My way, from the page showing the most-recent entries, rel='first'
points to itself; @href for rel='last' ends in ?page=1 if there are
multiple pages, and may also be rel='prev' if there are only two pages.
The entry point doesn't have to be ?page=1, and there's nothing wrong
with ?page=1 returning 404 until it's created by necessity.
The link relations' meanings stay constant, but the target URIs change
over time. When dynamically updating new posts (as opposed to reloading
the page), HEAD requests are used in conjunction with ETag to keep the
UI synchronized with the server's pagination using the Link header.
Navigating back to the most-recent entries is as simple as following
rel='first' from any other page in the collection. The problem is rel=
'next|prev' -- if you're on ?page=1, but no ?page=2 existed when the
current page was generated, rel='next' points back to the latest-
entries page; or, from the latest-entries page, rel='prev' targets
?page=1 because the server hadn't minted ?page=2 when it was generated.
The solution is 307 redirects: point rel='next' at ?page=next and it
will work for any user-agent sending REFERER, with graceful degradation
targeting the latest-entries page for those user-agents which don't.
For rel='prev', subtract 1 from the page number in REFERER when
?page=prev is requested; if REFERER is not sent, assume the request
came from the latest-entries page, the only page that needs the alias
(rel='prev' should target ?page=x in the HTML, but the server should
be more liberal in what it accepts).
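Eric's ?page=next alias could be resolved roughly like this; the function and the highest_minted_page parameter are illustrative assumptions, and the page numbering follows his reversed scheme (?page=1 is the oldest archive page, the unnumbered entry point shows the latest entries):

```python
from urllib.parse import urlparse, parse_qs

# Hedged sketch of resolving ?page=next via the Referer header: the
# server answers with a 307 redirect to the computed target, degrading
# to the latest-entries page when Referer is missing or no newer
# archive page has been minted yet.
LATEST = "/feed"  # the page showing the most-recent entries

def resolve_next(referer, highest_minted_page):
    if referer:
        qs = parse_qs(urlparse(referer).query)
        if "page" in qs:
            n = int(qs["page"][0])
            if n + 1 <= highest_minted_page:
                return "%s?page=%d" % (LATEST, n + 1)
    return LATEST  # graceful degradation: back to the latest entries

print(resolve_next("/feed?page=1", highest_minted_page=2))  # /feed?page=2
print(resolve_next("/feed?page=2", highest_minted_page=2))  # /feed
print(resolve_next(None, highest_minted_page=2))            # /feed
```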
>
> There's a nice edge case where we view entries 1-100 on ?page=1 and
> if exactly 100 entries arrive as we stare at it, then when we
> resolve ?page=2, then surprise, we get the same entries again.
>
That doesn't happen if you reverse the numbering of your pages in your
URIs, as I have. There's nothing in Atom / AtomPub constraining in
what order URIs are minted, only the order of entries. My problem
comes when deleting entries -- if it causes a page delete, then that
page's URI should go 410, or 307-redirect to the previous page (thus
keeping fragments intact). Once a page has its own rel='next' the
target of rel='prev' could be changed from ?page=prev to ?page={-1}.
>
> Then people tried offset and pagesize parameters, but again the offset
> breaks when we do things the atom way and order entries in descending
> edit date order. The same pathological case shows how this fails too.
>
My reason for not pursuing such parameters is degradation of cache-hit
ratio. Pick one or two paginations for the system -- this is one of
those "drawbacks" to REST which is really a reflection of the realities
of how cache retention algorithms work in the real world, and is better
than having to choose between no pagination or no public caching.
>
> Finally, we settled on the "mark method" for paging, where we
> construct a "next" page by using a ?before=$oldest parameter where
> $oldest is the unique key value for the oldest entry on the page we
> just showed...
>
Interesting, but my devil's-advocate question is, is it worth all that
trouble to make your toilets flush in the "proper, American direction"
(see Simpsons episode, "Bart vs. Australia")?
>
> But repeated polling of previous links gives clients a way to get
> guaranteed in-order delivery.
>
You may have already answered my last question, but I'd need to be more
familiar with your use-case to know.
-Eric
tl;dr YES!!! We do this. It's cool.
On Tue, Jun 7, 2011 at 6:55 AM, Bryan Taylor <bryan_w_taylor@...>wrote:
>
>
> This makes our atom entries immutable and our collections
> append-only.
>
A nice constraint to start with, for sure. At my company we have more or
less the same thing; immutable lists (except for the odd logical delete) of
things. Extremely large lists of things. (for logical deletes we're going
to add atom tombstones).
> The paged strategy from RFC 5005 uses "next", "previous", "first", and
> "last" links, though it doesn't imply how the entries are ordered.
>
True. They only extend the inherent order of the things in the collection.
So if they are "descending date" then next will naturally be descending
date.
> because the meaning of "page=3" isn't stateless
>
Exactly! Nail on the head. It might not be state per se, but I like to
think of it like that too.
page=3 means "whatever is 20 items from the top" and it changes over time.
I like the simile of a stack of index cards: You look at a single card
(say, number 17 from the top) and think "oh, I want the *next* card" do you
(a) flip your way down to (*n* + 1)=18 cards? or
(b) move your fingers down one from the card you're holding?
Obviously in the real world, (b). (a) highlights the problem if someone
else is messing with your deck of cards, e.g. a new item was added to the
top, and when you flip through (*n* + 1) = 18 cards you end up with the same
card...
> Finally, we settled on the "mark method" for paging, where we construct a
> "next"
> page by using a ?before=$oldest parameter where $oldest is the unique key
> value
> for the oldest entry on the page we just showed. Similarly, "previous"
> pages use
> a ?after=$newest parameter where $newest is the newest entry on the page.
> The
> "first" case is the collection URI with no parameters.
>
There's prior art for this technique, although not using AtomPub:
Twitter:
http://apiwiki.twitter.com/w/page/22554749/Twitter-REST-API-Method:-statuses%C2%A0friends
They call it "cursor based pagination".
CouchDB: http://guide.couchdb.org/draft/recipes.html#fast
It has "startkey" and "limit", where you typically ask for 11 items, and then
use the ID of the 11th item as the key for the "next" page.
Us. haha. No, really... at RESTFest 2010, I gave a talk on "Extremely
large lists" where I (IIRC) highlighted the benefits of cursor based
pagination. The slides are available http://mogsie.com/2011/lists/. The
talk works best with audio (see last slide).
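The CouchDB recipe mogsie cites (ask for limit+1 items, then use the extra item's key as the start of the next page) can be sketched as:

```python
# Cursor paging with the limit+1 trick: fetch one extra item; if it
# exists, its key becomes the startkey of the "next" page.
def page(items, startkey, limit):
    """items must be sorted by key; returns (page, next_startkey)."""
    window = [i for i in items if i >= startkey][: limit + 1]
    if len(window) > limit:
        return window[:limit], window[limit]   # more pages remain
    return window, None                        # this is the last page

items = list(range(1, 26))                     # keys 1..25
p1, nxt = page(items, startkey=1, limit=10)    # 1..10, next key 11
p2, nxt2 = page(items, nxt, limit=10)          # 11..20, next key 21
print(p1[-1], nxt, p2[0])
```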
> An interesting consequence of this mechanism: first->previous is actually
> useful, because events may arrive after the server sends the entry point
> representation. If none have, this will be an empty page with only the
> first
> link.
>
It's uncanny. We've done this too. We use this as a cheap steady state for
all of those clients listening for everything new. If you have 100s of
clients and they're all constantly polling what's new, getting the "first"
page by following previous links will take them to the _same_ empty page
right at the beginning of the list.
I've discussed this with various people, and I thought that it might be an
idea to standardize a link relation ("before-first") for this. If your list
extends in both directions then a corollary "after-last" link relation could
be useful. But I believe the way we use these links is within the
definition of the "previous" link relation...
> But repeated polling of previous links gives clients a way to
> get guaranteed in-order delivery.
>
And apparently that's such a hard problem to solve...
> It complicates the server's caching picture
> somewhat
>
But it increases the efficiency of intermediary caching. Here's how:
Let's say you have 100 items—the numbers 1 through 100—on 10 pages, in
descending order. Any client that's anywhere in the list will see the list
along the "old" boundaries of items 100, 90, 80, 70 etc. You add one item
to the top. With traditional offset-based pagination, all pages change
(unless you flip the numbering around like Eric Bowman suggests). But with
cursor based pagination, all the pages are still valid (except perhaps the
"first" page). So if a client is looking at "page 2", or first->next (items
81–90), the previous link will still show a page containing 91–100. This
page will no longer be the first page. It will now sport a previous link to
a page containing only 101. However, if a *new* client appears, the "entry
point" of the list would be a page containing the 10 top items; 92–101.
We don't think this will be a big problem. You might think that in a worst-case
scenario each item would be cached by an intermediary ten times, as
different clients walk through the list at different boundaries. However
(at least for us), the typical usage would be for clients to only want to
keep up-to-date, so very quickly any client would end up at the "steady
state" of the empty first->previous page. When a new item appears, they all
pounce on the page and go to the ->previous page once more.
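mogsie's caching argument can be checked with a small simulation: after a new item arrives, a cursor-addressed page still returns the same entries, while the equivalent offset-addressed page shifts. This is a toy model, not tied to any particular feed implementation:

```python
def by_offset(items, offset, size):
    """Offset paging over a newest-first view of the items."""
    newest_first = sorted(items, reverse=True)
    return newest_first[offset:offset + size]

def by_cursor(items, before, size):
    """Cursor paging: the <size> newest items older than <before>."""
    return sorted((i for i in items if i < before), reverse=True)[:size]

items = list(range(1, 101))                 # items 1..100
offset_page2 = by_offset(items, 10, 10)     # items 90..81
cursor_page2 = by_cursor(items, 91, 10)     # also items 90..81

items.append(101)                           # one new item arrives
print(by_offset(items, 10, 10) == offset_page2)   # page shifted
print(by_cursor(items, 91, 10) == cursor_page2)   # still cache-valid
```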
> but it's O(1) to force the eviction and freshening of the URIs in the
> last $pagesize entries.
>
You don't need to. If you leave them in the cache, what bad thing can
happen?
Let's say in the above scenario, three items were added in slow succession,
allowing clients to "keep up" and populate some intermediary with three
one-page items: 101–101, 102–102 and 103–103. In the meantime, a client
that had been down for maintenance or offline wakes up and picks up from
where they left off (which was page 91–100). The cached previous page still
shows 101–101. Which links to a cached 102–102. Which links to a cached
page 103–103.
Everything is still in-order, guaranteed delivery, and blazingly fast (since
it's cached).
> Often in business eventing situations, we need to
> watch all resource state transitions and react to certain transitions,
> which
> might be skipped over if multiple updates happen in quick succession.
>
Ok that's two of us. How many are needed for it to be a pattern? Anyone
else doing this?
> I'm curious to hear what others have done. Feedback welcome.
>
As you can imagine, I've thought about this a while, and it has quite a few
desirable properties:
* resilient—if a client goes down, it picks up where it left off by
bookmarking the last URI it saw. No matter how long the outage, the client
will see exactly the items in the right order. The same goes for if the
server is down; the client just sits tight waiting for it to work. It works
if they're both down, in any combination of down ;-)
* scalable—any number of clients can be at different places in the list.
* stateless—the server knows nothing about where its clients are in the
list. The state of the client is magically encoded in the URI.
* fast—the client gets the item as fast as it can poll.
* efficient—reasonably efficient, that is. The client only gets the items
that it hasn't seen yet, and sees each item only once.
* transparent—since the resources don't change over time, you can re-request
the URI and be relatively sure that it contains the same items (albeit
perhaps more items).
Sorry about the long post...
--
-mogsie-
** cross-posted **

I'm contemplating a working definition for a "Hypermedia Client." Here are my first attempts:

1 - "A Hypermedia Client supports advancing its own application state based on application control information supplied in server responses."

2 - "A Hypermedia Client supports advancing application state by sending requests to servers based on application control information supplied in server responses."

3 - "A Hypermedia Client supports advancing application state by sending requests to servers based on application control information embedded within, or as a layer above, the presentation of information supplied in server responses."

The germ of this definition is loosely based on Fielding's description of "Distributed Hypermedia"[1].

The point of this exercise is:
1) Is there a generally agreed definition?
2) Can a definition be useful in evaluating/analyzing existing implementations? (e.g. "Is 'this' a hypermedia client?")
3) Can a definition be useful in creating new implementations that "meet" the definition? (e.g. "Here is what you need to build a hypermedia client....")

Any/all feedback is welcome. Possibly there is "prior art" here of which I am unaware; please point me to any reference material you may think useful. Maybe you've gone through a similar process and would like to send along your experiences. Thanks in advance.

[1] http://www.ics.uci.edu/~fielding/pubs/dissertation/web_arch_domain.htm#sec_4_1_3

mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2011 - Aug 18-20 http://restfest.org
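As a toy illustration of definition (2), hypothetical and not from the thread: the client's next request is chosen only from control information supplied in the current response, with nothing hard-coded beyond the entry point and the rels the client understands. Representations are stubbed as dicts:

```python
# Application state advances by following rels offered in responses.
RESPONSES = {
    "/": {"links": {"orders": "/orders"}},
    "/orders": {"links": {"next": "/orders?page=2"}},
    "/orders?page=2": {"links": {}},
}

def advance(state, rel):
    """Transition application state by following a rel, if the current
    representation offers it; otherwise stay put."""
    href = RESPONSES[state]["links"].get(rel)
    return href if href is not None else state

s = "/"
s = advance(s, "orders")   # -> /orders
s = advance(s, "next")     # -> /orders?page=2
print(s)
```

Eric's objections below (back button, user input) are exactly the transitions this loop does not model.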
mike amundsen wrote:
> I'm contemplating a working definition for a "Hypermedia Client."
> Here are my first attempts:

OK, I'll bite. I doubt we disagree, but I think your wording leads the wrong direction; taken out of the context of the thesis, "the presence of application control information" may easily be misconstrued to also apply to javascript, when Roy meant it as linking/posting.

> 1 - "A Hypermedia Client supports advancing its own application
> state based on application control information supplied in server
> responses."

Implies that the server controls the application state, first by using "client" instead of "user-agent". It's the user who chooses which application state to transition to; one option is a return to the previous state (back button), which isn't supplied in server responses.

> 2 - "A Hypermedia Client supports advancing application state by
> sending requests to servers based on application control information
> supplied in server responses."

User-agents are supposed to follow their noses; this sounds like servers leading user-agents about by the nose. What I mean is, the server can only supply state-transition *options* (control information), not state-transition *control* (control directives). It violates REST to use HTML meta-redirection.

> 3 - "A Hypermedia Client supports advancing application state by
> sending requests to servers based on application control information
> embedded within, or as a layer above, the presentation of information
> supplied in server responses."

Still doesn't account for that pesky back button; nor does it account for state transitions derived from user input, IP address, or the REFERER header. Your wording makes it sound like the user-agent has knowledge of what application state will result from following a given transition.

-Eric
I've now produced an initial draft of a spec for a minimalist, generic media type. Love to get some feedback + pointers on this Cheers, Mike ---------- Forwarded message ---------- From: Mike Kelly <mikekelly321@...> Date: Tue, Jun 14, 2011 at 5:01 PM Subject: Initial draft spec of HAL To: hal-discuss@... Hey, I published an initial draft for HAL here: http://stateless.co/hal_specification.html All thoughts/corrections/additions welcome! Cheers, Mike
Mike:

Good to see this posted. I have just a few general comments/questions...

1) It might help to offer a number of examples of "valid" HAL representations (e.g. minimum valid response, typical, etc.). For example, AFAICT, <resource rel="self" href="..." /> is the minimal valid response, right? I assume responses that are nothing more than a set of resource and link elements are valid. However, your example shows a representation with several other elements (created_at, name, age, etc.). It might help to have some narrative that explains that any and all elements/attributes are legal as long as they do not conflict w/ the details of resource and link as you define here.

2) You show resources w/ embedded content (@rel="td:item", etc.). These also have an @href, sometimes an @type. Do you need any MUST/SHOULD/MAY text regarding the relationship between the resource found at the other end of the @href and the one embedded in the document? IOW, are they both the same resource? the same content? or can the embedded content be unrelated to, or a summary of, the content found when dereferencing the @href? Does the @type only apply to the embedded content, or does it also act as a hint or constraint on the @href?

3) Does it make sense to address extensibility at all? IOW, can I (as a document author) extend the link and resource elements? Can I modify the MUST/SHOULD/MAY status of any of the defined elements/attributes?

4) You might consider adding a "version='1.0'" to the root element (in case the schema changes over time).

5) You use "Required" and "Optional" when describing the attributes of Link and Resource. Are these actually RFC 2119 words (e.g. REQUIRED, OPTIONAL)?

6) The doc sez: "The @name attribute MUST NOT be used to identify elements within a HAL representation." It's not clear why (to me). Will this break something? Is it really a MUST and not a SHOULD?

7) Should you refer to definitions of the valid values for the attributes @href, @rel, @name, and @type?
For example most defs of @rel allow this to be a set of tokens separated by spaces. Is that true for this media type, too? 8) You mention URI template, but have no other references to it. Maybe add a link to the ID? As you can see, I am getting into pretty small 'nits' here. Hopefully my feedback is helpful, tho. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2011 - Aug 18-20 http://restfest.org On Tue, Jun 14, 2011 at 12:17, Mike Kelly <mike@...> wrote: > I've now produced an initial draft of a spec for a minimalist, generic > media type. > > Love to get some feedback + pointers on this > > Cheers, > Mike > > ---------- Forwarded message ---------- > From: Mike Kelly <mikekelly321@...> > Date: Tue, Jun 14, 2011 at 5:01 PM > Subject: Initial draft spec of HAL > To: hal-discuss@... > > > Hey, > I published an initial draft for HAL here: > http://stateless.co/hal_specification.html > > All thoughts/corrections/additions welcome! > Cheers, > Mike > > -- > You received this message because you are subscribed to the Google Groups > "Hypermedia Web" group. > To post to this group, send email to hypermedia-web@.... > To unsubscribe from this group, send email to > hypermedia-web+unsubscribe@.... > For more options, visit this group at > http://groups.google.com/group/hypermedia-web?hl=en. > >
Hey mike
Comments inline + I added the JSON variant and changed a bunch of stuff so
might be worth another look today!
On Tue, Jun 14, 2011 at 8:44 PM, mike amundsen <mamund@...> wrote:
> Mike:
>
> Good to see this posted.
>
> I have just a few general comments/questions...
>
> 1) It might help to offer a number of examples of "valid" HAL
> representations (e.g. minimum valid response, typical, etc.)
> For example, AFAICT, <resource rel="self" href="..." /> is the minimal
> valid response, right?
>
I dropped the rel="self" as it was basically redundant. Minimum valid
response is now <resource href="..." /> or { "@href": "..." }
http://stateless.co/hal_specification.html#minimum
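The two minimum valid representations Mike gives can be produced like this; note this targets the 2011 draft wording, and later revisions of HAL changed these details:

```python
import json
import xml.etree.ElementTree as ET

# The minimal HAL documents per the draft: <resource href="..." /> in
# XML, { "@href": "..." } in JSON.
def minimal_hal_xml(href):
    return ET.tostring(ET.Element("resource", {"href": href}),
                       encoding="unicode")

def minimal_hal_json(href):
    return json.dumps({"@href": href})

print(minimal_hal_xml("/orders"))
print(minimal_hal_json("/orders"))
```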
> However, your example shows a representation with several other elements
> (created_at, name, age, etc.). It might help to have some narrative that
> explains that any and all elements/attributes are legal as long as they do
> not conflict w/ the details of resource and link as you define here.
>
Updated general description and constraints sections to include this
>
> 2) You show resources w/ embedded content (@rel="td:item", etc.). These
> also have an @href, sometimes an @type. Do you need any MUST/SHOULD/MAY text
> regarding the relationship between the resource found at the other end of
> the @href and the one embedded in the document? IOW, are they both the same
> resource? the same content? or can the embedded content be unrelated to, or
> a summary of, the content found when dereferencing the @href?
>
Clarified under 'Resource Attributes' under @href definition
>
> 3) Does it make sense to address Extensibility at all? IOW, can I (as a
> document author) extend the link and resource elements? can I modify the
> MUST/SHOULD/MAY status of any of the defined elements/attributes?
>
Yes it does, thanks for reminding me:
http://stateless.co/hal_specification.html#extending
>
> 4) You might consider adding a "version='1.0'" to the root element (in case the
> schema changes over time).
>
I'm not sold on versioning, and if I was going to do it I would probably
implement it as a parameter on the media type identifier rather than in the
entity body to keep it visible without introspection.
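To sketch what the parameter approach would look like on the wire (the media type name and parameter below are hypothetical, not part of the draft):

```
Content-Type: application/hal+xml; version=1.0
```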
>
> 5) You use "Required" and "Optional" when describing the attributes of Link
> and Resource. Are these actually RFC2119 words (e.g. REQUIRED, OPTIONAL)?
>
Yep, thanks
>
> 6) The doc sez: "The @name attribute MUST NOT be used to identify elements
> within a HAL representation." It's not clear why (to me). Will this break
> something? Is it really a MUST and not a SHOULD?
>
Fair enough, I changed it to avoid confusion. I used that wording to try and
make it obvious it is not intended to be used like @id is in HTML.
>
> 7) Should you refer to definitions of the valid values for the attributes
> @href, @rel, @name, and @type? For example most defs of @rel allow this to
> be a set of tokens separated by spaces. Is that true for this media type,
> too?
>
Sure, does anyone have any suggestions/preference on where to take these
from?
>
> 8) You mention URI template, but have no other references to it. Maybe add
> a link to the ID?
>
Done
>
> As you can see, I am getting into pretty small 'nits' here. Hopefully my
> feedback is helpful, tho.
>
>
Definitely, thanks a lot for your input.
Cheers,
Mike
Interesting. Looks a lot like Atom, RDF, and even HTML if you squint a bit. I don't think that's a coincidence. Mark.
You're right, it's not. Hopefully it's unique and practical enough to avoid being pointless! The main drivers were simplicity and encouraging applications to be expressed in terms of link relations. Cheers, Mike On Wed, Jun 15, 2011 at 7:00 PM, Mark Baker <distobj@...> wrote: > Interesting. Looks a lot like Atom, RDF, and even HTML if you squint a > bit. I don't think that's a coincidence. > > Mark. >
I actually prefer the rel='self' as it makes @rel REQUIRED instead of usually required. I guess removing it simplifies the resource slightly at the expense of extra explaining in the HAL spec. For me, the less explanation required the better.
Also, you have a typo in your XML example - the td:attachment resource is closed twice.
On Jun 15, 2011, at 10:02 AM, Mike Kelly wrote:
>
> I dropped the rel="self" as it was basically redundant. Minimum valid response is now <resource href="..." /> or { "@href": "..." } http://stateless.co/hal_specification.html#minimum
On Wed, Jun 15, 2011 at 9:02 PM, Jason Erickson <jason@...>wrote:
> I actually prefer the rel='self' as it makes @rel REQUIRED instead of
> usually required. I guess removing it simplifies the resource slightly at
> the expense of extra explaining in the HAL spec. For me, the less
> explanation required the better.
>
I'm actually not too fussed either way; if there's a general consensus on
that, let's just change it back.
>
> Also, you have a typo in your XML example - the td:attachment resource is
> closed twice.
>
nice one, thanks :)
Cheers,
Mike
>
>
> On Jun 15, 2011, at 10:02 AM, Mike Kelly wrote:
>
>
> I dropped the rel="self" as it was basically redundant. Minimum valid
> response is now <resource href="..." /> or { "@href": "..." }
> http://stateless.co/hal_specification.html#minimum
>
>
I quite like the idea of in-band control because I like a developer to be able to look at a response and understand all of the transitions without necessarily having to go look at the documentation. However, let's not let my personal preference overly prescribe how people use HAL.
It seems like your (proposed?) control element could be modified slightly to allow inline templating and then those that like to give lots of in-band information would be able to do so. So you could have something like:
<resource href="/" xmlns:ex="http://example.org/rels/">
<link rel="ex:basic" href="/bleh" />
<link rel="ex:search" href="/search_for;{searchTerm}" />
<control rel="ex:widgetate" href="/widget/{newID}" method="PUT"
content-type="application/x-www-form-urlencoded" template-type="application/x-www-form-urlencoded">
<input name="name" type="text" value="Already Existing Widget"/>
<select name="type">
<option value="ESSENTIAL" />
<option value="USEFUL" selected="true"/>
<option value="USELESS" />
</select>
</control>
<resource rel="ex:member" href="/foo">
<link rel="ex:created_by" href="/some_dude" />
<example>bar</example>
<resource rel="ex:status" href="/foo?status">
<status>disabled</status>
</resource>
</resource>
</resource>
Then, the in-band people like me are satisfied, but it's still generic enough to support whatever other content-types can be conceived of and of course, you can still have the @template to specify out-of-band if you prefer.
Note, in the above example, if the valid choices for the type of widget are dynamic, then in-band is the only practical way (that I can think of) to express that in a template.
Also, while forms have the advantage of being well understood, if having your stuff in-band is really the goal and you'd rather use JSON (or some other format), then inlining your template would also allow:
<resource href="/" xmlns:ex="http://example.org/rels/">
<link rel="ex:basic" href="/bleh" />
<link rel="ex:search" href="/search_for;{searchTerm}" />
<control rel="ex:widgetate" href="/widget/{newID}" method="PUT"
content-type="application/json" >
<![CDATA[
{
"name":"{{name}}",
"type":"{{type}}"
}
]]>
</control>
<resource rel="ex:member" href="/foo">
<link rel="ex:created_by" href="/some_dude" />
<example>bar</example>
<resource rel="ex:status" href="/foo?status">
<status>disabled</status>
</resource>
</resource>
</resource>
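On the client side, filling such an inline template is straightforward. A sketch in Python (the `{{...}}` placeholder syntax is just the one from the example above, not any standard, and `fill_template` is a made-up helper):

```python
import json
import re

def fill_template(template, values):
    # Substitute each {{name}} placeholder with the supplied value.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: values[m.group(1)], template)

template = '{ "name": "{{name}}", "type": "{{type}}" }'
body = fill_template(template, {"name": "My Widget", "type": "USEFUL"})
# body is now an application/json entity suitable for the PUT request
print(json.loads(body)["type"])  # USEFUL
```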
On Jun 15, 2011, at 2:45 PM, Mike Kelly wrote:
>
> On Wed, Jun 15, 2011 at 10:11 PM, Solomon Duskis <sduskis@...> wrote:
> On Wed, Jun 15, 2011 at 3:51 PM, Mike Kelly <mike@...> wrote:
> I actually thought about adding form-like/templated write to HAL. The problem is that it adds quite a lot of complexity and I'm still not convinced it's terribly useful. So instead I'm going to create a separate media type which just extends it to add this capability. Here's an example of what that could look like : https://gist.github.com/893552
>
> Interesting... but wouldn't good ol' form data work for most <control> cases? Is there a way to KISS and have only marginal complexity addition?
>
> Well that wouldn't be generic, and would significantly limit the number of existing resources on the web that the requests could be directed at.
>
>
> Also; you could actually define restbucks as an application that's "just" driven by link relations and express that with HAL. The app would lose any in-band dynamism you would expect from the use of form-like controls though.
>
> It would require "just" link relations and "just" minor out-of-band information (mostly methods, some message structure data as well). Is that what you mean by losing "in-band dynamism"?
>
> Yes, some people consider this application design lacking. Personally, I'm quite happy to use link relations this way - hence why form-like stuff is left out of HAL and will be introduced as part of a separate type.
>
> Cheers,
> Mike
>
I think what you're proposing here is to bring the template/form 'in-line',
rather than 'in-band'. From my pov, both template (i.e. linked or in-line)
approaches bring the control in-band, as opposed to 'heavily typed links'
where the control is out-of-band [1].
Regardless, it definitely makes a lot of sense to provide the option for the
template itself to be placed in-line. Thanks for bringing that up.
One small gripe with your in-line example is the use of @template-type:
this should be a URI that identifies the template/form type you are using,
i.e. in the first example its value should be something like
"http://www.w3.org/TR/html401/interact/forms"
[1] http://iansrobinson.com/2010/09/02/using-typed-links-to-forms/
Cheers,
Mike
On Wed, Jun 15, 2011 at 11:54 PM, Jason Erickson <jason@...>wrote:
> I quite like the idea of in-band control because I like a developer to be
> able to look at a response and understand all of the transitions without
> necessarily having to go look at the documentation. However let's not let
> my personal preference overly prescribe how people use HAL.
As it stands, the plan is still to create a separate media type in which to
add all this templated write stuff.
Cheers,
Mike
On Thu, Jun 16, 2011 at 11:55 AM, Mike Kelly <mike@...> wrote:
> I think what you're proposing here is to bring the template/form 'in-line',
> rather than 'in-band'.
You are right, I meant in-line. And yes, the @template-type should be a URI - that makes more sense. On Jun 16, 2011, at 3:55 AM, Mike Kelly wrote: > I think what you're proposing here is to bring the template/form 'in-line', rather than 'in-band'.
Hello Mike,

For one of our in-house RESTful applications, we started off defining a domain-specific XML vocabulary but ended up with a generic XML vocabulary for describing resources, which looks a little similar to what you have proposed. One of the major goals of the vocabulary was the ability to describe all resources in a uniform and consistent manner. Initially we designed the vocabulary around the semantics of each resource and soon found that this was harder to maintain. We then decided to make a generic "resource" vocabulary, i.e. one aimed at being applicable to as large a set of resources as possible. The design considered that anything exposed by our application is a resource which can be linked and which provides links to other relevant resources, so we also included hypermedia controls, similar to what you have done. This is what the vocabulary looks like:

<resource href="xsd:anyURI" rel="rels:rel">

    <!-- Links to related resources -->
    <refs>
        <link rel="rels:rel" href="xsd:anyURI" />
    </refs>

    <!-- Navigational links if listing a large collection of resources -->
    <nav>
        <link rel="rels:next" href="xsd:anyURI"/>
        <link rel="rels:previous" href="xsd:anyURI"/>
        <link rel="rels:first" href="xsd:anyURI"/>
        <link rel="rels:last" href="xsd:anyURI"/>
    </nav>

    <!-- Include any number of resource properties here -->
    <!-- Use XSD types -->
    <property name="xsd:string" type="xsd:nnn"></property>
    <property name="xsd:string" type="xsd:nnn"></property>

    <!-- Use custom types -->
    <property name="xsd:string" type="mytypes:YYY"></property>

    <!-- Include another resource -->
    <property type="resource">
        <resource href="xsd:anyURI" rel="rels:rel">
            <!-- Include any number of resource properties here -->
            <property name="xsd:string" type="xsd:nnn"></property>
            <property name="xsd:string" type="xsd:nnn"></property>
        </resource>
    </property>

</resource>

An example using the above vocabulary:

<resource href="http://mycompany.com/books/fiction" rel="rels:fiction">

    <property name="description" type="xsd:string">Browse Books</property>

    <refs>
        <link href="http://abc.com/books/fiction/popular" rel="rels:popular"/>
    </refs>

    <property type="resource">
        <resource href="http://abc.com/books/HGTHG" rel="rels:book">
            <refs>
                <link href="http://abc.com/books/HGTHG_all" rel="rels:related"/>
            </refs>
            <property name="author" type="xsd:string">Douglas Adams</property>
            <property name="title" type="xsd:string">Hitch Hiker's Guide to</property>
            <property name="price" type="mytypes:amount">$65</property>
        </resource>
    </property>

    <property type="resource">
        <resource href="http://abc.com/books/dirkgently" rel="rels:book">
            <property name="author" type="xsd:string">Douglas Adams</property>
            <property name="title" type="xsd:string">Dirk Gently Series</property>
            <property name="price" type="mytypes:amount">$52</property>
        </resource>
    </property>

    <property type="resource">
        <resource href="http://abc.com/books/bonecollector" rel="rels:book">
            <property name="author" type="xsd:string">Jeffrey Deaver</property>
            <property name="title" type="xsd:string">Bone Collector</property>
            <property name="price" type="mytypes:amount">$48</property>
        </resource>
    </property>

</resource>

If I compare the above with HAL, I feel the above has the advantage of being more generic: the XML schema needn't change when a new property is added, and all resources look the same structurally. The drawbacks compared to HAL seem to me to be more overhead, since type information is mixed in with the data, and the loss of schema validation. The advantages of HAL over the above are that it is lighter (no type info) and that the XML schema can be validated. One drawback I see in HAL is that clients need prior knowledge of types.

I would appreciate your views on the above vocabulary.

Best regards,
Suresh

On Tue, Jun 14, 2011 at 9:47 PM, Mike Kelly <mike@...> wrote:
> I've now produced an initial draft of a spec for a minimalist, generic
> media type.

-- When the facts change, I change my mind. What do you do, sir?
ITAS #3 goes to Citigroup. I start getting e-mails from online providers that my recurring charges are being rejected for fraud; I check my account online and the page looks like it's been hacked, there's so much red text, plus a notice that my card is cancelled and will be reissued, blah blah blah. So I call customer service and say yes, that's me, when the rep speaks my company name as "bee-SOHN sees-TOHMs" -- which leads me to believe that this is the first Vietnamese call center I've encountered (fwiw) -- and assure the nice lady that there's no fraudulent activity on my account, and to please pay my suppliers, and please *not* issue me a new CC #. I've been moving, so I hadn't heard of recent events; apparently the response was to flag all online payments on my account (the only thing I use that account for) as fraud until they'd heard from me. But I digress -- this is about architecture, not lousy customer service: http://www.dailymail.co.uk/news/article-2003393/How-Citigroup-hackers-broke-door-using-banks-website.html "Law enforcement officials said the expertise behind the attack was a 'sign of what is likely to be a wave of more and more sophisticated breaches' by high-tech thieves." Oh, dear... we really are in trouble if law enforcement's that clueless. There was *no* expertise involved here. I've seen the account # in the URLs hundreds of times, I just always _assumed_ I was only logged in to my account, not everyone's. Having that bad an architecture is just criminal. The lulzers? Not so much. I bet the black-hats who've been mining that hole for years are plenty upset with them. Personally, I feel like an idiot for not bringing all my own http skills to bear on the Web interfaces for any account involving my money, instead of foolishly trusting the likes of Citigroup to be at least script-kiddie-proof. Since we're already dealing with https, what are the arguments against http auth again, aside from how it looks/works in browsers?
The problem here wasn't the CC #'s in the URLs -- hash and salt them for the DB, sure, but don't expose that to a world that already knows how to format CC #'s; then you have an encapsulation layer instead of SQL-injection-via-URL. So I never saw the URI allocation scheme as a problem, and I still don't, even though it was the vector of attack responsible for my own data being distributed freely on the Internet. The problem is bad architecture, which is all too common in roll-your-own cookie-based authentication schemes. Which is an argument in favor of not being able to style the good ol' butt-ugly browser login boxes. At least when I'm dealing with those, I know at a glance that any security holes the site has are probably just misconfigurations which may be fixed by anyone knowledgeable of http -- as opposed to systemic flaws buried in custom algorithms, which will take "years to fix," in the words of the runner-up to this ITAS, Sony... -Eric (wearing disgruntled-Citi-customer hat)
In RESTful Web Services Cookbook recipe 9.3, Subbu recommends against having client endpoints implement caching based on expiration headers. Instead he recommends using a forward proxy. His reasoning appears to be that correct implementation of caching is non-trivial and that reusing a client-side proxy is easier, safer, and less risky. I'm wondering if others agree with his views. Are there any client-side libraries that handle the trickiness for you? Is "it's hard to get right" a good enough reason?
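For a feel of why it's hard: even the first step, computing a response's freshness lifetime, has precedence rules. A sketch (the function name is mine, and it deliberately ignores Expires parsing, the Age header, Vary, no-store, heuristic freshness, and clock skew, which is most of the actual work):

```python
import re

def freshness_lifetime(headers):
    # max-age in Cache-Control takes precedence over Expires
    # (RFC 2616, section 13.2.4); only this one case is handled here.
    match = re.search(r"max-age=(\d+)", headers.get("Cache-Control", ""))
    if match:
        return int(match.group(1))
    # Expires parsing, Age subtraction, heuristic freshness, shared vs
    # private caches, invalidation on unsafe methods... all omitted.
    return 0

print(freshness_lifetime({"Cache-Control": "public, max-age=3600"}))  # 3600
```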
The Apache HttpComponents project includes a client-side cache with multiple backing stores (memory, ehcache, memcached) as of version 4.1. See http://hc.apache.org/httpcomponents-client-ga/httpclient-cache/index.html As one of the authors of the caching module, I agree with the assertion that getting this right is non-trivial (I think we have close to 1000 unit tests to line up requirements from RFC 2616 with the implementation), so I would weigh the software investment against the operational overhead of running/scaling/maintaining the forward proxy cache. Jon
There was a thread on this here back in January[1]. As mentioned in that thread, for the Windows platform, a single line of code adds very good client-side caching support[2]: *request.CachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.Default);* Options for other platforms are mentioned in that thread, too. [1] http://tech.dir.groups.yahoo.com/group/rest-discuss/message/17219 [2] http://msdn.microsoft.com/en-us/library/system.net.webrequest.cachepolicy.aspx mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2011 - Aug 18-20 http://restfest.org
On Fri, Jul 1, 2011 at 12:10 AM, bryan_w_taylor <bryan_w_taylor@...> wrote: > I'm wondering if others agree with his views. Are there any client side libraries that > handle the trickiness for you? Is "it's hard to get right" a good enough reason. If you are in Ruby, the resourceful gem[1] provides in-process caching. If you are working with public/non-sensitive data, I think a caching proxy is an excellent choice. If, however, you are using HTTPS, the caching proxy approach basically falls apart completely (afaik). [1]: http://github.com/paul/resourceful Peter
On Jul 1, 2011, at 12:27 PM, Peter Williams wrote: > you are using HTTPS the > caching proxy approach basically falls apart completely (afaik). Not necessarily the case. If you set up the forward proxy as the one starting TLS, then the proxy can cache. The same goes for the server side too, where a reverse proxy cache can terminate TLS and serve cached representations. Subbu
Python's httplib2 has decent client side caching support IME.
On Sun, Jul 3, 2011 at 1:47 PM, Subbu Allamaraju <subbu@...> wrote: >> you are using HTTPS the >> caching proxy approach basically falls apart completely (afaik). > > Not necessarily the case. If you setup the forward proxy as the one > starting TLS, then the proxy can cache. The same goes for the server > side too where a reverse proxy cache can terminate TLS and serve > cached representations. Are there approaches to client side tls terminating reverse proxies that do not require the proxy to rewrite all the URIs in the response representations? Such rewriting might limit proxying's usefulness to systems based on media types with very wide reach. Peter
[sorry Erik, just found this in the moderation queue... Mark]
hello mike.
On 2011-06-14 9:17 , Mike Kelly wrote:
> I've now produced an initial draft of a spec for a minimalist, generic
> media type. Love to get some feedback + pointers on this
this looks very interesting! it has many similarities to something rosa
alarcon and i worked on that we called the "Resource Linking Language
(ReLL)", which probably had pretty much the same starting point.
however, one important difference may be that we see ReLL mostly as a
description language (even though we usually don't say that because this
is a seriously bad word in RESTful circles) that allows clients to
traverse across an interlinked set of resources, extracting links from
them through media-type specific selection languages (mostly XPath for
XML right now), and understanding those links based on link semantics
defined in ReLL. the main idea behind ReLL is that you can produce a
ReLL description of a set of interlinked resources without any need to
change them, so clients can start using ReLL without server/publishers
even knowing about it. we used that capability when we described web
pages via ReLL, and then had a ReLL-steered crawler harvesting RDF from
those pages, directed only by the declarative ReLL overlay on top of the
web pages.
we are still working on ReLL and unfortunately, there is no recent
publication (or let alone a specification) we could point you to, but
http://dret.net/netdret/publications#ala10a and
http://dret.net/netdret/publications#ala10c should be sufficient to
provide an overview of our approach and our language.
cheers,
dret.
--
erik wilde | mailto:dret@... - tel:+1-510-6432253 |
| UC Berkeley - School of Information (ISchool) |
| http://dret.net/netdret http://twitter.com/dret |
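To make the ReLL idea concrete, here is a toy illustration (invented example, not ReLL syntax): a declarative description maps media-type-specific selectors to link semantics, and a client extracts typed links from documents without the server knowing about the description at all.

```python
import xml.etree.ElementTree as ET

# A toy "description": element paths to look for links in, and the
# relation each carries. Entirely made up for illustration.
DESCRIPTION = {
    ".//item/link": "related",
    ".//author/uri": "author",
}

def extract_links(xml_text, description=DESCRIPTION):
    """Return (relation, target) pairs found via the declarative description."""
    root = ET.fromstring(xml_text)
    links = []
    for path, rel in description.items():
        for el in root.findall(path):
            if el.text:
                links.append((rel, el.text.strip()))
    return links
```

The point of the sketch: the XML documents are untouched; all the link semantics live in the overlay description, which is the property the crawler use case above relies on.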
I have to wonder if this isn't simply duplicating work that already exists.
Topic maps have a data model more than adequate to do this sort of thing (in transit with a BlackBerry is not the place to verify that robustly), and it's an international standard.
It is used most often in publishing and knowledge management but it is perfectly suited to a REST application.
The difficult bit is that it is VERY abstract (everything is either a topic, association or occurrence) which will cause trouble for some. It is also a data model (not a media type) with two (that I know of) representations (XTM and LTM).
I suggest that someone craft a sufficiently non-trivial problem and we all show how our solutions would apply.
Adam
Sent from my BlackBerry device on the Rogers Wireless Network
-----Original Message-----
From: Erik Wilde <dret@berkeley.edu>
Sender: rest-discuss@yahoogroups.com
Date: Tue, 21 Jun 2011 12:27:59
To: REST-Discuss<rest-discuss@yahoogroups.com>
Subject: [rest-discuss] HAL and ReLL
On Jul 3, 2011, at 7:35 PM, Peter Williams wrote: > Are there approaches to client side tls terminating reverse proxies > that do not require the proxy to rewrite all the URIs in the response > representations? Not to my knowledge. Links in representations (non HTML cases) are still rare in the wild. Subbu
On Tue, Jul 5, 2011 at 11:18 PM, Subbu Allamaraju <subbu@...> wrote: > Not to my knowledge. Links in representations (non HTML cases) are still rare in the wild. In my world they are very common. I see them in RDF, JSON and XML all the time. Peter
On Jul 6, 2011, at 9:37 AM, Peter Williams wrote: > On Tue, Jul 5, 2011 at 11:18 PM, Subbu Allamaraju <subbu@...> wrote: >> Not to my knowledge. Links in representations (non HTML cases) are still rare in the wild. > > In my world they are very common. I see them in RDF, JSON and XML all the time. Quite possible. There is a quite a bit of gap in what hypertext driven apps expect and what off-the-shelf proxies/caches can do today. In one of my previous projects we did build a rewriting forward proxy to forward requests over TLS to origin servers, but the use case is not related to caching. But it can certainly be done with custom-built proxies or using proxy-specific plugin APIs. Subbu
Hi all, To indicate which representation of a resource was sent, the server can use the Content-Location header. E.g., if a client wants an HTML version in Spanish of /news/40, the server sends this and adds Content-Location: /news/40.es.html But how can we get to the original, unnegotiated URI from /news/40.es.html? The authors of "RESTful Web Services" argue in the errata [1] that there is no inverse: "There ought to be an HTTP method [sic] that is the opposite of Content-Location, but that's too big a project to undertake in this venue." Recently, Google started listening to rel="canonical" Link headers [2], which they propose for different content types. However, is this a) an acceptable "standard" solution and b) can this be used for other differentiations (language etc.) as well? What is the best way to point from a representation-specific URI to a representation-agnostic URI? Cheers, Ruben [1] http://oreilly.com/catalog/errata.csp?isbn=9780596529260 [2] http://googlewebmastercentral.blogspot.com/2011/06/supporting-relcanonical-http-headers.html
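For concreteness, the header exchange in question might be sketched like this (hypothetical helper; whether rel="canonical" is the right relation for language variants is exactly the open question):

```python
def negotiated_response(generic_uri, lang, ext):
    """Build response headers for a negotiated variant of generic_uri.

    Content-Location names the variant that was actually sent; the Link
    header points back at the representation-agnostic resource.
    Illustrative sketch only -- URI layout is assumed, not prescribed.
    """
    variant = "%s.%s.%s" % (generic_uri, lang, ext)
    return {
        "Content-Location": variant,
        "Link": '<%s>; rel="canonical"' % generic_uri,
    }
```

A Spanish HTML request for /news/40 would then carry Content-Location: /news/40.es.html and a Link header pointing back at /news/40.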
Ruben Verborgh wrote: > > To indicate which representation of a resource was sent, the server > can use the Content-Location header. E.g., if a client wants an HTML > version in Spanish of /news/40, the server sends this and adds > Content-Location: /news/40.es.html > > But how can we get to the original, unnegotiated URI > from /news/40.es.html? > By using 303 redirects and cookies... > > The authors of "RESTful Web Services" argue in the errata [1] that > there is no inverse: "There ought to be an HTTP method [sic] that is > the opposite of Content-Location, but that's too big a project to > undertake in this venue." > I don't know about that. Try my demo, date/title links and the View menu work: http://charger.bisonsystems.net/conneg/ If you disable cookies, the system still works but there's no override of conneg, so you may not get the variant you request; 303 makes no such promise, which is why I chose it. > > Recently, Google started listening to rel="canonical" Link headers > [2], which they propose for different content types. However, is this > a) an acceptable "standard" solution and b) can this be used for > other differentiations (language etc.) as well? > As to a), Googlish beneficence has already begat #!, making it a de facto standard whether it's acceptable on technical merit or not. :-) As to b), I can't answer for how Google will interpret it, but I would say limit the semantics of rel='canonical' to the other case on [2]: "Another common situation in which rel='canonical' HTTP headers may help is when a website serves the same file from multiple URLs (for example when using a content distribution network) and the webmaster wishes to signal to Google the preferred URL." For the variants on my demo, which includes a polyglot document, which is canonical -- the XHTML served as text/html or that same XHTML served as application/xhtml+xml?
For the case of pointing an alias which responds 200, to a canonical URL for that resource, I have no problem with rel='canonical' but see using it on conneg variants (based on 200, not redirects) as an overload. IOW, it makes sense for different-language variants, which are mostly implemented with 301 redirects on conneg, not 200 responses. > > What is the best way to point from a representation-specific URI to a > representation-agnostic URI? > After wrangling with this very problem for some time, I was able to write working code based around 303 redirects and application-state- agnostic cookies. Google figured out my demo just fine, before I created the robots.txt file. I don't know the "best" way, only the way which made sense to me and works in practice. -Eric
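A rough guess at the mechanics Eric describes (not his actual code): the cookie, when present, overrides content negotiation, and the client is sent a 303 to the chosen variant.

```python
def choose_variant(accept, cookie, variants):
    """Pick a variant URI; the cookie overrides Accept when present.

    accept: preferred type key from the Accept header (simplified).
    cookie: dict parsed from the Cookie header, e.g. {"type": "xhtm"}.
    variants: mapping from a short type key to a variant URI.
    Returns a (status, location) pair for a 303 redirect.
    All names here are invented for illustration.
    """
    key = cookie.get("type") if cookie else None
    if key not in variants:
        # no (usable) cookie: fall back to plain conneg; 303 makes no
        # promise the client gets the variant it asked for
        key = accept
    location = variants.get(key, next(iter(variants.values())))
    return (303, location)
```

With cookies disabled the function degrades to ordinary conneg, which matches the "graceful degradation" reading in the follow-up.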
Hi Eric, > By using 303 redirects and cookies… That's certainly an interesting approach… > If you disable cookies, the system still works but there's no override > of conneg, so you may not get the variant you request; 303 makes no > such promise, which is why I chose it. …and I suppose one could interpret this as graceful degradation. Also, given the content of the cookie (e.g., "type=xhtm; view=main"), application state is also nicely kept on the client side. > I don't know the "best" way, only the way > which made sense to me and works in practice. Thanks very much for this detailed reply. Any other opinions on this? Does anybody know of other ways? Do you consider this missing inverse a hole in the spec, or can you see justifications for this? Cheers, Ruben
Dear ladies and sirs.
Please, forgive me for this long post, but I have to present the full
context, before actually asking my question. Whoever is going to read
this post through - thank you for your patience.
Our product consists of a server and multiple agents and we wish to
implement an agent upgrade service, so that agents can upgrade their
implementation silently without human intervention.
I would also like to make the upgrade process in one round-trip to the
service, otherwise I have to deal with issues like an agent getting
partial implementation and I really really do not want to allow it.
My current implementation uses overloaded POST method with the following
semantics:
* Agent periodically sends the server a mapping between the names of
the files constituting the current implementation and their respective
MD5 hash values. The mapping is represented in JSON.
* The service compares the snapshot received from the agent with the
local repository containing the most up-to-date agent implementation.
The service then composes the mapping between the names of the
modified/new files and their respective contents (zipped) and includes
it in the response. Deleted files are mapped with no content. The
response is represented in JSON again.
For instance, here is a sample request from an agent:
POST http://il-mark-lt/NC/AgentImplService/RepositoryDelta HTTP/1.1
Accept: application/json, application/xml, text/json, text/x-json,
text/javascript, text/xml, application/json
User-Agent: RestSharp 101.3.0.0
Content-Type: application/json
Host: il-mark-lt
Content-Length: 4013
Expect: 100-continue
Accept-Encoding: gzip, deflate
{
"Lib1.dll": "Ud/QDKYI8CPcWTxwBhfLjQ==",
"Lib2.dll": "2yjGP63c4PcdtkK+MQS+5g==",
"Lib3.dll": "1slfkwOzqfj/RDEYSKDE/Q==",
"Lib4.dll": "l7Q8q4Tn2P6DSPkZS7GMQA=="
}
If nothing has changed, then the service response would be:
HTTP/1.1 200 OK
Content-Length: 5
Content-Type: application/json
Server: Microsoft-HTTPAPI/2.0
Date: Thu, 07 Jul 2011 13:51:50 GMT
{}
Otherwise, if for instance, Lib2 was changed, Lib3 was deleted and Lib5
was added:
HTTP/1.1 200 OK
Content-Length: 591
Content-Type: application/json
Server: Microsoft-HTTPAPI/2.0
Date: Thu, 07 Jul 2011 13:59:16 GMT
{
"Lib2.dll": "H4sIAAAA.........",
"Lib3.dll": null,
"Lib5.dll": "OPUguh89........."
}
(I have omitted the actual contents for the sake of clarity)
There is a mechanism on the server side that prevents the service from
serving requests while the local repository is being updated, so
everything is fine on that end.
Another note to make is that it is highly unlikely that anyone else, but
our agents will ever use this service.
What I am interested to know is whether it is feasible to implement it
with a single GET request utilizing some clever conditional and/or
partial GET semantics?
Thank you very much.
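For what it's worth, the hash values in the example look like base64-encoded MD5 digests, so the agent-side snapshot could be produced along these lines (file contents passed in directly for illustration; a real agent would read them from disk):

```python
import base64
import hashlib
import json

def snapshot(files):
    """Map file name -> base64(MD5(content)), matching the request body.

    files: dict of name -> bytes content. Names here are hypothetical.
    """
    return {
        name: base64.b64encode(hashlib.md5(data).digest()).decode("ascii")
        for name, data in files.items()
    }

# the JSON body the agent would POST (or send however the final design decides)
body = json.dumps(snapshot({"Lib1.dll": b"example bytes"}))
```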
What about using ETags? With ETags the client can send an If-None-Match
header to do a conditional GET; it will receive a 304 Not Modified status as
long as the server files have not changed. If files have changed, the server
issues an ETag header along with the response, which the client holds.
Ideally the server would then return the full file list, in order to conserve
server resources. An alternative would be to store the deltas for each ETag.
Then when a client request comes in you could determine, based on the ETag,
which files have changed. As long as only deltas are stored, the storage cost
would be minimal.
Sent from my Windows Phone
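The flow Glenn describes could be sketched like this on the server side (names invented, heavily simplified):

```python
def handle_get(if_none_match, current_etag, build_delta):
    """Serve the repository resource with ETag validation.

    if_none_match: the ETag the client presented, or None on first contact.
    current_etag: ETag of the server's current repository snapshot.
    build_delta: callable(old_etag) -> response body for that client.
    Returns (status, etag, body); body is None on 304.
    """
    if if_none_match == current_etag:
        return (304, current_etag, None)  # nothing changed, empty response
    # first contact (None) gets everything; a known old ETag gets its delta
    return (200, current_etag, build_delta(if_none_match))
```

The whole upgrade still happens in one round trip: either a 304, or a 200 whose body carries the complete delta for the snapshot the client's ETag identifies.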
1. I was thinking that a client receives an ETag that represents the current
state of the files on the server. The first time the client connects it
would not send an ETag thus it would receive all files along with an ETag
representing that snapshot. Once files change the etag would change. As the
client sent an original tag as part of the if-none-match the server can use
that to determine the delta of what the client has.
2. Not sure the URI scheme matters; it could be anything. The important
thing is the resource. The ETag does not specify the delta, it
specifies what version of the files the client has at any point in time.
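Computing the delta between the snapshot an ETag identifies and the current repository is then a small set operation (sketch; the placeholder strings stand in for the zipped file contents from the original post):

```python
def delta(old, new):
    """Diff two name->hash snapshots into the response mapping.

    Changed/new files map to a content placeholder, deleted files to
    None, mirroring the JSON bodies shown earlier in the thread.
    """
    out = {}
    for name, digest in new.items():
        if old.get(name) != digest:
            out[name] = "content-of-%s" % name  # placeholder for zipped bytes
    for name in old:
        if name not in new:
            out[name] = None  # deleted on the server
    return out
```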
On Sun, Jul 10, 2011 at 9:21 PM, Mark Kharitonov
<mark.kharitonov@...>wrote:
> Since I do not want to find myself dealing with agents getting partial
> upgrades the whole upgrade must be done in a single round trip. I.e. one
> conditional GET statement - one response.
>
> Now this brings the following questions:
>
> 1. Do you mean to use multiples ETag values or encode the request
> file-hash mapping as a single ETag value which the server then unfolds into
> the original mapping?
> 2. What is the name of the resource exposed by the server? Should it be
> something like "delta" or like "snapshot" ?
>
> For example, should the client GET *
> http://bla-bla/agent-implementation/delta*
> or
> *http://bla-bla/agent-implementation/snapshot* ?
>
> If it is *delta*, then the delta specification is given in the etag,
> which seems a bit strange to me (I refrain from utilizing the URI query
> component for fearing to overflow it). But if it is a well known technique -
> I am fine with it.
>
> If it is *snapshot*, then one has to be aware that it is only a partial
> snapshot, because not all the files in the repository are returned. But,
> then where is it expressed that the snapshot is partial? Should I employ the
> partial GET semantics as well?
>
>
>
Hi Lasse, > If your client can control headers, you could use Accept-Language (http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.4) - leaving out the Accept-Language header would then be the canonical representation. > > Thoughts? Could you clarify this with an example? Cheers, Ruben
Hello
This is my first post, which is going to be the first of many. The company I
work for has started to create RESTful services, with most of the
development being outsourced.
Our first service is for user authentication. When a user enters an
incorrect username and password the browser receives a status code of
200 and the response body representation is:
{
  "state": "FAILED",
  "responseCode": 400,
  "timestamp": 1310378271300,
  "anies": [
    {
      "errorCode": "-6600",
      "errorType": "MSG_ERR_EMPTY_ACCOUNT_API_KEY",
      "translation": { "lang": "en", "value": "Provided login is empty" },
      "property": "apiKey"
    },
    {
      "errorCode": "-6601",
      "errorType": "MSG_ERR_EMPTY_ACCOUNT_API_PASSWORD",
      "translation": { "lang": "en", "value": "Provided password is empty" },
      "property": "apiPassword"
    }
  ]
}
The browser interacts with a controller which in turn calls a web
service. We will have clients interacting with the services directly.
The representation above contains the state of failure (400), an
internal error code so a client of the service can look up what the
error is in a particular language and a translation of the error which
the browser will use to display on screen. The "property" attribute is
the form element/parameter the error corresponds to.
This feels incorrect to me.
1. Should the browser receive a status code of 400 and then look at
the representation to see why it failed?
2. Should there be an attribute for translated text or would it make
sense to have the text already translated if the accept header is en,
fr, etc?
Thank you
Hi, > • Should the browser receive a status code of 400 and then look at the representation to see why it failed? A "200" status code indicates success, and this is not the case. Currently, your example is not RESTful, i.e., you're not using HTTP as intended, but rather as a tunnel protocol. But you shouldn't return 400, which would mean that the request has a bad syntax. Rather, return 401 Unauthorized, which can be used for both "no credentials" and "wrong credentials". > • Should there be an attribute for translated text or would it make sense to have the text already translated if the accept header is en, fr, etc? You should indeed use the Accept-Language header and try to match the preferences of the user. If no language is specified, use a default. Cheers, Ruben
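In other words, the status line should carry the outcome and the body the detail. A sketch of what the login handler might return (field names borrowed from the example above; the WWW-Authenticate scheme here is made up):

```python
import json

def login_response(api_key, api_password):
    """Return (status, headers, body) for an authentication attempt.

    On failure the status code (401) says *that* it failed; the JSON
    body says *why*, per field. Illustrative sketch only.
    """
    errors = []
    if not api_key:
        errors.append({"errorCode": "-6600", "property": "apiKey"})
    if not api_password:
        errors.append({"errorCode": "-6601", "property": "apiPassword"})
    if errors:
        # 401, not 200: clients branch on the status, humans read the body
        headers = {"WWW-Authenticate": 'Custom realm="api"',
                   "Content-Type": "application/json"}
        return (401, headers, json.dumps({"anies": errors}))
    return (200, {"Content-Type": "application/json"},
            json.dumps({"state": "OK"}))
```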
> The confusion arising here is because the web browser is talking to a controller which in turn talks to the web services, the web service will return 400/401 to the server-side controller but because the request was still successful the web browser receives a 200 OK. I disagree here. The request for the resource was NOT successful. Yes, the client was able to reach the server and the server was able to return a response, so the HTTP communication was successful. But the request was not. If it was, the client would have bypassed authentication. It does not really matter how it works behind the scenes. If you're doing REST, getting a resource without proper authentication should return 401. > JavaScript is then used to validate the JSON to see if it was successful. I would like to just confirm with you as it is difficult finding any material on a web browser using a controller. All the examples and books I have read are clients/browser talking directly to the web services. That doesn't make any difference, as I argued before. > If a client is talking to the web services directly then we will return 401 with a message of validation failed. Why would you want to act differently? Well, you can… but then it's not REST. Can you clarify the function of the controller? Why is it in between, and is this necessary at all? Cheers, Ruben
Ruben Verborgh wrote: > > It does not really matter how it works behind the scenes. If you're > doing REST, getting a resource without proper authentication should > return 401. > Your responses are mostly spot-on; however, there's nothing inherently wrong with a 200 response for unauthorized users. An authentication challenge is not a requirement of REST or HTTP. Another option is 403. But you're right in this case -- the OP is breaking the layered-system constraint by returning the actual error in the payload instead of as a status code, based on an implementation detail which (in REST) is opaque behind the uniform interface. What happens when one back-end process talks to another over HTTP (or any other protocol) is none of the requesting clients' business; the request it made failed, whether or not some back-end HTTP request succeeded. -Eric
"rest_ilyas" wrote: > > The browser interacts with a controller which in turn calls a web > service. > The REST architectural style doesn't use request brokers. This may well be how your system works in practice, but conceptually it's at odds with the REST method of user-agents interacting directly with object interfaces. Design your controller as a web service, instead, which encapsulates the other web service (?) while abstracting away any need for user-agents to understand the other service. -Eric
> there's nothing inherently > wrong with a 200 response for unauthorized users. An authentication > challenge is not a requirement of REST or HTTP. Another option is 403. I agree that it is not a requirement of REST or HTTP, but it is a requirement for this particular application (as far as I understood from the question). Here, an unauthorized user cannot access the resource, so it seems appropriate to return an error code. Otherwise, the payload should be parsed to understand that authentication is required, which is probably not what we want. Cheers, Ruben
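Ruben's point (relay the back-end's error status instead of wrapping it in a 200 whose payload must be parsed) can be sketched as follows. This is a minimal illustration with invented names, not anyone's actual implementation:

```python
# Hypothetical sketch: a server-side controller that fronts a back-end
# web service and propagates the upstream status code (401, 400, ...)
# instead of always answering 200 OK.

def call_backend(path, token):
    """Stand-in for the real upstream HTTP call; returns (status, body)."""
    if token != "secret":
        return 401, {"error": "authentication required"}
    return 200, {"resource": path}

def controller(path, token):
    status, body = call_backend(path, token)
    # Relay the upstream status: the browser sees 401 directly, rather
    # than a 200 whose JSON must be inspected to discover the failure.
    return status, body
```

The browser (or any other client) can then branch on the status code alone, exactly as it would when talking to the web service directly.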
Ruben Verborgh wrote: > > Do you consider this missing inverse a hole in the spec, or can you > see justifications for this? > That's a tricky question. There's obviously room for improvements to content negotiation in a successor protocol to HTTP. In HTTP 1.1, no instance of HTTP being stateless may be considered a hole, i.e. here's the justification in a different context: http://lists.w3.org/Archives/Public/ietf-http-wg/2011JulSep/0052.html Different story if redirects would send REFERER, so maybe that's the hole which, if patched, would cause me to use it instead of cookies. (My approach reduces response latency by bypassing all those expensive string comparisons after the initial hit, while enabling RESTful skin switching -- my demo's alternate skin is the same as its primary skin, so it's not obvious that's what I've done. But I don't see this as something missing in the protocol.) -Eric
On Mon, Jul 18, 2011 at 2:50 PM, Eric J. Bowman <eric@...> wrote: > Ruben Verborgh wrote: > > > > Do you consider this missing inverse a hole in the spec, or can you > > see justifications for this? > > > > That's a tricky question. There's obviously room for improvements to > content negotiation in a successor protocol to HTTP. In HTTP 1.1, no > instance of HTTP being stateless may be considered a hole I think I've misunderstood you, but.. what does statelessness of HTTP 1.1 have to do with omitting the inverse of a CL? Cheers, Mike
Mike Kelly wrote: > > what does statelessness of HTTP 1.1 have to do with omitting the > inverse of a CL? > Using the example in the OP, /news/40.es.html could be the response Content-Location for more than one URI, i.e. /news/40 and /headline- news/40 -- think "author's preferred version" from Roy's thesis. So how does the server know which "inverse" to tag a response to a request for /news/40.es.html with, unless the client sends the context as part of the request? REFERER is a partial solution. A cookie can tell the server if the user-agent is in /news/ or /headline-news/ -- with one or the other set as default because some folks disable both REFERER and cookies -- whatever the solution is from there, 303 redirection or a Link header. So the inverse problem can only be partially solved without introducing state to the protocol, which is easily defeated by privacy settings. As this did not occur to Google, we'll all be stuck supporting or accounting for a legacy partial solution long after an open-standard solution comes about which fully solves the problem; this happens more often than not when the big companies go blazing their own trail on the Web. -Eric
On Wed, Jul 20, 2011 at 1:38 PM, Eric J. Bowman <eric@...> wrote:
>
>
> Using the example in the OP, /news/40.es.html could be the response
> Content-Location for more than one URI, i.e. /news/40 and /headline-
> news/40 -- think "author's preferred version" from Roy's thesis. So
> how does the server know which "inverse" to tag a response to a request
> for /news/40.es.html with, unless the client sends the context as part
> of the request?
>
Because one of them objectively is (or could be) the canonical URI. Why is
it important that the server use the client's context to decide the
canonical resource?
Probably, the server could decide on its own whether /news/40 or
/headline-news/40 is the preferred "please bookmark this" URI.
Conversely, if I came from /headline-news/40 and got connegged 200
w/Content-Location /news/40.es.html, and later retrieved /news/40.es.html
(perhaps by following some self link), then I don't see the need for the
server to know that I came "via" /headline-news/.
REFERER is a partial solution. A cookie can tell the server if the
> user-agent is in /news/ or /headline-news/ -- with one or the other
> set as default because some folks disable both REFERER and cookies --
> whatever the solution is from there, 303 redirection or a Link header.
>
A cookie just introduces (conversational) state. If I have three tabs open,
one on /news/40 and the other on /headline-news/40 and the third one on
/news/40.es.html, then the third tab's {headers|content} depend on which of
the other tabs I refreshed last.
Referer also introduces (conversational) state, although it's managed within
each tab, so it seems to work. But a lot of requests don't have referer.
Like following a bookmark from a browser...
--
-mogsie-
Ruben Verborgh wrote: > Recently, Google started listening to rel="canonical" Link headers [2], which they propose for different content types. However, is this a) an acceptable "standard" solution and b) can this be used for other differentiations (language etc.) as well? a) yes; b) use rel=alternate and hreflang=Language-Tag
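The rel=alternate/hreflang suggestion translates into Link headers like these. A minimal sketch of the Web Linking serialization (RFC 5988, now RFC 8288), using the example URIs from earlier in this thread:

```python
# Sketch: serializing language alternates as an HTTP Link header,
# following the Web Linking syntax (RFC 5988 / RFC 8288).

def link_header(alternates):
    """alternates: list of (uri, hreflang) pairs -> Link header value."""
    return ", ".join(
        '<{0}>; rel="alternate"; hreflang="{1}"'.format(uri, lang)
        for uri, lang in alternates
    )

# A response for /news/40 could then advertise its language variants:
header = link_header([("/news/40.es.html", "es"), ("/news/40.en.html", "en")])
```

A server emitting this alongside rel="canonical" lets clients discover both the preferred URI and the other language variants without parsing the payload.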
Hi, I've been reading a lot about how to do "proper" REST this week and the more I read, the more I'm lost, especially the HATEOAS part I fear. First, to give some context, the company I work for develops mobile applications for clients. Most of the time, they want to get an iPhone native application, an Android application and a traditional Web based Application to cover the other mobile phones out there. The way we are currently doing things is the good old (bad?) RPC over HTTP way. We define a bunch of URIs which are coded inside the different apps, we exchange data as JSON, etc. This week, trying to do things in a better way, I've begun a more serious study of REST and how to do it properly. What I really can't wrap my head around is how, technically, to have HATEOAS in a native application? I mean, when building a native application, I have tables to display lists, buttons to do some things, etc. My understanding is that all those should be displayed based on the data (hypermedia) received from the server. Is that right? A concrete example would be a hotel room rental service. The person would open the application and have fields to enter the from/to dates. They would then tap a "Get Available Rooms" button. The app would call the server and get back a list of rooms along with prices and other details. From there the person could select one room and rent it. The RPC way of coding this is obvious to me but I have no idea how I'd do that in a proper REST way! What bugs me is that every way I look at it, the client application would still be tightly coupled to the service. I understand how I would only need to GET the http://rent-a-room.com URI hardcoded and then in the response I would have the http://rent-a-room.com/available-rooms URI given. But... My application would expect each "call" to return some pre-defined data and "rel", those can't appear out of the blue?! 
I guess what I'm trying to say is that both the business process and the data exchanged must be known to my client application at the moment of coding it, and those can't change without breaking existing clients. But reading about REST, everyone is talking about loose coupling and not breaking clients... I just don't see it. What am I missing? Thanks a lot and sorry if it is a stupid question!
>>>>> "Daniel" == Daniel Roussel <daniel@...> writes:
Daniel> Hi, I've been reading a lot about how to do "proper" REST
Daniel> this week and the more I read, the more I'm lost,
Daniel> especially the HATEOAS part I fear.
There are levels of REST, so I suggest you don't worry too much about
HATEOAS.
Using properly named URLs and verbs will already give most of the
benefits.
Daniel> I guess what I'm trying to say is that both the business
Daniel> process and the data exchanged must be known to my client
Daniel> application at the moment of coding it, and those can't
Daniel> change without breaking existing clients. But reading
Daniel> about REST, every is talking about loose coupling and not
Daniel> breaking clients... I just don't see it.
Daniel> What am I missing?
Daniel> Thanks a lot and sorry if it is a stupid question!
It isn't, but I myself don't worry too much about this. If your URLs
are stable, and you provide redirects, your server can serve clients
for longer than they will exist in many cases.
--
All the best,
Berend de Boer
------------------------------------------------------
Awesome Drupal hosting: https://www.xplainhosting.com/
On Jul 28, 2011, at 3:23 AM, Berend de Boer wrote: > There are levels of REST, so I suggest you don't worry too much about > HATEOAS. This is serious mis-information, Berend. The intention behind using the REST architectural style is to create systems that have a well determined set of properties (certain performance, certain scalability, certain evolvability ...[1]). These properties are *only* induced if *all* of REST's constraints are applied. There is no 'I'll do half of the constraints and get half of the benefits' approach. There are no levels of REST. It is an either-or thing. And, in addition to that, the hypermedia constraint is *the* most important one. It is the hardest to understand (at least for me it was) but the most enlightening one also. Instead of putting it aside, I suggest you dive right into it, all the way until you grok it. Jan [1] See the dissertation for all of them.
On Jul 27, 2011, at 5:35 PM, Daniel Roussel wrote: > Hi, > > I've been reading a lot about how to do "proper" REST this week and the more I read, the more I'm lost, especially the HATEOAS part I fear. Don't worry - it was the same for me. It takes your whole mind to shift. Mostly, dumping off OO-brain damage :-) > > First, to give some context, the company I work for develops mobile applications for clients. Most of the time, they want to get an iPhone native application, an Android application and a traditional Web based Application to cover the other mobile phones out there. > > The way we are currently doing things is the good old (bad?) RPC over HTTP way. We define a bunch of URI which are coded inside the different apps, we exchange data as JSON, etc. This week, trying to do things in a better way, I've begin a more serious study of REST and how to do it properly. > > What I really can't wrap my head around is how, technically, have HATEOAS in a native application? I mean, when building a native application, I have tables to display lists, buttons to do some things, etc. My understanding is that all those should be displayed based on the data (hypermedia) received from the server. Is that right? Yes, just like a browser works. > > A concrete example would be a hotel room rental service. The person would open the application and have fields to enter the from/to dates. It would then tap a "Get Available Rooms". The app would call the server and get back a list of rooms along with prices and other details. From there the person could select one room and rent it. > > The RPC way of coding this is obvious to me but I have no idea how I'd do that in a proper REST way! Well, if the end user is a human - use HTML. > What bugs me is that every way I look at it, the client application would still be tightly coupled to the service. 
I understand how I would only need to GET the http://rent-a-room.com URI hardcoded and then in the response I would have the http://rent-a-room.com/available-rooms URI given. But... My application would expect each "call" to return some pre-defined data and "rel", those can't appear out of the blue?! Right - your client side code should only *react* on stuff it finds, not *expect* it. If there is a human user directly involved, let the human do the 'expecting' (much like you expect certain stuff from Amazon.com even if your browser does not). If the client side code needs to do more stuff without user involvement than a browser (e.g. browser fetches stylesheets, JS, images) you need to roll your own media type and build your app based on the hypermedia controls you define in that media type. > > I guess what I'm trying to say is that both the business process and the data exchanged must be known to my client application at the moment of coding it, and those can't change without breaking existing clients. But reading about REST, every is talking about loose coupling and not breaking clients... I just don't see it. > > What am I missing? > Nothing, really. You are spot on. The thing is that in networked systems the client can never be sure that the server does not change. Instead, the client must be coded to take the least for granted and make the most out of HTTP's error responses to fail most gracefully. REST does not make the problem go away that RPC-style approaches hide. REST simply makes the fact explicit that control over uncontrollable peers is an illusion - and comes with a bunch of suggestions for how to suffer the least from the effects of evolving peers. Most of all REST changes the way you think about networked applications in the first place. Jan > Thanks a lot and sorry if it is a stupid question! > >
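Jan's "react, don't expect" advice can be sketched as a client that follows only the links actually present in a representation. The JSON link structure below is invented purely for illustration:

```python
# Sketch: a hypermedia client that *reacts* to the rels it finds rather
# than assuming they exist. A missing or unknown rel is a normal
# condition to degrade on, not a hard-coded expectation to crash over.

def find_link(representation, rel):
    """Return the href of the first link with the given rel, or None."""
    for link in representation.get("links", []):
        if link.get("rel") == rel:
            return link.get("href")
    return None

doc = {"links": [{"rel": "available-rooms", "href": "/available-rooms"}]}

rooms = find_link(doc, "available-rooms")  # present: follow it
checkout = find_link(doc, "checkout")      # absent: disable that feature
```

The app enables the "Get Available Rooms" behaviour only when the rel shows up, so the server can add, move, or withdraw transitions without breaking the client.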
Sometimes, we can go on and develop a client solution using web apps, but sometimes there is no way out and we need to do a native application. I read some parts of Mr. Fielding's thesis again and many of his comments on his blog and I think what wasn't clear (still not totally I fear) to me was what knowledge should be exposed "a priori" and what should be learned "a posteriori". My initial understanding was that "almost" nothing was to be known a priori and that did not make any sense because without some semantic knowledge of the received media, a client application can do nothing useful. What good is it to get a bunch of URIs if I have no idea what they are! Now, my understanding of it is that what MUST be known a priori are the Media Types which will be exchanged along with the possible relationships. A particular client would obviously be coded to support this/those media types. Just as a browser understands a resource of type text/html, image/jpeg, etc, my app would understand resources of type application/rent-a-room+xml for example. This is the semantic knowledge needed to perform useful work. This is how a client knows what relation types to look for to navigate. This is how it can know what to present to the screen and how. So in essence, I believe that my theoretical "Room Rental" application could be compared to a web browser which handles "Rent-a-Rooms" documents instead of HTML documents. And what this means, is that this "Rent-a-Room" browser could navigate any server that is serving resources of the type "application/rent-a-room+xml" and on the flip side, a server could provide room rental services to anyone who understands this content type without anyone knowing any implementation details. Am I far off or am I starting to get it a bit more? 
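Daniel's browser analogy (a client keyed off the media types it understands, not off URI patterns) can be sketched as a dispatch table on Content-Type. The handler and media type names below are hypothetical:

```python
# Sketch: dispatching on Content-Type the way a browser picks a handler
# for text/html vs. image/jpeg. Unrecognized types are refused rather
# than guessed at.

def render_rooms(body):
    # Placeholder for real "Rent-a-Room" document processing.
    return "rooms:" + body

HANDLERS = {
    "application/vnd.rent-a-room+xml": render_rooms,
}

def handle_response(content_type, body):
    handler = HANDLERS.get(content_type)
    if handler is None:
        raise ValueError("unsupported media type: " + content_type)
    return handler(body)
```

Any server emitting that media type can then be navigated by this client, which is exactly the decoupling Daniel describes.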
--- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > > > On Jul 27, 2011, at 5:35 PM, Daniel Roussel wrote: > > > Hi, > > > > I've been reading a lot about how to do "proper" REST this week and the more I read, the more I'm lost, especially the HATEOAS part I fear. > > Don't worry - it was the same for me. It takes your whole mind to shift. Mostly, dumping off OO-brain damage :-) > > > > > First, to give some context, the company I work for develops mobile applications for clients. Most of the time, they want to get an iPhone native application, an Android application and a traditional Web based Application to cover the other mobile phones out there. > > > > The way we are currently doing things is the good old (bad?) RPC over HTTP way. We define a bunch of URI which are coded inside the different apps, we exchange data as JSON, etc. This week, trying to do things in a better way, I've begin a more serious study of REST and how to do it properly. > > > > What I really can't wrap my head around is how, technically, have HATEOAS in a native application? I mean, when building a native application, I have tables to display lists, buttons to do some things, etc. My understanding is that all those should be displayed based on the data (hypermedia) received from the server. Is that right? > > Yes, just like a browser works. > > > > > > A concrete example would be a hotel room rental service. The person would open the application and have fields to enter the from/to dates. It would then tap a "Get Available Rooms". The app would call the server and get back a list of rooms along with prices and other details. From there the person could select one room and rent it. > > > > The RPC way of coding this is obvious to me but I have no idea how I'd do that in a proper REST way! > > Well, if the end user is a human - use HTML. > > > What bugs me is that every way I look at it, the client application would still be tightly coupled to the service. 
I understand how I would only need to GET the http://rent-a-room.com URI hardcoded and then in the response I would have the http://rent-a-room.com/available-rooms URI given. But... My application would expect each "call" to return some pre-defined data and "rel", those can't appear out of the blue?! > > Right - your client side code should only *react* on stuff it finds, not *expect* it. If there is a human user directly involved, let the human do the 'expecting' (much like you expect certain stuff from Amazon.com even if your browser does not). > > If the client side code needs to do more stuff without user involvement than a browser (e.g. browser fetches stylesheets, JS, images) you need to roll you own media type and build your app based on the hypermedia controls you define in that media type. > > > > > I guess what I'm trying to say is that both the business process and the data exchanged must be known to my client application at the moment of coding it, and those can't change without breaking existing clients. But reading about REST, every is talking about loose coupling and not breaking clients... I just don't see it. > > > > What am I missing? > > > > Nothing, really. You are spot on. The thing is that in networked systems the client can never be sure that the server does not change. Instead, the client must be coded to take the least for granted and make the most out of HTTP's error responses to fail most gracefully. > > REST does not make the problem go away that is hidden by RPC-style approaches. REST simply makes the fact explicit that control over uncontrollable peers is an illusion. - and comes with a bunch of suggestions how to suffer the least from the effects of evolving peers. > > Most of all REST changes the way you think about networked applications in the first place. > > Jan > > > > Thanks a lot and sorry if it is a stupid question! 
> > > > > --- In rest-discuss@yahoogroups.com, "Daniel Roussel" <daniel@...> wrote: > > Hi, > > I've been reading a lot about how to do "proper" REST this week and the more I read, the more I'm lost, especially the HATEOAS part I fear. > > > First, to give some context, the company I work for develops mobile applications for clients. Most of the time, they want to get an iPhone native application, an Android application and a traditional Web based Application to cover the other mobile phones out there. > > The way we are currently doing things is the good old (bad?) RPC over HTTP way. We define a bunch of URI which are coded inside the different apps, we exchange data as JSON, etc. This week, trying to do things in a better way, I've begin a more serious study of REST and how to do it properly. > > What I really can't wrap my head around is how, technically, have HATEOAS in a native application? I mean, when building a native application, I have tables to display lists, buttons to do some things, etc. My understanding is that all those should be displayed based on the data (hypermedia) received from the server. Is that right? > > A concrete example would be a hotel room rental service. The person would open the application and have fields to enter the from/to dates. It would then tap a "Get Available Rooms". The app would call the server and get back a list of rooms along with prices and other details. From there the person could select one room and rent it. > > The RPC way of coding this is obvious to me but I have no idea how I'd do that in a proper REST way! What bugs me is that every way I look at it, the client application would still be tightly coupled to the service. I understand how I would only need to GET the http://rent-a-room.com URI hardcoded and then in the response I would have the http://rent-a-room.com/available-rooms URI given. But... 
My application would expect each "call" to return some pre-defined data and "rel", those can't appear out of the blue?! > > I guess what I'm trying to say is that both the business process and the data exchanged must be known to my client application at the moment of coding it, and those can't change without breaking existing clients. But reading about REST, every is talking about loose coupling and not breaking clients... I just don't see it. > > What am I missing? > > Thanks a lot and sorry if it is a stupid question! >
I agree that Berend is wrong to suggest not worrying about HATEOAS. It's very powerful if you can finally get your head around it. However, I don't agree that you get *none* of the benefits of REST unless you apply *all* of the constraints. Roy Fielding says that if you're not doing HATEOAS, you're not REST. Fair enough. He coined the term, it's his dissertation, he gets to say. But can't you get most of the performance and scalability benefits as well as some robustness from simply building CRUD services correctly using the HTTP verbs and response codes? This reaches level two in the Richardson maturity model and is the foundation of Amazon's S3 storage system. That system may not be RESTful, but it seems to scale and perform quite well. The challenge for me in understanding HATEOAS is when you try to apply it to non-HTML user interfaces, such as iPhone apps. More specifically, the confusion between a RESTful service and a REST-friendly client. Say for example, you have a requirement that the application has to work offline and then sync up data later when it has a network connection. You can do all that syncing up using truly RESTful services, but how in the world can you have the server driving user interface application state when you aren't even connected to the server? So, the "application state" in this example has to be limited to the sync logic - the machine-to-machine communication. But then for the user-interface part, you still have to have state driven by your understanding of the state transitions at the time that you wrote the code, thus tightly coupling your user interface to the server state-transitions. However, the fact that a client must make assumptions about the service does not mean that the service cannot be RESTful. You just can't always build a RESTful client. I'm open to being wrong about these things, but this is the way I understand it. 
On Jul 28, 2011, at 4:34 AM, Jan Algermissen wrote: > > On Jul 28, 2011, at 3:23 AM, Berend de Boer wrote: > > > There are levels of REST, so I suggest you don't worry too much about > > HATEOAS. > > This is serious mis-information, Berend. > > The intention behind using the REST architectural style is to create systems that have a well determined set of properties (certain performance, certain scalability, certain evolvability ...[1]). These properties are *only* induced if *all* of RESTs constraints are applied. There is no 'I'll do half of the constraints and get half of the benefits' approach. > > > There are no levels of REST. It is an either-or thing. > > > And, in addition to that, the hypermedia constraint is *the* most important one. It is the hardest to understand (at least for me it was) but the most enlightening one also. Instead of putting it aside, I suggest you dive right into it, all the way until you grok it. > > Jan > > [1] See the dissertation for all of them. > >
I think you are probably asking Jan, but as far as I'm concerned, yes you fundamentally get it. Well said. On Jul 28, 2011, at 10:23 AM, Daniel Roussel wrote: > Sometimes, we can go on and develop a client solution using web apps, but sometime there is no way out and we need to do a native application. > > I read some parts of Mr. Fielding thesis again and many of his comments on his blog and I think what wasn't clear (still not totally I fear) to me was what knowledge should be exposed "a priori" and what should be learned "a posteriori". My initial understanding was that "almost" nothing was to be known a priori and that did not make any sense because without some semantic knowledge of the received media, a client application can do nothing useful. What good is it to get a bunch of URI if I have no idea what they are! > > Now, my understanding of it is that what MUST be known a priori are the Media Types which will be exchanged along with the possible relationship. A particular client would obviously be coded to support this/those media types. Just as a browser understands a resource of type text/html, image/jpeg, etc, my app would understand resources of type application/rent-a-room+xml for example. > > This is the semantic knowledge needed to perform useful work. This is how a client knows what relation types to look for to navigate. This is how it can know what to present to the screen and how. So in essence, I believe that my theoretical "Room Rental" application could be compared to a web browser which handles "Rent-a-Rooms" documents instead of HTML documents. And what this means, is that this "Rent-a-Room" browser could navigate any server that is serving resources of the type "application/rent-a-room+xml" and on the flip side, a server could provide room rental services to anyone who understand this content type without anyone knowing any implementation details. > > Am I far off or am I starting to get it a bit more? > > >
> On Jul 28, 2011, at 3:23 AM, Berend de Boer wrote: > >> There are levels of REST, so I suggest you don't worry too much about >> HATEOAS. > > This is serious mis-information, Berend. > > The intention behind using the REST architectural style is to create systems that have a well determined set of properties (certain performance, certain scalability, certain evolvability ...[1]). These properties are *only* induced if *all* of RESTs constraints are applied. There is no 'I'll do half of the constraints and get half of the benefits' approach. > > There are no levels of REST. It is an either-or thing. Jan - let's not do this to REST. It is a constraint-driven development of software architecture for networked apps. There are many many ways to get certain quality attributes, and making reasonable tradeoffs is an essential part of building that architecture. Subbu
"Daniel Roussel" wrote: > > Now, my understanding of it is that what MUST be known a priori are > the Media Types which will be exchanged along with the possible > relationship. A particular client would obviously be coded to > support this/those media types. Just as a browser understands a > resource of type text/html, image/jpeg, etc, my app would understand > resources of type application/rent-a-room+xml for example. > My usual caveats still apply -- this isn't quite REST, because intermediaries won't understand the media type (which should be registered as application/vnd.rent-a-room+xml as your syntax is reserved for registered, standards-track media types; or, use application/x.rent- a-room+xml if you don't intend to register your media type). On the Web, the desired property of scalability is achieved by re-using ubiquitous media types. There is nothing about renting a room which can't be expressed as HTML, so why not re-use HTML? That way, all the vast-and-sundry components which do interesting things with ubiquitous media types like text/html can participate in the communication. When you send unknowns in Content-Type, you limit yourself to caching, and maybe not even that as many caches are configured to only cache a handful of ubiquitous media types which cover the bulk of the traffic they encounter. Why waste cache resources on media types nobody uses? What do links look like in application/vnd.rent-a-room+xml, and how are intermediaries supposed to recognize them as links? Various other desired properties of REST are impacted as well, like maintainability -- better to re-use standard libraries most developers understand, than custom libraries code maintainers have to be trained on. Most HTTP components out there have a-priori knowledge of <a> in multiple media types, which is why they can follow links. REST is about playing to the capabilities of the deployed Web infrastructure, rather than bucking them. -Eric
Are there any intermediaries that really care about specific media types? If so, it must be the devil's own work trying to keep them up to date as the set of standard media types grows, after those intermediaries have been deployed. Jim
On Thu, Jul 28, 2011 at 7:36 PM, Eric J. Bowman <eric@...> wrote: > "Daniel Roussel" wrote: > > > > Now, my understanding of it is that what MUST be known a priori are > > the Media Types which will be exchanged along with the possible > > relationship. A particular client would obviously be coded to > > support this/those media types. Just as a browser understands a > > resource of type text/html, image/jpeg, etc, my app would understand > > resources of type application/rent-a-room+xml for example. > > > > My usual caveats still apply -- this isn't quite REST, because > intermediaries won't understand the media type (which should be > registered as application/vnd.rent-a-room+xml as your syntax is reserved > for registered, standards-track media types; or, use application/x.rent-a-room+xml > if you don't intend to register your media type). On > the Web, the desired property of scalability is achieved by re-using > ubiquitous media types. There is nothing about renting a room which > can't be expressed as HTML, so why not re-use HTML? That way, all the > vast-and-sundry components which do interesting things with ubiquitous > media types like text/html can participate in the communication. > Because it's a media type that provides a graphical user interface for humans. It carries a lot of unnecessary baggage and under-delivers as a machine interface. I registered a couple of generic media types (hal+xml & hal+json) intended to serve as 'html for machines', and I'm hoping people will adopt it and also contribute to its continuing development. Cheers, Mike
No offense, sincerely, but I disagree. HTML is a content-type meant to be rendered by a web browser and its semantics are of (almost) no use for any application other than a web browser. Parsing web pages to drive a native iPhone or Android app is not a sensible choice at all. And the choice of html, json or xml is only an issue of resource representation. A resource can have many different representations, that for sure is permitted. There is nothing about renting a room which can't be expressed in English, but I would not send a wav file either. Even more, using mime-type is necessary because we are using HTTP as our protocol, I'm sure we could create a brand new protocol and a different way of defining messages and it could still be RESTful. As for cacheability, the constraint is that resources must be cacheable by the client. The way I interpret this, is that my application should be able to cache a room representation if marked as being cacheable. REST is an architectural style, it does not mandate the use of any protocol or data type, merely how to model a system. Note that I understand the benefit of reusing existing technologies as much as possible, but only when it makes sense. Daniel P.S.: Thanks for the media type correction, in my case, the x-perimental ones would be more appropriate. On Thu, Jul 28, 2011 at 2:36 PM, Eric J. Bowman <eric@...> wrote: > "Daniel Roussel" wrote: > > > > Now, my understanding of it is that what MUST be known a priori are > > the Media Types which will be exchanged along with the possible > > relationship. A particular client would obviously be coded to > > support this/those media types. Just as a browser understands a > > resource of type text/html, image/jpeg, etc, my app would understand > > resources of type application/rent-a-room+xml for example. 
> > > > My usual caveats still apply -- this isn't quite REST, because > intermediaries won't understand the media type (which should be > registered as application/vnd.rent-a-room+xml as your syntax is reserved > for registered, standards-track media types; or, use application/x.rent- > a-room+xml if you don't intend to register your media type). On > the Web, the desired property of scalability is achieved by re-using > ubiquitous media types. There is nothing about renting a room which > can't be expressed as HTML, so why not re-use HTML? That way, all the > vast-and-sundry components which do interesting things with ubiquitous > media types like text/html can participate in the communication. > > When you send unknowns in Content-Type, you limit yourself to caching, > and maybe not even that as many caches are configured to only cache a > handful of ubiquitous media types which cover the bulk of the traffic > they encounter. Why waste cache resources on media types nobody uses? > What do links look like in application/vnd.rent-a-room+xml, and how are > intermediaries supposed to recognize them as links? > > Various other desired properties of REST are impacted as well, like > maintainability -- better to re-use standard libraries most developers > understand, than custom libraries code maintainers have to be trained > on. Most HTTP components out there have a-priori knowledge of <a> in > multiple media types, which is why they can follow links. REST is > about playing to the capabilities of the deployed Web infrastructure, > rather than bucking them. > > -Eric >
On Jul 28, 2011, at 8:25 PM, Subbu Allamaraju wrote: > > On Jul 28, 2011, at 3:23 AM, Berend de Boer wrote: > > > >> There are levels of REST, so I suggest you don't worry too much about > >> HATEOAS. > > > > This is serious mis-information, Berend. > > > > The intention behind using the REST architectural style is to create systems that have a well determined set of properties (certain performance, certain scalability, certain evolvability ...[1]). These properties are *only* induced if *all* of RESTs constraints are applied. There is no 'I'll do half of the constraints and get half of the benefits' approach. > > > > There are no levels of REST. It is an either-or thing. > > Jan - let's not do this to REST. It is a constraint-driven development of software architecture for networked apps. There are many many ways to get certain quality attributes, and making reasonable tradeoffs is an essential part of building that architecture. > Subbu - I am not saying that not doing REST is a bad thing, but REST is (and surely you know that) defined as a set of constraints. And it is exactly these constraints that, as a whole, are REST. Suggesting anything along the lines of 'half-REST' or 'low-REST' or 'levels of REST' just hides the fact that it is critically important to understand the tradeoffs you make when not applying certain constraints. http://www.nordsc.com/ext/classification_of_http_based_apis.html There is no shortcut to thorough understanding of software architectural styles. Jan > Subbu > >
>> There is nothing about renting a room which can't be expressed in English, but I would not send a wav file either. Even more, using a mime-type is necessary because we are using HTTP as our protocol
This is badly worded: a mime-type identifies the payload type, but the payload type must still be a hypermedia one. HTML is a hypermedia format; XML by itself is not, but one can define a specific hypermedia type based on XML or JSON, or even one embedded in a new image format.
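Daniel's point that a hypermedia type can be defined on top of JSON can be sketched with a hypothetical representation. The "_links" shape below is invented for illustration (HAL, mentioned elsewhere in this thread, takes a similar approach):

```python
import json

# Hypothetical JSON hypermedia representation of a room resource.
# The "_links" field and relation names are illustrative only.
body = '''{
  "name": "Room 12",
  "_links": {
    "self": {"href": "/rooms/12"},
    "book": {"href": "/rooms/12/bookings"}
  }
}'''

doc = json.loads(body)
# A hypermedia-aware client follows link relations found in the
# representation rather than constructing URIs from documentation.
links = {rel: link["href"] for rel, link in doc["_links"].items()}
```

Plain application/json carries no such linking semantics; the link conventions would have to be defined by a media type (or out-of-band), which is the crux of this subthread.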
The only significant intermediary processable content is html + ESI... which doesn't even have a media type identifier. It probably should though, I mentioned this on the list a while back and some people (Subbu and mnot, I think?) made a noise to say they were going to work on it. Cheers, Mike On Thu, Jul 28, 2011 at 7:44 PM, Jim Webber <jim@...> wrote: > Are there any intermediaries that really care about specific media types? > > If so, it must be the devil's own work trying to keep them up to date as > the set of standard media types grows, after those intermediaries have been > deployed. > > Jim
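For readers unfamiliar with ESI: a surrogate (a CDN edge or caching proxy) assembles the page from independently cacheable fragments before it reaches the client. A minimal sketch, with an invented fragment URL:

```html
<html>
  <body>
    <h1>Rooms</h1>
    <!-- The edge cache resolves this include itself; the origin
         server never assembles the full page. -->
    <esi:include src="/fragments/room-listing" />
  </body>
</html>
```

This is Mike's point: the intermediary is doing real processing on the content, yet the response is labeled plain text/html with nothing to identify it as ESI-bearing.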
Jim Webber wrote: > > Are there any intermediaries that really care about specific media > types? > I take the view that there's no way to know; therefore, I make it as easy as possible for such intermediaries to evolve by not precluding their use. Did anyone anticipate Google Desktop Accelerator, which does DNS lookahead on various media types? No, but everyone benefits who uses ubiquitous types whose linking semantics are well-known. Also, antivirus gateways may filter on media types, e.g. images. > > If so, it must be the devil's own work trying to keep them up to date > as the set of standard media types grows, after those intermediaries > have been deployed. > That's just it. Many caches out there only care about a relative handful of media types which have been around forever. Media type proliferation is a problem, whereas the orderly evolution of new, standardized types is not, because it gives intermediary vendors a fighting chance to keep up. -Eric
On Jul 28, 2011, at 12:05 PM, Jan Algermissen wrote: > Subbu - I am not saying that not doing REST is a bad thing, but REST is (and surely you know that) defined as a set of constraints. And it is exactly these constraints that, as a whole, are REST. Suggesting anything along the lines of 'half-REST' or 'low-REST' or 'levels of REST' just hides the fact that it is critically important to understand the tradeoffs you make when not applying certain constraints. > > http://www.nordsc.com/ext/classification_of_http_based_apis.html > > There is no shortcut to thorough understanding of software architectural styles. I don't necessarily buy such a classification for several reasons. It may be good as a learning aid, but no more. In fact, meeting certain qualities takes a lot more in practice than simply following the script. That aside, I don't find the assertion that REST "is an either-or thing" useful. Subbu
Mike Kelly wrote: > > The only significant intermediary processable content is html + > ESI... which doesn't even have a media type identifier. > Ummm, images? Many ISP-targeted Web accelerator products will muck about with ubiquitous image types, i.e. will compare the displayed size with the raw size (which they can do via their a-priori knowledge of HTML and CSS media types), and shrink images accordingly. Stylesheets are also highly cacheable, text/css is cached by everything which caches text/html. -Eric
Mike Kelly wrote: > > Because it's a media type that provides a graphical user interface for > humans. It carries a lot of unnecessary baggage and under-delivers as > a machine interface. > XForms is easily machine-driven, provided the machine user can be made to understand the markup, which is possible via annotation with RDFa or microdata; HTML forms have been manipulated by machines since forever. > > I registered a couple of generic media types (hal+xml & hal+json) > intended to serve as 'html for machines', and I'm hoping people will > adopt it and also contribute to it's continuing development. > I didn't realize those were standards-track, thought they were vnd.? Regardless, the more they're adopted, the more RESTful they become; uptake provides the incentive for intermediary developers to target them with unique behaviors to increase their scalability. -Eric
Subbu Allamaraju wrote: > > I don't necessarily buy such a classification for several reasons. It > may be good as a learning aid, but no more. In fact, meeting certain > qualities takes a lot more in practice than simply following the > script. That aside, I don't find the assertion that "is an either-or > thing" useful. > Disagree. I see REST as the long-term goal of an evolving system, an ideal against which to measure development. If REST mismatches are ignored, what yardstick is the system's development to be measured by? So I see REST as black-and-white, but with no implied criticism towards systems which don't measure up. There are aspects of my own system which aren't RESTful: in some cases they're considered bugs, while in others I simply don't care because being strictly RESTful brings nothing to the table -- but my understanding of the mismatches is critical to my knowing the difference and being able to categorize/prioritize. I don't see what is to be gained by pretending those mismatches don't exist. -Eric
On Jul 28, 2011, at 12:45 PM, Eric J. Bowman wrote: > Disagree. Fair enough. > I see REST as the long-term goal of an evolving system, an > ideal against which to measure development. If REST mismatches are > ignored, what yardstick is the system's development to be measured by? > So I see REST as black-and-white, but with no implied criticism towards > systems which don't measure up. There are aspects of my own system > which aren't RESTful: in some cases they're considered bugs, while in > others I simply don't care because being strictly RESTful brings nothing > to the table -- but my understanding of the mismatches is critical to my > knowing the difference and being able to categorize/prioritize. I don't > see what is to be gained by pretending those mismatches don't exist. There is no pretension. There are folks who are blindly looking at these classifications trying to comply without understanding why they should or should not care. Then there are hecklers picking the same classifications and asking to show something RESTful in real world. This is the reality today, and uncompromising guidance is not what is needed. Subbu
On Thu, Jul 28, 2011 at 8:34 PM, Eric J. Bowman <eric@...>wrote: > Mike Kelly wrote: > > > > Because it's a media type that provides a graphical user interface for > > humans. It carries a lot of unnecessary baggage and under-delivers as > > a machine interface. > > > > XForms is easily machine-driven, provided the machine user can be made > to understand the markup, which is possible via annotation with RDFa or > microdata; HTML forms have been manipulated by machines since forever. > Right, that is a lot of baggage just to cover the basic hypertext requirements of most machine applications. Generally, people want to make adoption of their service as painless as possible for their consumers. > > > > I registered a couple of generic media types (hal+xml & hal+json) > > intended to serve as 'html for machines', and I'm hoping people will > > adopt it and also contribute to it's continuing development. > > > > the more they're adopted, the more RESTful they become; > uptake provides the incentive for intermediary developers to target > them with unique behaviors to increase their scalability. > > I had considered adding a parameter for the hal media type identifiers for indicating edge-processability of the content. I imagine intermediary processing would be based off of hal's link/embed interface but actually focused on the link relations (standardised and/or URI identified). Sound worthwhile? Cheers, Mike
Daniel Roussel wrote: > > No offense, sincerely, but I disagree. HTML is a content-type meant > to be rendered by a web browser and its semantics are of (almost) no > use for any application other than a web browser. > I've found the semantics of HTML quite useful in a variety of contexts; for example, <ul> and <ol> are universally understood methods of presenting lists of 'stuff' with specific semantics (whether or not their order is important); <table> is quite useful for serializing arrays. So I can model many custom data types using HTML media types, and have it understood perfectly well by non-browser consumers using standard libraries commonly found all over the Web. > > Parsing web pages to drive a native iPhone or Android app is not a > sensible choice at all. > OTOH, what I see as not sensible is for these native apps to re-invent various wheels like buttons, which could just as easily have been implemented using standard markup. As I am unfamiliar with such apps, I don't know whether they're REST or anti-REST, though. > > And the choice of html, json or xml is only an issue of resource > representation. > Not really. I know how to make hyperlinks universally understood in HTML media types, as every component out there groks the semantics of <a> and <link/>, but I don't know of any universally understood linking semantics for JSON; XML supports XInclude and XLink but not <a> or <link/>, so I choose a media type with semantics which match my needs. > > A resource can have many different representations; that, for sure, is > permitted. There is nothing about renting a room which can't be > expressed in English, but I would not send a wav file either. > For exactly the same reasons that sending it as JSON makes no sense -- lack of semantics in the representation. REST's hypertext constraint is all about those semantics -- .wav files have no hypertext semantics, which is why they're a poor choice for driving application state.
I'm not saying JSON has no place in REST, only that at the present time, it's a poor choice for driving application state. > > Even more, using mime-type is necessary because we are using HTTP as > our protocol, I'm sure we could create a brand new protocol and a > different way of defining messages and it could still be RESTful. > Actually, the use of Internet Media Types is required by REST regardless of protocol; this doesn't mean the syntax of the identifier has to be the same as it appears in the IANA registry, but HTML should still be HTML regardless of what protocol it's served with. > > As for cacheability, the constraint is that resource must be > cacheable by the client. > REST requires the cacheability of representations to be explicitly stated, and says nothing about client-cacheable=REST. The difference is between having a library-based API (where intermediaries cannot participate in the communication) and a network-based API (where intermediaries can and do participate), with the latter being the whole point of REST as a style; if no intermediaries grok the representation and most won't cache it since the media type is unknown, the essence of REST is missing. > > The way I interpret this, is that my application should be able to > cache a room representation if marked as being cacheable. REST is an > architectural style, it does not mandate the use of any protocol or > data type, merely how to model a system. > As a network-based API, meaning intermediaries must also be able to participate, which they can't do if they're unfamiliar with the semantics of the representation. Shoe-horning things into HTML, Atom etc. brings about benefits which offset the awkwardness of such implementations; such benefits do not accrue when using library-based APIs. -Eric
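Eric's point above about <table> serializing arrays is easy to demonstrate: any stock HTML parser can recover the data with no custom vocabulary. A sketch using Python's standard library (the room data is invented):

```python
from html.parser import HTMLParser

# Recover a two-dimensional array from standard <table> markup.
class TableReader(HTMLParser):
    def __init__(self):
        super().__init__()
        self.rows = []
        self._cell = None  # text of the <td> currently open, if any

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.rows.append([])
        elif tag == "td":
            self._cell = ""

    def handle_data(self, data):
        if self._cell is not None:
            self._cell += data

    def handle_endtag(self, tag):
        if tag == "td":
            self.rows[-1].append(self._cell)
            self._cell = None

reader = TableReader()
reader.feed("<table><tr><td>101</td><td>vacant</td></tr>"
            "<tr><td>102</td><td>booked</td></tr></table>")
```

The same parser works for any producer's table, which is the scalability argument for a ubiquitous vocabulary over a one-off XML schema.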
Mike Kelly wrote: > > Right, that is a lot of baggage just to cover the basic hypertext > requirements of most machine applications. Generally, people want to > make adoption of their service as painless as possible for their > consumers. > While overlooking the fact that REST considers intermediaries to be consumers, too, not just user-agents. I agree, more work needs to be done on machine-user-targeted media types, but at the present time these are not ubiquitous enough to gain the full benefits of REST. Again, everything comes down to trade-offs; more baggage is A-OK if it leads to greater scalability, whereas if that benefit of REST isn't as important to those using the system, then non-ubiquitous media types may be OK. I see REST as a tool to understand such trade-offs in system design. -Eric
Daniel Roussel wrote: > > This is badly worded: a mime-type identifies the payload type, but the > payload type must still be a hypermedia one. HTML is a hypermedia format; > XML by itself is not, but one can define a specific hypermedia type > based on XML or JSON, or even one embedded in a new image format. > Pretty much anything can be a hypertext type: consider a .wav served with Link headers. XML is also hypertext, provided the linking is XInclude or XPointer, or XML PIs. Generally, though, raw XML semantics are insufficient for driving application state, which is why application/xml is avoided in REST. -Eric
Daniel Roussel wrote: > > Parsing web pages to drive a native iPhone or Android app is not a > sensible choice at all. > I have a sneaking hunch that a framework could be created for either, which could parse existing web pages (or widgets) and create nifty apps out of them, with a completely different user-experience than they have in a browser. This sort of device-resident transformation layer is in keeping with REST. Then we'll all code to that framework instead of native app coding. I don't know, I haven't kept up with mobile tech for years, is this possible? -Eric
Subbu Allamaraju wrote: > > Eric J. Bowman wrote: > > > Disagree. > > Fair enough. > > > I see REST as the long-term goal of an evolving system, an > > ideal against which to measure development. If REST mismatches are > > ignored, what yardstick is the system's development to be measured > > by? So I see REST as black-and-white, but with no implied criticism > > towards systems which don't measure up. There are aspects of my > > own system which aren't RESTful: in some cases they're considered > > bugs, while in others I simply don't care because being strictly > > RESTful brings nothing to the table -- but my understanding of the > > mismatches is critical to my knowing the difference and being able > > to categorize/prioritize. I don't see what is to be gained by > > pretending those mismatches don't exist. > > There is no pretension. There are folks who are blindly looking at > these classifications trying to comply without understanding why they > should or should not care. Then there are hecklers picking the same > classifications and asking to show something RESTful in real world. > This is the reality today, and uncompromising guidance is not what is > needed. > REST's uniform interface constraints aren't meant as classification tools. My take on REST comes more from Chapter 6, e.g. "REST... capture[s] all of those aspects of a distributed hypermedia system that are considered central to the behavioral and performance requirements of the Web, such that optimizing behavior within the model will result in optimum behavior within the deployed Web architecture." The hypertext constraint isn't more or less important than, say, self-descriptive messaging; those four constraints taken *together* are what distinguishes a network-based API from a library-based API (6.5.1).
This is the paradigm shift HTTP unwittingly stumbled upon (which Roy's thesis is about): that instead of distributing objects, generic object interfaces may be distributed with tremendous benefits (Chapter 2.3). Roy puts it best in 6.5.2: " What makes HTTP significantly different from RPC is that the requests are directed to resources using a generic interface with standard semantics that can be interpreted by intermediaries almost as well as by the machines that originate services. The result is an application that allows for layers of transformation and indirection that are independent of the information origin, which is very useful for an Internet-scale, multi-organization, anarchically scalable information system. RPC mechanisms, in contrast, are defined in terms of language APIs, not network-based applications. " Roy's elaboration on that statement is the four uniform (generic) interface constraints. How is that possible without hypertext driving application state? If you leave that out, you don't have a network-based application. The same can be said for all the uniform interface constraints; without them other constraints, like caching, cannot bring about the desired properties in Chapter 2. Grasping them as a set leads to the inevitable "I have seen the light" moment (which inspired Roy to write 'em down), and charges of shamanism, pedantry or of being uncompromising. Which is my long-standing criticism of REST -- this makes it hard to teach to the point it diminishes the value of REST. But I contend that it's no more valuable to water REST down to include library-based APIs, which as far as I know (as nobody has convincingly falsified Roy's thesis) are all that's possible without any of the uniform interface constraints, even if the results are cacheable. It isn't just missing a constraint, it's missing the point. -Eric
On Jul 28, 2011, at 4:16 PM, Eric J. Bowman wrote: > REST's uniform interface constraints aren't meant as classification > tools. That was in reference to a link that Jan posted in this thread that I was responding to. Subbu
Subbu Allamaraju wrote: > > > REST's uniform interface constraints aren't meant as classification > > tools. > > That was in reference to a link that Jan posted in this thread that I > was responding to. > Right, that's also what I was responding to, I was being politic by not re-mentioning my criticism of that link though... oops. :-) -Eric
Well, everything is possible, really, but this would essentially be reinventing the browser. Performance-wise, it would probably be dog-slow without reinvesting the years of effort that went into current browser implementations. Also, most clients wanting native apps do so because they feel that a web app would feel alien or be less visible. I can tell that we have made applications in the past which could have been better served by a web app (or as well as); but don't forget the marketing aspect of being "on the app store". The nature of the beast is that clients as much as users prefer native applications to web apps in most cases. --- In rest-discuss@yahoogroups.com, "Eric J. Bowman" <eric@...> wrote: > > Daniel Roussel wrote: > > > > Parsing web pages to drive a native iPhone or Android app is not a > > sensible choice at all. > > > > I have a sneaking hunch that a framework could be created for either, > which could parse existing web pages (or widgets) and create nifty apps > out of them, with a completely different user-experience than they have > in a browser. This sort of device-resident transformation layer is in > keeping with REST. Then we'll all code to that framework instead of > native app coding. I don't know, I haven't kept up with mobile tech > for years, is this possible? > > -Eric >
On Thu, Jul 28, 2011 at 2:59 PM, Daniel Roussel <daniel@...> wrote: > > No offense, sincerely, but I disagree. HTML is a content-type meant to > be rendered by a web browser and its semantics are of (almost) no use for > any application other than a web browser. Parsing web pages to drive a > native iPhone or Android app is not a sensible choice at all. > I find this statement odd. Parsing <html><body><ul><li>content</li></ul></body></html> isn't any more overkill than <some_xml><a_list><el>content</el></a_list></some_xml>. The real difference is that everyone knows what a <ul/> element means. -- David blog: http://www.traceback.org twitter: http://twitter.com/dstanek www: http://dstanek.com
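David's comparison can be made concrete: both documents parse with the same one-liner, and only the element names differ. A sketch using Python's standard XML parser:

```python
import xml.etree.ElementTree as ET

html_doc = "<html><body><ul><li>content</li></ul></body></html>"
custom_doc = "<some_xml><a_list><el>content</el></a_list></some_xml>"

# Identical parsing effort either way; the difference is that every
# consumer already knows what <ul>/<li> mean, and nobody knows <a_list>.
from_html = [li.text for li in ET.fromstring(html_doc).iter("li")]
from_custom = [el.text for el in ET.fromstring(custom_doc).iter("el")]
```

(Real-world HTML would need a lenient parser rather than strict XML parsing, but the point about shared vocabulary stands.)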
On Thu, Jul 28, 2011 at 5:26 PM, Daniel Roussel <daniel@...> wrote: > > > > Well, everything is possible, really, but this would essentially > be reinventing the browser. Performance-wise, it would probably be > dog-slow without reinvesting the years of effort that went into > current browser implementations. Also, most clients wanting native > apps do so because they feel that a web app would feel alien or be > less visible. I can tell that we have made applications in the past > which could have been better served by a web app (or as well as); > but don't forget the marketing aspect of being "on the app store". > The nature of the beast is that clients as much as users prefer > native applications to web apps in most cases. Dan, you're missing the point. You're conflating HTML as a payload media type and HTML as a page presentation and markup language. HTML can easily be a "dual use" technology. Consider this vCard example: <html> <div class="vcard"> <a class="fn org url" href="http://www.commerce.net/">CommerceNet</a> <div class="adr"> <span class="type">Work</span>: <div class="street-address">169 University Avenue</div> <span class="locality">Palo Alto</span>, <abbr class="region" title="California">CA</abbr> <span class="postal-code">94301</span> <div class="country-name">USA</div> </div> <div class="tel"> <span class="type">Work</span> +1-650-289-4040 </div> <div class="tel"> <span class="type">Fax</span> +1-650-289-4041 </div> <div>Email: <span class="email">info@...</span> </div> </div> </html> That's HTML with a vCard microformat. The difference between that and an equivalent specific XML format for representation is, really, hair splitting. Even the size is likely not dramatically different. Would an XML representation be "tidier"? Sure. But this is HTML, it's also XML, and it has 101 uses and 101 more parsers, viewers, processors, libraries and languages to work with it. Why? Because it's HTML. HTML is everywhere.
Everyone knows it, and it's immediately recognizable. Is it immediately recognizable as a vCard? Yea, maybe, if you look at the details of it. To a system that knows how to process vCard, it's pretty clearly a vCard. To Google? Who may care less about vCard? eh...not really important. Processing of this payload is no more dramatic than parsing a specific XML payload. It may have a few more XML elements than a hand-tuned one, but that's hardly what's going to crush your performance. Now if you're passing around lots of chrome, and pretty navigation and CSS headers, and other crap, sure the payload's going to bloat up. Who says you have to do that? People also confuse the semantics of the media type with the semantics of the application. The media type is not the application, it's simply a representation. You'll notice that the HTML standard doesn't mention anything about search, shopping carts, formula conversion, or photo storage. All applications that can be implemented using REST and HTML. The application is the application. And it has to be documented, folks and processes need to be trained or coded to that application, etc. etc. Media types are not documentation. Regards, Will Hartung (willh@...)
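To Will's point, extracting the vCard data from that HTML is unremarkable machine work. A sketch using Python's standard library against a trimmed version of his example (only the formatted name is extracted here):

```python
from html.parser import HTMLParser

# Pull the hCard formatted name (class "fn") and its URL out of HTML.
class HCardReader(HTMLParser):
    def __init__(self):
        super().__init__()
        self.fn = None
        self.url = None
        self._in_fn = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "fn" in a.get("class", "").split():
            self._in_fn = True
            self.url = a.get("href")

    def handle_data(self, data):
        if self._in_fn:
            self.fn = data
            self._in_fn = False

reader = HCardReader()
reader.feed('<div class="vcard">'
            '<a class="fn org url" href="http://www.commerce.net/">CommerceNet</a>'
            '</div>')
```

A system that doesn't know hCard still sees valid HTML; one that does gets the structured data. That is the "dual use" being described.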
Will Hartung wrote: > > That's HTML with a vCard microformat. The difference between that and > an equivalent specific XML format for representation is, really, hair > splitting. Even the size is likely not dramatically different. > There's one difference that isn't splitting hairs -- incremental rendering / user-perceived performance. Or, if we're talking about machine users as opposed to humans, stream processing -- can the application start doing useful work before the document has finished transferring? As Roy says in Chapter 2.3.1, "[S]oftware cannot avoid the basic cost of achieving the application needs; e.g., if the application requires that data be located on system A and processed on system B, then the software cannot avoid moving that data from A to B." Being stream-processable compensates for this basic cost, which is why I still prefer XHTML as application/xhtml+xml -- see the very end of Chapter 6.5.4.2, this approach avoids the lazy HTML parsing problem. All too often, I shudder when reviewing media types, as no thought has been given to incremental rendering. Designing markup languages is easy; designing them to drive application state while allowing stream processing is hard. -Eric
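Eric's stream-processing point can be illustrated with an incremental parser: the client handles each element as soon as its close tag arrives, rather than waiting for the whole document. A sketch using Python's standard library (the XHTML content is invented):

```python
import io
import xml.etree.ElementTree as ET

xhtml = b"<html><body><ul><li>room 101</li><li>room 102</li></ul></body></html>"

rooms = []
# iterparse yields each element when its end tag is seen, so a client
# reading from a socket could start useful work mid-transfer.
for event, elem in ET.iterparse(io.BytesIO(xhtml), events=("end",)):
    if elem.tag == "li":
        rooms.append(elem.text)
        elem.clear()  # release content already processed
```

This works because well-formed XHTML can be parsed incrementally; tag-soup HTML generally forces lazier, buffer-more parsing, which is the trade-off Eric is pointing at.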
Daniel,
Skipping back to the top of this thread, let me see if I can provide a few
examples which might help shed some light.
Here are 3 ways you might use self-descriptive messages in your API
1) Create many domain specific media types (one for each view)
Content-Type: application/rent-a-room+xml
2) Create one domain specific media type
Content-Type: application/vnd.hotels.com+xml
3) Create zero domain specific media types
Content-Type: application/json
Link: </schema/rent-a-room>; rel="describedBy"
All three of these approaches could be seen as satisfying the
self-descriptive messages constraint.
If you create many DSMs, your application might bind the media type to the
view class via some sort of client-side configuration.
"application/rent-a-room+xml" => RentARoomView
If you create one DSM, your media type might define the semantics by which
a representation describes itself, details which could then be used when
rendering the representation in a GUI.
{"_type": "rent-a-room", ... }
... which you might then bind to a view ...
"rent-a-room" => RentARoomView
If you create zero DSMs, your application might bind the value of the
describedBy link header to a view in the gui.
"/schema/rent-a-room" => RentARoomView
--
An alternative approach would be to create one DSM with richer semantics
which would effectively allow you to compose the interface from the server
side using code-on-demand and/or more granular views
<link rel="stylesheet" src="/css/screen.css" />
{"_links":[
{"rel":"view","type":"text/javascript","href":"/views/RentARoomView.js"}
{"rel":"commentable","type":"text/javascript","href":"/attributes/commentable.js"}
]}
This Code-on-demand approach would take greatest advantage of the
constraints of REST to create a highly evolvable service by never binding
anything directly to a view class within the application. Instead, your
application would become a user agent, parsing representations and fetching
additional computational resources as necessary to render the view.
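A small sketch of the user-agent side of that idea (the "_links" document is the one above, with commas added so it parses; the helper name is mine): instead of binding a view class at build time, the client extracts the link targets and would fetch them as needed:

```python
import json

# The "_links" representation from the example above.
representation = json.loads("""
{"_links": [
  {"rel": "view", "type": "text/javascript", "href": "/views/RentARoomView.js"},
  {"rel": "commentable", "type": "text/javascript", "href": "/attributes/commentable.js"}
]}
""")

def links_by_rel(doc, rel):
    """Return the hrefs of every link carrying the given relation."""
    return [link["href"] for link in doc.get("_links", []) if link["rel"] == rel]

# A code-on-demand user agent would now GET these URIs and evaluate the
# returned code to compose its interface, rather than shipping the view
# logic inside the client.
print(links_by_rel(representation, "view"))
```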
Code-on-demand may be significantly less feasible if your client is written
in Objective-C, but perhaps it's something to think about. The embedded links
might not be javascript or CSS, but perhaps some other language used for GUI
composition, such as XUL or a simple DSL.
Finally, I'm sure it goes without saying that whatever way you wind up
rendering a representation for a view, the UI would contain links which you
would click to navigate to new screens which are built using the data and
metadata from the representation of the resource identified by the link.
And there you have a few takes on creating an engine of application state
with self-descriptive messages and code-on-demand.
- Kev
c: +001 (650) 521-7791
On Wed, Jul 27, 2011 at 8:35 AM, Daniel Roussel <daniel@...> wrote:
> **
>
>
> Hi,
>
> I've been reading a lot about how to do "proper" REST this week and the
> more I read, the more I'm lost, especially the HATEOAS part I fear.
>
> First, to give some context, the company I work for develops mobile
> applications for clients. Most of the time, they want to get an iPhone
> native application, an Android application and a traditional Web based
> Application to cover the other mobile phones out there.
>
> The way we are currently doing things is the good old (bad?) RPC over HTTP
> way. We define a bunch of URI which are coded inside the different apps, we
> exchange data as JSON, etc. This week, trying to do things in a better way,
> I've begun a more serious study of REST and how to do it properly.
>
> What I really can't wrap my head around is how, technically, to have HATEOAS
> in a native application? I mean, when building a native application, I have
> tables to display lists, buttons to do some things, etc. My understanding is
> that all those should be displayed based on the data (hypermedia) received
> from the server. Is that right?
>
> A concrete example would be a hotel room rental service. The person would
> open the application and have fields to enter the from/to dates. They would
> then tap a "Get Available Rooms" button. The app would call the server and get back
> a list of rooms along with prices and other details. From there the person
> could select one room and rent it.
>
> The RPC way of coding this is obvious to me but I have no idea how I'd do
> that in a proper REST way! What bugs me is that every way I look at it, the
> client application would still be tightly coupled to the service. I
> understand how I would only need to GET the http://rent-a-room.com URI
> hardcoded and then in the response I would have the
> http://rent-a-room.com/available-rooms URI given. But... My application
> would expect each "call" to return some pre-defined data and "rel", those
> can't appear out of the blue?!
>
> I guess what I'm trying to say is that both the business process and the
> data exchanged must be known to my client application at the moment of
> coding it, and those can't change without breaking existing clients. But
> reading about REST, everyone is talking about loose coupling and not breaking
> clients... I just don't see it.
>
> What am I missing?
>
> Thanks a lot and sorry if it is a stupid question!
>
>
>
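One way to see the answer to Daniel's question in code: the client still knows the rel names and field names at coding time -- that is the shared contract -- but it discovers URIs at run time by following links from the single hardcoded entry point. A minimal sketch with an in-memory stand-in for HTTP (all URIs, documents and fields are invented):

```python
# The only things baked into this client: the entry-point URI and the
# link relations it understands. Every other URI is discovered.
ENTRY_POINT = "http://rent-a-room.com/"

# In-memory stand-in for HTTP GET.
FAKE_WEB = {
    "http://rent-a-room.com/": {
        "links": {"available-rooms": "http://rent-a-room.com/available-rooms"},
    },
    "http://rent-a-room.com/available-rooms": {
        "rooms": [{"id": 1, "price": 80}],
        "links": {},
    },
}

def get(uri):
    return FAKE_WEB[uri]

def follow(doc, rel):
    """Advance application state by following a link relation."""
    return get(doc["links"][rel])

home = get(ENTRY_POINT)
rooms = follow(home, "available-rooms")
print(rooms["rooms"])
```

The server can move /available-rooms anywhere it likes without breaking this client; only renaming the rel or changing the data contract would.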
I meant to ask this question on the discussion list. A combination of constraints and principles makes up the REST architectural style, targeted IMHO at solutions that fit a particular problem space. However, as has been mentioned, you get value from using any of the constraints and principles. So you need to ask yourself if your system exists within the problem space or if you are just forcing your system to conform to the architectural style. For example, why do you need hypermedia?

On 08/01/2011 08:27 AM, Daniel Roussel wrote:
> Hi,
>
> Well, first of all, our team is physically dispersed, some of them
> being in Montreal, some being in Toronto. By using a REST
> architecture more and more, we found that it is easier to have the two
> teams design the documents to be exchanged and not worry about
> API/method calls being properly documented. The less coupling there
> is between the client app and the server app, the less coupling there
> is between the client app developers and the server developers. This
> helps a lot because if a server dev needs to completely rewrite one
> part of the app, it has no impact on the client and so, no need to
> contact the client-side team to discuss impacts, new method calls, etc.

Could this just be from leveraging HTTP properly?

> Another point is that HTTP and JSON are well supported on most mobile
> devices and pretty lightweight. Statelessness is also a design
> constraint which makes sense on a phone: you can't assume that a user
> will perform a transaction all in one go. Being stateless, you can
> have an application maintain its own state, right on the device, and
> a "transaction" can thus span many hours if the user's network
> connection is intermittent.

Right, another architectural constraint.

> Also, what I studied and discovered last week is that by using proper
> hypermedia, we will be able to decouple the client and the server even
> more.
> We plan on having one single landing page for all our
> applications and from there, the client will be able to navigate and
> find its own server. This will simplify things so much! No more need
> to call Toronto to ask where they have deployed the latest server for
> this or that app. They will update the URI on our landing page and
> the client apps will catch up by themselves.

What is the client in your case? Is this a machine?

> I could go on and on but in the end, the more we respect the REST
> constraints, the simpler and more reliable things seem to be.

True, but at a definite cost, which is why I asked my question. Thanks for sharing.

Eb

--
blog: http://eikonne.wordpress.com
twitter: http://twitter.com/eikonne
On Aug 1, 2011, at 3:07 PM, Eb wrote:

> I meant to ask this question on the discussion list. A combination of constraints and principles make up the REST architectural style targeted IMHO at solutions that fit a particular problem space.

Yes. REST specifically addresses the problem space of networked, decentralized systems, where control over remote peers cannot be assumed.

> However, as has been mentioned, you get value from using any of the constraints and principles.

Yes, you get some value in specific scenarios, but you also suffer from the trade-offs REST deliberately involves. You should understand whether what you do is a good solution for your problem space (which is at least 50% of the value of Roy's work).

> So you need to ask yourself if your system exists within the problem space or if you just are forcing your system to conform to the architectural style. For example, why do you need hypermedia?

You need hypermedia to keep clients from making assumptions about the concrete state machines of the applications they progress through. Because, in a decentralized problem space, it is impossible to guarantee that such assumptions will hold true.

Jan
Hello Jan:

> On Aug 1, 2011, at 3:07 PM, Eb wrote:
>
> > I meant to ask this question on the discussion list. A combination of constraints and principles make up the REST architectural style targeted IMHO at solutions that fit a particular problem space.
>
> Yes. REST specifically addresses the problem space of networked, decentralized systems, where control over remote peers cannot be assumed.
>
> > However, as has been mentioned, you get value from using any of the constraints and principles.
>
> Yes, you get some value in specific scenarios, but you also suffer from the trade-offs REST deliberately involves. You should understand whether what you do is a good solution for your problem space (which is at least 50% of the value of Roy's work).

I concur.

> > So you need to ask yourself if your system exists within the problem space or if you just are forcing your system to conform to the architectural style. For example, why do you need hypermedia?
>
> You need hypermedia to avoid that clients make assumptions about the concrete state machines of the applications they progress through. Because, in a decentralized problem space, it is impossible to guarantee that such assumptions will hold true.

This was somewhat of a rhetorical question. Thanks.

> Jan
On Aug 2, 2011, at 12:05 AM, Eb wrote: > This was somewhat of a rhetorical question. Ah, sorry. I thought so, but then wasn't 100% sure so I answered it anyhow :-) Jan
In some posts here it is mentioned that one can use simple HTML forms to provide a decoupled POST/PUT mechanism (and thus also avoid communicating URI templates). I was wondering if there is a specific media type for that? Yes, sure, one can just use text/html. But to support better visibility, I could imagine a media type text/form+html that would better communicate the intent... Am I missing something? Is there related work?
It already exists, if I understand your question, except it is application/x-www-form-urlencoded.
--- In rest-discuss@yahoogroups.com, Jason Erickson <jason@...> wrote:
>
> It already exists, if I understand your question, except it is application/x-www-form-urlencoded.
>

This is for submitting concrete data to a server. What I meant is the step before that: telling the client dynamically how to create or update a resource. This is what an HTML form usually does for a web browser...
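A sketch of that "step before": a client deriving the method, target URI, and field names from a form in a text/html representation instead of hardcoding them (the form below is invented for the rent-a-room example elsewhere in this thread):

```python
from html.parser import HTMLParser

class FormParser(HTMLParser):
    """Collects the first <form> and its named <input> fields."""
    def __init__(self):
        super().__init__()
        self.form = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form" and self.form is None:
            self.form = {
                "method": attrs.get("method", "get").upper(),
                "action": attrs.get("action", ""),
                "fields": [],
            }
        elif tag == "input" and self.form is not None and "name" in attrs:
            self.form["fields"].append(attrs["name"])

# A hypothetical representation served by the rent-a-room service.
html_doc = """
<form method="post" action="/available-rooms">
  <input name="from" type="date">
  <input name="to" type="date">
</form>
"""

parser = FormParser()
parser.feed(html_doc)
# The client now knows to POST the fields "from" and "to" to
# /available-rooms without that knowledge being baked in at build time.
print(parser.form)
```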
On Wed, Aug 3, 2011 at 9:56 PM, Jakob Strauch <jakob.strauch@...> wrote: > But to support better visibility, i could imagine a media type > text/form+html, that would better communicate the intend... > What could an intermediary do with this greater visibility?
> > It already exists, if I understand your question, except it is application/x-www-form-urlencoded.
>
> This is for submitting concrete data to a server. What I meant is the step before that: telling the client dynamically how to create or update a resource. This is what an HTML form usually does for a web browser...

So you want a media type that says, "I am valid HTML but I only contain form elements"? Why do you need that when you can use text/html to return HTML that only contains form elements, which the client could see for itself? For it to be useful, there must exist clients that Accept text/form+html but not text/html, which seems odd to me. (If it can accept text/html, then it would be able to recognize form stuff, too.) I think I must be fundamentally misunderstanding the problem you are trying to address.
> For it to be useful, there must exist clients that Accept text/form+html but not text/html, which seems odd to me. (If it can accept text/html, then it would be able to recognize form stuff, too.)

Maybe I misunderstood something, but text/form+html implies that it is a valid HTML document, too, like atom+xml is a valid XML file. We use atom+xml for feeds and not plain xml. The intention of svg+xml is also clear.

I think the benefit lies on both sides of the network, where I can easily map my media types to my internal representations (e.g. objects).

Does the "+" symbol have a defined semantic, or is it just a convention?
--- In rest-discuss@yahoogroups.com, Mike Kelly <mike@...> wrote:
>
> On Wed, Aug 3, 2011 at 9:56 PM, Jakob Strauch <jakob.strauch@...> wrote:
>
> > But to support better visibility, i could imagine a media type
> > text/form+html, that would better communicate the intend...
>
> What could an intermediary do with this greater visibility?

I don't know. What does an intermediary do with "atom+xml" instead of "xml"...?

By the way: what are intermediaries actually doing except caching and load balancing? I thought the media type is primarily used for client/server interaction and only secondarily by intermediaries...
On Thu, Aug 4, 2011 at 11:00 AM, Jakob Strauch <jakob.strauch@...> wrote:

> I don't know. What does an intermediary do with "atom+xml" instead of
> "xml"...?
>
> By the way: what are intermediaries actually doing except caching and load
> balancing?
>
> I thought the media type is primarily used for client/server interaction and
> only secondarily by intermediaries...

Intermediaries ARE clients, just not YOUR client. Who knows what they're doing. Your server can't tell the difference.

The point being made, one that Eric brings up often, is that when you use media types that are intermediary friendly, you likely get better value in the long term. For example, an intermediary cache/proxy may well simply ignore and pass through media types that it doesn't know, and give no value whatsoever (such as caching). So, when working with the vast, wild Internet at large, it's better to use media types that the Internet at large is familiar with.

Regards,

Will Hartung
(willh@...)
> Intermediaries ARE clients, just not YOUR client. Who knows what they're
> doing. Your server can't tell the difference.

I know, but what if I care less about intermediaries? (Think about securing representations, e.g. with HTTPS; that will eliminate intermediate caching anyway.)

I think there is much more (business) value in concentrating on things like hypermedia, integration and interoperability (serendipitous use).
On Thu, Aug 4, 2011 at 7:41 PM, Jakob Strauch <jakob.strauch@...> wrote: > > > Intermediaries ARE clients, just not YOUR client. Who knows what they're > > doing. Your server can't tell the difference. > > I know, but what if I care less about intermediaries (think about securing > representations - e.g. by https - it will eliminate intermediate caching > efforts). > > I think there is much more (business) value in concentrating on things like > hypermedia, integration and interoperability (serendipitous use). > How would creating an additional media type for html forms help with any of that? Cheers, Mike > >
Jakob Strauch wrote:
>> For it to be useful, there must exist clients that Accept text/form+html but not text/html, which seems odd to me. (If it can accept text/html, then it would be able to recognize form stuff, too.)
>
> Maybe i misunderstood something, but text/form+html implies, that is a valid html document, too. Like atom+xml is a valid xml file. We use atom+xml for feeds and not plain xml. The intention of svg+xml is also clear.
>
> I think the benefit lies on both sides of the network, where i can easly map my media types with my internal representations (e.g. objects).
>
> Has the "+" symbol a defined semantic or is it just an convention?

The +xml suffix is standardized under RFC 3023 and signals that a media type can be processed by general XML tooling. Other suffixes have not been standardized, though, and are more of a convention; however, earlier in the year, discussion over standardizing +json was taking place between Ned Freed and Larry Masinter, IIRC.

Best,

Nathan
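For illustration, a small sketch of the fallback behaviour the suffix convention enables (the handler names are invented; +json was in fact standardized later, in RFC 6839): a processor that doesn't recognize a specific media type can still fall back to generic handling based on the structured-syntax suffix:

```python
def generic_handler_for(media_type):
    """Pick a generic processor for an unrecognized media type based on
    its structured-syntax suffix, per the +xml convention of RFC 3023."""
    subtype = media_type.split("/", 1)[1]
    if "+" in subtype:
        suffix = subtype.rsplit("+", 1)[1]
        return {"xml": "generic-xml", "json": "generic-json"}.get(suffix)
    return None

print(generic_handler_for("application/atom+xml"))
print(generic_handler_for("application/octet-stream"))
```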
I am working on an HTTP interface to a persistent store that is backed by a slightly tweaked version of Project Voldemort. The problem I am faced with is how to expose to the caller a way to invalidate specific revision IDs without having to rely on out-of-band information. Currently, there is an option on the table to use only ETags to express this, but I'm not convinced that this is the right approach. I am no expert on ETags, so I thought I'd bring the question up here.

Right now, our interface is rather simple and is similar to the proposed Voldemort REST API described here:

https://github.com/afeinberg/voldemort/wiki/REST-API-Proposal

Currently, we are using ETags rather than a custom HTTP header to convey revisions. Many of our applications that will use this API will be passing through intermediaries, many of which are already known to strip non-standard HTTP headers. For the most part, the use of ETags works fine to handle typical conditional write operations. If the revision the client has is not current, we can successfully block the write until the caller resolves the conflicts. This works just dandy right now.

What doesn't work so well is when the server has more than one version of the value due to a downed node or a replication issue. Our native API has the following parameters:

- The new state of the value
- A collection of the revision IDs to invalidate

One of our requirements is to only invalidate the revisions that the client wants to. If the system has versions 'a', 'b', and 'c' for resource '/mystore/12345', the client should be able to create version 'ab' from 'a' and 'b', leaving version 'c' alone.
Here's how our process flow works right now: a client issues a GET:

GET /mystore/12345 HTTP/1.1
Host: mydb.example.com
Accept: application/octet-stream, multipart/mixed;q=0.8, application/json;q=0.7, */*;q=0.5

If there are multiple revisions, they'd get something like so:

HTTP/1.1 300 Multiple Choices
Last-Modified: Thu, 21 Apr 2011 18:24:31 GMT
Content-Type: multipart/mixed; boundary="rev"
Date: Thu, 28 Apr 2011 17:43:52 GMT

--rev
Location: http://mydb.example.com/mystore/12345?rev=a
Content-Type: application/octet-stream
ETag: a
Last-Modified: Thu, 21 Apr 2011 18:24:31 GMT

...binary content...
--rev
Location: http://mydb.example.com/mystore/12345?rev=b
Content-Type: application/octet-stream
ETag: b
Last-Modified: Fri, 22 Apr 2011 18:24:31 GMT

...binary content...
--rev
Location: http://mydb.example.com/mystore/12345?rev=c
Content-Type: application/octet-stream
ETag: c
Last-Modified: Fri, 22 Apr 2011 18:32:31 GMT

...binary content...

We're using multipart for clients that prefer to get all revisions in one go, and this response is not necessarily etched in stone. Now, to resolve revisions 'a' and 'b', it has been suggested that we do something like so:

PUT /mystore/12345 HTTP/1.1
Host: mydb.example.com
Content-Type: application/octet-stream
If-Match: "a", "b"

...new state of the entity...

In this case, we should end up with versions 'ab' and 'c'. The problems I see with this approach are as follows:

- The HTTP specification seems to suggest that, in the conditional write case, the server should always maintain a single, current version of the representation. In this case, we've got multiple revisions that are current.
- Including the ETag in the multipart/mixed response requires out-of-band information. The client must know that they must do something special with the ETag value for each part.
- If we use multiple resources for each revision, it also requires out-of-band information, in that you need to know that you need the ETags from each revision resource you want to merge.
As an alternative approach, it may be better to express the specific versions you want to merge in the URI rather than via ETags. But now we've got two different ways of resolving conflicts, depending on the condition.

Ryan-
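To make the two merge-request styles concrete, here's a rough Python sketch of how a client could build either one. The 'resolve' query parameter in the URI variant is invented for illustration; the If-Match quoting follows HTTP's entity-tag syntax:

```python
# Two ways a client could name the revisions to merge. The 'resolve'
# parameter is hypothetical; If-Match quoting is per HTTP's entity-tag rules.

def if_match_header(etags):
    """Conditional-PUT style: If-Match carries the revisions being replaced.
    Entity tags go over the wire quoted, e.g.  If-Match: "a", "b"  """
    return {"If-Match": ", ".join('"%s"' % tag for tag in etags)}

def merge_uri(resource, etags):
    """URI style: the revisions to merge are named in the request URI."""
    return "%s?resolve=%s" % (resource, ",".join(etags))

print(if_match_header(["a", "b"]))   # {'If-Match': '"a", "b"'}
print(merge_uri("/mystore/12345", ["a", "b"]))
```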
Looks like your 300 response returns URIs for each revision. One option is to expose a "factory" resource that allows clients to submit the revision URIs to merge. There are many ways to implement the interaction, but one simplistic possibility is:

POST /merge/
...
http://mydb.example.com/mystore/12345?rev=a
http://mydb.example.com/mystore/12345?rev=b

201 Created
Location: http://mydb.example.com/mystore/12345?rev=ab

Alternatively/additionally you can:

- return 200 OK w/ the actual merged body for final approval before accepting/rejecting the merge (via another POST operation)
- return 301 w/ a Location URI to point the user to the completed merge
- return 202 Accepted w/ a Location URI that points to a 'progress' document and let the back end do the processing over time (if this is a long merge)

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2011 - Aug 18-20 http://restfest.org

On Fri, Aug 5, 2011 at 11:31, Ryan J. McDonough <ryan@...> wrote:
>
> I am working on a HTTP interface to a persietent store that is backed by a
> slightly tweaked version of Project Voldemort. The problem I am faced with
> is how to be able to expose a means to the caller a way to invalidate
> specific reivision IDs without having to rely on out of band information.
> Currently, there is an option on the table to use only ETags to express
> this, but I'm not convinced that this is the right approach. I am not expert
> on ETags, so I thought I'd bring the question up here.
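A sketch of what such a factory interaction might look like client-side. The text/uri-list body format and the /merge/ path are plausible assumptions, not anything the server above is known to use, and the status-code table mirrors the options just listed:

```python
# Client-side sketch of the hypothetical merge factory: compose the POST
# body (one revision URI per line, as text/uri-list) and map the possible
# response codes to the client's next step.

def merge_request(revision_uris):
    """Compose (headers, body) for POST /merge/."""
    body = "\r\n".join(revision_uris) + "\r\n"
    return {"Content-Type": "text/uri-list"}, body

NEXT_STEP = {
    201: "follow Location to the newly created merged revision",
    200: "inspect the proposed merged body, then POST approval or rejection",
    301: "follow Location to the completed merge",
    202: "poll the Location 'progress' resource until the merge finishes",
}

headers, body = merge_request([
    "http://mydb.example.com/mystore/12345?rev=a",
    "http://mydb.example.com/mystore/12345?rev=b",
])
print(headers)
print(body)
```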
Note, for example, that anyone tethering an iPhone with O2 in the UK will go through a translating intermediary that rewrites all HTML pages all the time, aggregating all content into one file (i.e. it inlines all the JavaScript and CSS right inside the page, trying to compensate for the fact that 3G networks can have great bandwidth but crap latency). Of course, those same translators also completely ruin xhtml+xml, as they don't bother translating to a correct version. But that's beside the point.

Seb

________________________________________
From: rest-discuss@yahoogroups.com [rest-discuss@yahoogroups.com] on behalf of Eric J. Bowman [eric@...]
Sent: 28 July 2011 20:29
To: Mike Kelly
Cc: rest-discuss@yahoogroups.com
Subject: Re: [rest-discuss] REST and HATEOAS in the context of native applications?

Mike Kelly wrote:
>
> The only significant intermediary processable content is html +
> ESI... which doesn't even have a media type identifier.
>

Ummm, images? Many ISP-targeted Web accelerator products will muck about with ubiquitous image types, i.e. they will compare the displayed size with the raw size (which they can do via their a-priori knowledge of the HTML and CSS media types), and shrink images accordingly. Stylesheets are also highly cacheable; text/css is cached by everything which caches text/html.

-Eric

------------------------------------

Yahoo! Groups Links
"Jakob Strauch" wrote:
>
> > > But to support better visibility, i could imagine a media type
> > > text/form+html, that would better communicate the intent...
> >
> > What could an intermediary do with this greater visibility?
>
> I don't know. What does an intermediary do with "atom+xml" instead of
> "xml"...?

Knows to call its Atom processor instead of a raw XML parser. But the issue here is really: why does the use of forms need to be communicated over the wire? In the case of XForms, application/xhtml+xml is used, and the document's hypertext turns on the XForms processor (if present). But that processor won't be called unless the client knows how to process XHTML as such, which is all the granularity that needs to go over the wire at the protocol layer. Nothing wrong with fine-grained conformance levels / versioning within the representation, so long as the proper coarse-grained processing model is called.

> By the way: what are intermediaries actually doing, except caching and
> load balancing?

Antivirus gateways, transcoding proxies for wireless networks, DNS precaching, ISP Web accelerators (image modification) -- all those things which make the Web anarchically scalable. All you can do is stick to standardized semantics, thereby supporting all those things which never occurred to you.

> I thought the media type is primarily used for client/server
> interaction, and secondarily for the intermediaries...

See REST Chapter 6.5.2: the point of the style is "standard semantics which can be interpreted by intermediaries almost as well as by the machines that originate [or consume*] services". This is primarily achieved by exposing the sender's intended processing model for the representation, as a media type.

-Eric

*another edit for RESTbis...
"Jakob Strauch" wrote:
>
> > Intermediaries ARE clients, just not YOUR client. Who knows what
> > they're doing. Your server can't tell the difference.
>
> I know, but what if I care less about intermediaries (think about
> securing representations - e.g. by https - it will eliminate
> intermediate caching efforts).

Then what is your interest in REST? You can design an outhouse as a gazebo, but functionally, it would lack the required privacy.

> I think there is much more (business) value in concentrating on
> things like hypermedia, integration and interoperability
> (serendipitous use).

I have to pay for bandwidth at the origin server, while maximizing user-perceived performance, so I see plenty of business value in a generic interface which allows intermediaries I don't know about (and which don't know about my system) to scale my system for me.

-Eric
Thank you all for your comments. I think my mistaken assumption was that a client can be dynamically taught by the server how to construct a message, by sending an HTML form.

While a real user would know from the web application's context where to place his name and address, how could a machine do that?
Due to previous posts [1], I was asking myself whether it would make sense to standardize the usage of the + sign in media type indications.

I think the benefit would be that a more specific media type like "application/odata+atom+xml" (which does not currently exist) could at least be interpreted as "application/atom+xml" (which is currently the media type of an OData resource [2]).

An intermediary could look at the media type "application/odata+atom+xml" and interpret it as a known Atom representation, even if it doesn't know the OData media type. If Atom is also unknown, maybe it is still interested in the fact that it's a valid XML document.

This seems to be just a convention at the moment, or standardized only for XML-based documents [3]. Furthermore, it would imho calm down the debate about generic vs. specific media types.

What do you think?

[1] http://tech.groups.yahoo.com/group/rest-discuss/message/17665
[2] http://www.odata.org/developers/protocols/atom-format
[3] http://www.ietf.org/rfc/rfc3023.txt
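A sketch of what that derivation rule would buy an intermediary, in Python. The '+' semantics assumed here are exactly the proposed convention (drop the leftmost facet of the subtype to get a more generic type), not anything standardized:

```python
# Sketch of the proposed '+' derivation: application/odata+atom+xml
# degrades to application/atom+xml, then to application/xml. This
# convention is the proposal under discussion, not a registered rule.

def fallbacks(media_type):
    """Return progressively more generic media types, most specific first."""
    top, _, sub = media_type.partition("/")
    parts = sub.split("+")
    return [top + "/" + "+".join(parts[i:]) for i in range(1, len(parts))]

print(fallbacks("application/odata+atom+xml"))
# An intermediary could walk this list until it finds a processor it knows.
```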
On Mon, Aug 8, 2011 at 11:56 PM, Jakob Strauch <jakob.strauch@...> wrote:
>
> Thank you all for your comments. I think my mistaken assumption was that a
> client can be dynamically taught by the server how to construct a message,
> by sending an HTML form.
>
> While a real user would know from the web application's context where to
> place his name and address, how could a machine do that?

It's a prevalent misconception that REST clients are somehow partially sentient and able to intuit data simply from a payload format. The harsh truth is that clients, and not just machine clients but even human ones, must be taught everything they need to know in order to properly structure requests to the server. Human clients are mostly assumed to have some base knowledge (such as knowing how to fill in their name and address), yet even so, that training is going on even as we speak. A simple example is the credit card "security code". Just about every decent shopping cart implementation has some kind of "what's this?" explanation of what the code is and where to find it.

Machines are no different. Whereas, as a designer, one can basically assume some baseline of user knowledge, especially for widespread domains like shopping, there's usually no such assumption for machine clients. We can, however, rely on institutional knowledge by relying on things like standard formats. If you use HTML, you don't have to explain the semantics of processing HTML as part of your interface. You only have to explain how the HTML is used in your application (i.e. where to get the data bits and what goes in form fields, for example). Just like for a human. In a specialized domain (say, health care), you can't simply plop an untrained user in front of a complicated application. By using HTML, they already get those semantics "for free", because they have a ubiquitous browser as an HTML processor.
But you still need to train them on what the various requests do, what the arguments are, and how those arguments are formatted and validated. There's a lot of domain knowledge that has to be instilled in that user before they can use the application. A machine client is no different. There's no magic here at this level.

Regards,

Will Hartung
(willh@...)
On 9 August 2011 at 08:56, Jakob Strauch wrote:

> Thank you all for your comments. I think my mistaken assumption was that a
> client can be dynamically taught by the server how to construct a message,
> by sending an HTML form.
>
> While a real user would know from the web application's context where to
> place his name and address, how could a machine do that?

Hi Jakob,

It is quite RESTful to actually use HTML forms to dynamically instruct non-human clients how to construct URLs, so I don't think you were misled in the first place. The client has to know (e.g., from documentation) things like where to put the name, the address, and so on, but it also gets a lot of information dynamically from the form about how to construct the URL: the scheme, host, and path can be communicated by the form's action attribute, and the query string parameter names can be given by the form's "input" elements (for example, the documentation might state: "The actual name of the query parameter for the address is given by the form's input element with an id attribute equal to 'address'").

HTML forms are not as powerful as some other hypermedia controls (e.g., form-derived URLs are always based on query strings), but they are often used because they are easy to interpret client-side, and thus might constitute an acceptable compromise between support for hypermedia and actual usability. If/when we get good client libraries for URI templates, XForms, or other wonders, this might change.

The usefulness of a text/form+html media type is another question, though.

Philippe
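As a rough illustration of that point, a machine client needs surprisingly little code to combine documented knowledge (which field is which, keyed by id here, as in Philippe's example) with what the form supplies dynamically (the action URL and the parameter names). A Python sketch using only the standard library; the form markup, the ids, and the parameter names are invented for the example:

```python
# Sketch: a non-human client reading an HTML form to learn how to build a
# URL. The documentation is assumed to say only "the address goes in the
# input whose id is 'address'"; everything else comes from the form itself.
from html.parser import HTMLParser
from urllib.parse import urlencode

FORM = """<form action="http://example.com/orders" method="get">
  <input name="q_name" id="name"/>
  <input name="q_addr" id="address"/>
</form>"""

class FormReader(HTMLParser):
    def __init__(self):
        super().__init__()
        self.action = None
        self.fields = {}  # documented id -> server-chosen parameter name
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form":
            self.action = a.get("action")
        elif tag == "input":
            self.fields[a.get("id")] = a.get("name")

def build_url(form_html, values):
    """values maps documented ids to data; the form supplies the rest."""
    reader = FormReader()
    reader.feed(form_html)
    query = {reader.fields[k]: v for k, v in values.items()}
    return reader.action + "?" + urlencode(query)

print(build_url(FORM, {"address": "10 Main St"}))
```

If the server renames q_addr tomorrow, this client keeps working, which is the hypermedia payoff being described.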
jason_h_erickson wrote:

>> I like it. The only concern is for backwards compatibility. Say I have a resource representation that is not following the convention, because I didn't really understand the convention or didn't follow it the way it will be interpreted. For example, take the media type discussed in the previous post that you mention, text/form+html. Hypothetically, if I am producing that as a media type but really what I am sending is a snippet that is not a valid HTML document, is there any reason to fear that anything would stop working if proxies started interpreting that media type to expect valid HTML? <<

This situation would be the same as not adhering to any other media type, I think. In fact, your snippet would not even be a valid representation of the "derived" media type.

I think the only constraint would be that, e.g., a VALID "application/atom+xml" representation is still a valid "application/xml" representation. As far as I know, this is already state of the art; it is just not standardized (except for XML media types [1]).

The backward compatibility also holds for intermediaries: if they know the derivation concept, the visibility increases. If they do not know the correlation, they would handle such a media type as an unknown media type, just as an intermediary today that knows XML but not Atom would ignore Atom documents.

[1] http://www.ietf.org/rfc/rfc3023.txt
Has anyone looked at the way the Apple identifiers work? I'd personally love to see standardisation around hierarchical identifiers instead of the +, by having application/xml/atom or even application/xml/html/rdf.

Any processor could simply go up the tree to know what to do, until it falls back on the rules for the root media type (here, application, which by definition is to be treated as octet-stream).

-----Original Message-----
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Jakob Strauch
Sent: 10 August 2011 14:34
To: rest-discuss@yahoogroups.com
Subject: [rest-discuss] Re: Media type derivation: standardize the + semantics?
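A sketch of the tree-walking fallback described above, assuming the hypothetical slash-separated identifiers (this syntax is the proposal in the thread, not anything registered):

```python
# Sketch of walking a hypothetical hierarchical media type identifier up
# to its root: application/xml/atom -> application/xml -> application.

def ancestry(media_type):
    """Return the identifier and each of its ancestors, most specific first."""
    parts = media_type.split("/")
    return ["/".join(parts[:i]) for i in range(len(parts), 0, -1)]

print(ancestry("application/xml/atom"))
# A processor would try each entry in turn until it finds one it supports,
# treating the bare root ("application") as octet-stream.
```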
Call for Papers: Programmatic Interfaces for Web Applications

See http://www.computer.org/portal/web/computingnow/iccfp4 for full details.

This special issue of IEEE Internet Computing (IC) seeks original articles on topics related to emerging technologies and best development practices that underpin any modern programmatic Web interface. Sample topics include:

* best practices, patterns, and anti-patterns of programmatic Web interface design;
* benchmarking and evaluation of programmatic Web interface scalability and performance in large-scale Web applications;
* comparisons and empirical evaluation of various styles, protocols, and descriptions for programmatic Web interfaces;
* reports and lessons learned from developing programmatic Web interfaces for various application domains and sectors (such as social, e-commerce, video, geospatial, and so on); and
* end-to-end engineering of programmatic Web interfaces and their integration with existing back-end applications, requiring the development of novel dependable and scalable technology frameworks.

Submissions are due Nov. 1, 2011, but please send an email briefly describing what you intend to submit to ic4-2012@... by Oct. 15. Accepted submissions are slated to be published in the July/August 2012 issue of IC.

Guest editors: Tomas Vitvar, Cesare Pautasso, Steve Vinoski

Any questions, let me know.

--steve
> I'd personally love to see standardisation around hierarchical identifiers
> instead of the +, by having application/xml/atom or even
> application/xml/html/rdf

This would not be backwards compatible...

> Any processor could simply go up the tree to know what to do until they
> fall back on rules for the root media type (here, application, which by
> definition is to be treated as octet-stream).
That is absolutely true, but then again, old implementations can co-exist with new implementations (a la Set-Cookie2), as the new behavior couldn't possibly work with the old behavior.

Say I had:

Content-Type: application/xhtml+xml
Content-Type2: application/xml/xhtml/rdf

With a bit of conneg, the most closely resembling Content-Type can be returned, alongside an extended media type.

An alternative is to use another character, a la application/rdf+xhtml+xml, or use the . notation, application/xml.xhtml.rdf, or even add a media type attribute with the extended information.

________________________________________
From: rest-discuss@yahoogroups.com [rest-discuss@yahoogroups.com] on behalf of Jakob Strauch [jakob.strauch@...]
Sent: 12 August 2011 07:29
To: rest-discuss@yahoogroups.com
Subject: [rest-discuss] Re: Media type derivation: standardize the + semantics?
This seems like a big ask of the web ecosystem, with very little practical upside. What's the point?

Cheers,
Mike

On Friday, August 12, 2011, Sebastien Lambla <seb@...> wrote:
> That is absolutely true, but then again old implementations can co-exist
> with new implementations (a la Set-Cookie2), as the new behavior couldn't
> possibly work with the old behavior.
>
> Say I had:
> Content-Type: application/xhtml+xml
> Content-Type2: application/xml/xhtml/rdf
>
> With a bit of conneg, the most closely resembling Content-Type can be
> returned, alongside an extended media type.
>
> An alternative is to use another character, a la application/rdf+xhtml+xml,
> or use the . notation, application/xml.xhtml.rdf, or even add a media type
> attribute with the extended information.
Say I have a resource representation that is not following the convention because I didn't really understand the convention or didn't follow it the way it will be interpreted. For example, take the media type discussed in the previous post that you mention, text/form+html. Hypothetically, if I am producing that as a media type but really what I am sending is a snippet that is not a valid HTML document, is there any reason to fear that anything would stop working if proxies started interpreting that media type to expect valid HTML? << >> >> This situation would be the same as not adhering to any other media type, i think. In fact your snippet would not even be a valid representation of the "derived" media type. >> >> I think, the only contraint would be, that e.g. a VALID "application/atom+xml" representation is still a valid "application/xml" representation. As far as i know, this is anyway state of the art. it is just not standardized (except for XML Media types [1]). >> >> The backward compatibilty is also given for intermediaries: if they do know the derivation concept, the visibility increases. if they do not know the correlation, they would handle such a media type as an unknown media type. like an intermediary today, who knows xml but not atom, would ignore atom documents. >> >> [1] http://www.ietf.org/rfc/rfc3023.txt >> >> --- In rest-discuss@yahoogroups.com, "Jakob Strauch" <jakob.strauch@> wrote: >> > >> > Due to previous posts [1] i was asking myself, if it would make sense to standarize the usage of the + sign in media type indications. >> > >> > I think the benfit would be, that a more specific media type like "application/odata+atom+xml" (which is currently not existing) could be at least interpret as "application/atom+xml" (which is currently the media type of an odata resource [2]). 
>> > >> > An intermediary could look at the media type "application/odata+atom+xml" and could interpret it as a known atom representation, even if he don´t know the odata media type. If atom is also unknown, maybe he is interessted, that it´s a valid xml document, too. >> > >> > This seems to be just an convention at the moment or standarized only for xml-based documents [3]. Furthermore, it would imho calm down the debate about generic vs specific media type. >> > >> > What do you think? >> > >> > [1] http://tech.groups.yahoo.com/group/rest-discuss/message/17665 >> > [2] http://www.odata.org/developers/protocols/atom-format >> > [3] http://www.ietf.org/rfc/rfc3023.txt >> > >> >> >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> > > > > > ------------------------------------ > > Yahoo! Groups Links > > > > > > ------------------------------------ > > Yahoo! Groups Links > > <*
I agree in the case of Sebastien's proposal. All I'm proposing is to standardize something that is already a (in some cases unwritten) convention.

I think it would improve extensibility and visibility on the web: I can introduce new media types based on existing ones. Any client, e.g. a browser, that is aware of the "hierarchical media type" concept could render an "application/contact+xhtml+xml" as usual (if it has no clue what contact+xhtml is) or could do something useful with it, for example, offering the user a popup: "Do you want to add this contact to your address book?". Conneg and the User-Agent header would also enable backward compatibility by sending just "application/xhtml+xml".

Am I missing something? I feel that this would have a great impact on web interaction...

--- In rest-discuss@yahoogroups.com, Mike Kelly <mike@...> wrote:
> This seems like a big ask from the web ecosystem with very little practical upside. What's the point?
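The fallback behaviour discussed in this thread can be sketched in a few lines. This is a hypothetical illustration of the proposed '+'-facet convention, not an implementation of any existing spec (only the '+xml' structured-syntax suffix is actually standardized):

```python
def fallback_chain(media_type):
    """Derive progressively more generic media types from a
    structured name like 'application/odata+atom+xml'.

    Each step drops the most specific '+'-separated facet, so an
    intermediary can fall back to the first type it understands.
    """
    top, _, subtype = media_type.partition("/")
    facets = subtype.split("+")
    # Keep the full name first, then strip one facet per step.
    return [top + "/" + "+".join(facets[i:]) for i in range(len(facets))]

# fallback_chain("application/odata+atom+xml")
# -> ['application/odata+atom+xml', 'application/atom+xml', 'application/xml']
```

An intermediary that knows Atom but not OData would stop at the second entry; one that only knows XML would stop at the last.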
Looking at the examples for Google's "Web Intents" initiative [1], it feels like hypermedia. Does anyone else feel the same?

Quote from the web page:

"Web Intents is a framework for client-side service discovery and inter-application communication. Services register their intention to be able to handle an action on the user's behalf. Applications request to start an Action of a certain verb (share, edit, view, pick etc.) and the system will find the appropriate Services for the user to use based on the user's preference."

[1] http://webintents.org/
Jakob:
Yes, WebIntents looks quite a bit like hypermedia controls. Of course,
hypermedia was around long before Fielding coined REST and there are
many ways to employ hypermedia that do not require adherence to
Fielding's arch model.
From my POV, WebIntents still carry noticeable "RPC" baggage
(startActivity and the callback support), but it's good to see a
familiar 'construct' on the client ("action" ~ rel, type, URI).
mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
#RESTFest 2011 - Aug 18-20
http://restfest.org
On Sat, Aug 13, 2011 at 10:34, Jakob Strauch <jakob.strauch@...> wrote:
> Looking at the examples for Google's "Web Intents" initiative [1], it feels like hypermedia. [...]
On Sat, Aug 13, 2011 at 3:20 PM, Jakob Strauch <jakob.strauch@...> wrote:
> I agree in the case of Sebastien's proposal. All I'm proposing is to standardize something that is already a (in some cases unwritten) convention.
>
> I think it would improve extensibility and visibility on the web: I can introduce new media types based on existing ones.

You can already base a media type on an existing one by documenting that in its specification.

> Any client, e.g. a browser, aware of the "hierarchical media type" concept could render an "application/contact+xhtml+xml" as usual (if it has no clue what contact+xhtml is) or could do something useful with it. For example, offering the user a popup: "Do you want to add this contact to your address book?"

You don't need a media type identifier to do this; you should do it by standardising a link relation, e.g. rel="contact".

Cheers,
Mike
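The link-relation approach suggested above can be illustrated with a deliberately naive client sketch. The "contact" relation, the handler table, and the regex-based scanning are all hypothetical simplifications (a real client would use a proper HTML parser):

```python
import re

# A client keyed off link relations rather than media-type names:
# it scans markup for links whose @rel it understands and simply
# ignores the rest -- unknown relations degrade gracefully.
HANDLERS = {
    "contact": lambda href: f"Add {href} to your address book?",
}

def dispatch(doc):
    """Collect the user-facing actions for every recognized @rel."""
    actions = []
    for rel, href in re.findall(r'<a rel="([^"]+)" href="([^"]+)">', doc):
        if rel in HANDLERS:
            actions.append(HANDLERS[rel](href))
    return actions

doc = '<a rel="contact" href="/people/7">Ann</a><a rel="next" href="/p2">2</a>'
# dispatch(doc) -> ['Add /people/7 to your address book?']
```

Note how the generic media type (here, plain HTML-ish markup) stays unchanged; only the relation vocabulary carries the application semantics.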
I for one welcome innovation around the "browser helper" conventions.
The awkwardness and user-hostility of those user flows causes the
invention of schemes (like "iTunes:" or "rss:") to do the work of MIME
types.
It's too bad that these don't use registered MIME types, but then
again a URI is almost the same as a vendor MIME type.
On Sat, Aug 13, 2011 at 8:01 AM, mike amundsen <mamund@...> wrote:
> Yes, WebIntents looks quite a bit like hypermedia controls. [...]
Mike,

The point I was trying to make, which I already made at unconf last year, is about genericity of processing of a media type. I'll re-explain what I mean by it here, not for you, as I know you know, and presume you disagree, with most things I say.

The minting of a new media type is done to mark the entity body as having semantics that cannot be processed from knowing how to process the base media type. For example, an atom feed cannot be processed as an atom feed without knowing the atom spec, and processing it as XML will not be useful for many users (relative to the size of the internet). On the other hand, if the additional data is added inline with little additional processing needed to understand its structure, then you already have to open the message to do something useful with it, and no additional knowledge is needed (from the client perspective) to make something useful out of it. Microdata, profiles and things like images used to encode information are all in that category.

So it's all black or white, leaving little choice for a better "fallback". The +xml convention is not implemented by anyone I'm aware of to fall back on XML processing, and even if it were, that's one level, not more. Having the possibility of specializing media types (application/xml/atom/vnd.contacts) would enable a *certain* level of flexibility to allow the user-agent to revert to something else, and provide more flexibility in customizing existing media types while still flagging the existence of that flexibility.

Note that the interesting discussion really can be about compound media types, where multiple media types coexist in the same document, or custom media types that contain standardized ones (for example a custom XML type containing fragments of XHTML 1.1). That conversation has been had, with no nice outcome.

<nitpicker corner> If you can't understand why the extended media type syntax is useful but you think +xml is nifty, there is no logic left for you to reply to this email. :) </nitpicker corner>

________________________________
From: rest-discuss@yahoogroups.com [rest-discuss@yahoogroups.com] on behalf of Mike Kelly [mike@...]
Sent: 13 August 2011 17:30
Subject: Re: [rest-discuss] Re: Media type derivation: standardize the + semantics?
[...]
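The path-style alternative above implies the same kind of fallback walk as the '+' convention, just read left to right. A minimal sketch, assuming the hypothetical slash notation (which no registry supports):

```python
def ancestors(media_type):
    """Walk a hypothetical path-style media type such as
    'application/xml/atom/vnd.contacts' up toward its root, most
    specific first, as the proposal suggests a processor could do."""
    parts = media_type.split("/")
    # Drop one trailing segment per step, stopping before the bare
    # top-level type ('application'), which is octet-stream territory.
    return ["/".join(parts[:n]) for n in range(len(parts), 1, -1)]
```

A user-agent would try each entry in order and process the document with the first one it recognizes.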
I'm currently implementing an API which requires some requests to reference other resources exposed by the API. The obvious and cleanest approach seems to be to accept absolute URIs to the other resources as exposed by the API. The only disadvantage I can see is that it's more verbose than accepting ids to the referenced resources, but this is partially offset by avoiding the need to return raw ids in representations in the first place. Does this seem a reasonable approach? I'm only wondering as it doesn't seem that common.

Apologies if this has been asked before; it likely has, but the combination of URI and input as search terms defeated the usefulness of this list's search functionality.

Thanks in advance,

Jim
[My apologies for missing this in the queue - Mark]

Check out *REST in Practice: Hypermedia and Systems Architecture* http://t.co/VsVdObY via @oreillymedia. Especially read chapters 3 through 5. Read it twice, thrice, ..... :)

I myself am trying to understand HATEOAS. It is not easy, but the more I read about it the more I begin to appreciate it. For whatever reason I think HATEOAS and the Semantic Web are complementary to each other. Alas, I'm also not very good at understanding the Semantic Web.

My 2 cents.

Iqbal

On Thu, Jul 28, 2011 at 1:08 PM, Jason Erickson <jason@...> wrote:
> I think you are probably asking Jan, but as far as I'm concerned, yes, you fundamentally get it. Well said.
>
> On Jul 28, 2011, at 10:23 AM, Daniel Roussel wrote:
>
>> Sometimes we can go on and develop a client solution using web apps, but sometimes there is no way out and we need to do a native application.
>>
>> I read some parts of Mr. Fielding's thesis again, and many of his comments on his blog, and I think what wasn't clear (still not totally, I fear) to me was what knowledge should be exposed "a priori" and what should be learned "a posteriori". My initial understanding was that "almost" nothing was to be known a priori, and that did not make any sense, because without some semantic knowledge of the received media, a client application can do nothing useful. What good is it to get a bunch of URIs if I have no idea what they are!
>>
>> Now, my understanding is that what MUST be known a priori are the media types which will be exchanged, along with the possible relationships. A particular client would obviously be coded to support this/those media types. Just as a browser understands a resource of type text/html, image/jpeg, etc., my app would understand resources of type application/rent-a-room+xml, for example.
>>
>> This is the semantic knowledge needed to perform useful work. This is how a client knows what relation types to look for to navigate. This is how it can know what to present on the screen and how. So in essence, I believe that my theoretical "Room Rental" application could be compared to a web browser which handles "Rent-a-Room" documents instead of HTML documents. And what this means is that this "Rent-a-Room" browser could navigate any server that is serving resources of the type "application/rent-a-room+xml", and on the flip side, a server could provide room rental services to anyone who understands this content type, without anyone knowing any implementation details.
>>
>> Am I far off, or am I starting to get it a bit more?
Jim: when referring to other resources, using URIs is the approach I prefer. I don't worry about the 'extra' bytes, as this is an issue that, if need be, can be handled using content-encoding (compression) [1], CURIEs [2], or even implementing your own "short-url" pattern for your server. I rarely go beyond compression in my implementations.

Also, I do *not* store the URIs in data storage unless absolutely required (i.e. clients might do this since it is the only identifier they have). As much as possible I continue to store internal ID values in the component data storage and I construct the URIs when crafting the representations for the connector. This prevents "leakage" of connector semantics (URIs) into the component storage and vice versa.

[1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11
[2] http://www.w3.org/TR/curie/

mca
http://amundsen.com/blog/

On Tue, Aug 23, 2011 at 11:18, Jim <jimpurbrick@...> wrote:
> I'm currently implementing an API which requires some requests to reference other resources exposed by the API. [...]
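The "keep IDs in the component, mint URIs at the connector" rule above might look like this in practice. The base URI, paths, field names, and record shape are all invented for illustration:

```python
BASE = "https://api.example.org"  # hypothetical service base URI

def to_representation(record):
    """Craft the outgoing representation at the connector boundary.

    Storage holds only internal IDs; URIs are constructed here and
    never written back to the component's data store, so connector
    semantics don't leak into component storage (or vice versa).
    """
    return {
        "name": record["name"],
        "self": f"{BASE}/contacts/{record['id']}",
        "employer": f"{BASE}/companies/{record['employer_id']}",
    }
```

If the URI layout ever changes, only this mapping function changes; the stored data is untouched.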
On Tue, Aug 23, 2011 at 9:18 AM, Jim <jimpurbrick@...> wrote:
> I'm currently implementing an API which requires some requests to reference other resources exposed by the API. The obvious and cleanest approach seems to be to accept absolute URIs to the other resources as exposed by the API.

I have used this approach quite often and it works very well.

> The only disadvantage I can see is that it's more verbose than accepting ids to the referenced resources, but this is partially offset by avoiding the need to return raw ids in representations in the first place. Does this seem a reasonable approach? I'm only wondering as it doesn't seem that common.

It is more verbose, but I have never encountered any practical issues with this approach.

Peter
barelyenough.org
Jim wrote:
> I'm currently implementing an API which requires some requests to reference other resources exposed by the API. The obvious and cleanest approach seems to be to accept absolute URIs to the other resources as exposed by the API. The only disadvantage I can see is that it's more verbose than accepting ids to the referenced resources, but this is partially offset by avoiding the need to return raw ids in representations in the first place. Does this seem a reasonable approach? I'm only wondering as it doesn't seem that common.

It should be more common, as this is exactly how one implements the hypertext constraint [1]. Clients should be given links, not construct them.

Keep in mind also that, as long as your media type makes it clear what is a URI and what isn't, you can use relative URIs instead of absolute ones, which quite often means only a few extra bytes per datum. But then, of course, you generally won't worry about "extra bytes" because you're not relying on small message size for performance; instead, you're relying on caching, which counter-intuitively does better with *larger* messages, not smaller ones. Using relative URIs is then simply aesthetics for humans.

Robert Brewer
fumanchu@...

[1] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
hello.
On 2011-08-23 14:13 , Robert Brewer wrote:
> Using relative URI's is then simply esthetics for humans.
relative URIs do not introduce any new semantics, so they in fact are
nothing but a shortcut notation. however, they also introduce a new way
for software to fail by not resolving relative URIs properly.
implementing URI resolution (in particular if it allows "stacked
resolution" as in xml:base) is something that is not quite as simple as
it may seem at first sight, and in some cases programmers simply never
bother to do it at all, because all the test cases they ever used were
absolute URIs. i am not saying that relative URIs are not good, on the
contrary, i think they are very good. but documentation should make it
very clear that the identifiers are URIs, and that URI resolution must
be implemented. having relative URIs in test cases usually helps a lot.
cheers,
dret.
--
erik wilde | mailto:dret@... - tel:+1-510-6432253 |
| UC Berkeley - School of Information (ISchool) |
| http://dret.net/netdret http://twitter.com/dret |
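The warning above is easy to exercise against a real resolver: Python's urllib.parse.urljoin implements RFC 3986 reference resolution, so relative references in test cases hit exactly the code paths described. The base URI here is hypothetical:

```python
from urllib.parse import urljoin

# Resolution per RFC 3986, relative to an (invented) resource URI.
base = "https://api.example.org/contacts/42"

# Dot segments are resolved against the base path's directory.
assert urljoin(base, "../companies/3") == "https://api.example.org/companies/3"
# An absolute path replaces the base path entirely.
assert urljoin(base, "/search") == "https://api.example.org/search"
# An absolute URI ignores the base altogether.
assert urljoin(base, "https://other.example/x") == "https://other.example/x"
```

A client that only ever tested with absolute URIs would never discover that it skipped this step.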
I guess I don't understand the design of your resource space if you're needing to construct the relationships on the client side using resource identities (URIs). I might be off here, but it seems like a smell where the object relationship semantics that hide behind your resource space are leaking on the client side. This knowledge is and should be hypermedia-driven by the server. I might consider modeling the resource interaction differently.

Assuming that there is a need to construct these relationships on the client side using resource identities, I wonder how one might construct these resource identities and their representations (relative or absolute), given that they may be represented differently for different content types?

Regards,
Dilip Krishnan
dilip.krishnan@...

On Aug 23, 2011, at 4:13 PM, Robert Brewer wrote:
> It should be more common, as this is exactly how one implements the hypertext constraint [1]. Clients should be given links, not construct them. [...]
I've been using Jersey (JAX-RS), which really leverages JAXB to serialize your objects into XML and JSON. When I started this project last year, that seemed like a really good idea to me because, like many, my idea of REST was really just JSON and XML over HTTP.

However, I haven't really used it for HATEOAS. I think later versions of Jersey support adding URIs, so maybe it's still a sufficient framework and I just haven't been using it that way. What are others using, or are people just rolling their own resources?
On Tue, Aug 23, 2011 at 7:57 PM, jason_h_erickson <jason@...> wrote:
> I've been using Jersey (JAX-RS), which really leverages JAXB to serialize your objects into XML and JSON. [...] What are others using, or are people just rolling their own resources?

I use Apache CXF to build my REST services, but building HATEOAS principles into your services doesn't really require any additional technology support. You can define ad hoc elements that contain pointers to other information; you don't need to define a "link" element in its own namespace (as described in REST in Practice, for instance). Generating a URL is really a trivial thing. Designing the flow of your application is the bigger task, and neither Jersey nor CXF can help you much with that.
On Tue, Aug 23, 2011 at 8:25 PM, Dilip Krishnan <dilip.krishnan@...> wrote:
> I guess I don't understand the design of your resource space if you're needing to construct the relationships on the client side using resource identities (URIs). [...] This knowledge is and should be hypermedia-driven by the server. I might consider modeling the resource interaction differently.

I don't think it is necessarily that the object semantics are leaking. It is extremely common for resources to be related to other resources. Just think about the `a` tag in HTML. These associations are not visible in representations as a result of a leaky abstraction, nor is the ability to modify relationships. Allowing clients to accomplish resource relationship modification using the fundamental abstractions of REST-style architectures is appropriate.

Certainly there are many situations where using URIs is not necessary. However, if you ever try to integrate two or more systems, you quickly realize that URIs are the only way to identify resources that live in other systems. Always using URIs to identify other resources, even if they are currently housed in the same system, has the advantage of requiring less rework and less disruption to (not fully hypermedia-based) clients if some resources are ever relocated to a different system.

> Assuming that there is a need to construct these relationships on the client side using resource identities, I wonder how one might construct these resource identities and their representations (relative or absolute), given that they may be represented differently for different content types?

One should *not* be constructing resource identities on the client side. Clients should use URIs that are well known, or that have been seen in the representations of other resources. If the client needs to associate resource A with resource B, following the hypermedia instructions in some representation is a very natural way to accomplish that. For example, the client might "see" the appropriate relationship `form` in B's representation, populate A's URI in the appropriate `input` in that form, and then submit the form.

Peter
barelyenough.org
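The form-driven association described above can be sketched end to end. The form markup, field name, and URIs are hypothetical, and the actual HTTP POST is left out:

```python
from html.parser import HTMLParser
from urllib.parse import urlencode

# Hypothetical relationship form found in resource B's representation.
FORM = '''<form action="/contacts/42/employer" method="post">
            <input name="employer" />
          </form>'''

class FormFinder(HTMLParser):
    """Pull the submission target and field name out of the form."""
    def __init__(self):
        super().__init__()
        self.action = None
        self.field = None
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form":
            self.action = a.get("action")
        elif tag == "input":
            self.field = a.get("name")

finder = FormFinder()
finder.feed(FORM)

# The client fills in resource A's URI -- the only identifier it has
# seen -- and would then POST this body to finder.action.
body = urlencode({finder.field: "https://api.example.org/companies/3"})
```

Note that the client never constructed either URI: the form's action came from B's representation, and A's URI was seen in an earlier response.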
> I don't think it is necessarily that the object semantics are leaking. > It is extremely common to have resources be related to other > resources. Just think about the `a` tag in HTML. These associations > are not visible in representations as a result of a leaky abstraction, > nor is the ability to modify relationships. Allowing clients to > accomplish resource relationship modification using the fundamental > abstractions of REST style architectures is appropriate. My comment is really around how one can accomplish that; resource relationship modification. There are many ways that clients can accomplish resource relationship modification; however passing uri's around from the client may not be the best approach in my opinion. I can get how the service establishes resource relationships via hypermedia, but I don't understand how the client and do the same. For one the representation of the Uri itself is in question because the client assumes the media type, as conneg is not really an option. Secondly the server needs to infer the target of the relationship represented by the uri, which seems odd to me. Could you give a concrete example of how that could be done? > Certainly there a many situations where using URIs is not necessary. > However, if you ever try to integrate two or more systems you quickly > realize that URIs are the only way to identify resources that live in > other systems. That is true… but how does the service make sense of the uri? it becomes even more difficult in the case of two or more systems. > One should *not* be constructing resource identities on the client > side. Clients should use URIs that are well known, or that have been > seen in the representations of other resources. +1 but it makes the system brittle if the uri space changes. > If the client needs > to associate resource A with resource B, following the hypermedia > instructions in some representation is a very natural way to > accomplish that. 
For example, the client might "see" the appropriate > relationship `form` in B's representation, populate A's URI in the > appropriate `input` in that form, and then submit the form. Is it good form to do stuff like that? It doesn't feel right to me. Regards Dilip Krishnan
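The form-following approach quoted above can be sketched in code. This is a minimal illustration, not anyone's actual API: the markup, the `related-uri` field name, the `/b/relationships` action, and the example URIs are all invented. The point is only that the client takes the form's action and field names from B's representation instead of constructing the association URI itself.

```python
# Sketch: a client follows a form found in resource B's representation to
# associate resource A with B. All markup and names below are hypothetical.
from html.parser import HTMLParser
from urllib.parse import urlencode


class FormFinder(HTMLParser):
    """Collects the action and input names of the first <form> encountered."""

    def __init__(self):
        super().__init__()
        self.action = None
        self.fields = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form" and self.action is None:
            self.action = attrs.get("action")
        elif tag == "input" and self.action is not None:
            self.fields.append(attrs.get("name"))


# Representation of resource B, as the client might receive it.
representation_of_b = """
<form action="/b/relationships" method="post">
  <input type="text" name="related-uri"/>
</form>
"""

parser = FormFinder()
parser.feed(representation_of_b)

# The client fills A's URI into the field the server named, then would
# POST the encoded body to the form's action. Nothing about the URI
# structure was baked into the client.
body = urlencode({parser.fields[0]: "http://example.org/a"})
print(parser.action)  # /b/relationships
print(body)           # related-uri=http%3A%2F%2Fexample.org%2Fa
```

Note the server, not the client, decides both where the association is submitted and under what field name, which is the decoupling being argued for in the thread.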
In different resources on the web, I found different opinions about URI templating. Some say they couple clients and servers (e.g. Erik Wilde). Some propose their usage (e.g. Subbu). I don't see any coupling: it is not said that a URI Template cannot change over time. As long as the server communicates the templates, like URIs, in resource representations, where is the problem? Modern web browsers support templated URIs, e.g. for search engines. By typing a keyword, e.g. wiki, and a search expression, my browser is referred to a specific search page of Wikipedia. Am I missing something here? Any opinions about this issue?
On Aug 28, 2011, at 11:00 AM, Jakob Strauch wrote: > In different resources on the web, I found different opinions about URI templating. Some say they couple clients and servers (e.g. Erik Wilde). Some propose their usage (e.g. Subbu). > > I don't see any coupling: it is not said that a URI Template cannot change over time. As long as the server communicates the templates, like URIs, in resource representations, where is the problem? > > Modern web browsers support templated URIs, e.g. for search engines. By typing a keyword, e.g. wiki, and a search expression, my browser is referred to a specific search page of Wikipedia. > > Am I missing something here? Any opinions about this issue? Hi Jakob, URI templates are fine, as long as the specification of the parameters is 'global' (not just defined by the server for the sake of its own API). My preference is to define the parameters as part of the specification of the link relation that is used with the template. Such a link relation specification must include two things: 1) It must specify that the URI references used are to be interpreted as templates (because a template URI is a valid URI, and you cannot tell the difference if you do not know up front that you are given a template URI reference). 2) What parameters the client might encounter and what they mean. An example of this is the OpenSearch specification[1]. Jan [1] http://www.opensearch.org/Specifications/OpenSearch/1.1#OpenSearch_1.1_parameters
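Jan's two requirements can be made concrete with a small sketch. The link relation ("search" here, in the OpenSearch style) is what tells the client that the href is a template and what the parameter means; the expansion logic below is a deliberately minimal hand-rolled `{name}` substitution, not a full RFC 6570 implementation, and the URL is invented.

```python
# Sketch: expanding a simple URI template of the kind OpenSearch uses
# ({searchTerms}). Minimal, illustrative only -- not full RFC 6570.
import re
from urllib.parse import quote


def expand(template, variables):
    """Replace each {name} in the template with the percent-encoded value."""
    def substitute(match):
        return quote(variables.get(match.group(1), ""), safe="")
    return re.sub(r"\{([^}]+)\}", substitute, template)


# The link relation tells the client (a) the href is a template, and
# (b) what "searchTerms" means -- both documented in the rel's spec,
# not invented per-server.
link = {
    "rel": "search",
    "href": "http://example.org/search?q={searchTerms}",
}
url = expand(link["href"], {"searchTerms": "rest discuss"})
print(url)  # http://example.org/search?q=rest%20discuss
```

Without the "search" relation the client could not even tell that `{searchTerms}` is a variable rather than literal path characters, which is exactly Jan's point 1.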
Thanks! I think you pointed out some clear arguments for something I had only a gut feeling about. If I understand this correctly, I could link to an OpenSearch-compatible resource with a URI Template based on that specification AND the rel="search" attribute specified by [1]? Furthermore, I'm playing around with the HAL specification [2]. I'm asking myself if I can provide a URI template instead of a URI. Based on your comment that URI templates are valid URIs, and that a link's relation semantics are defined by a specification, I think I can. This would allow linking (highly dynamic) resource collections with a single URI Template without tight coupling, as long as the parameters are part of the specification. [1] http://www.iana.org/assignments/link-relations/link-relations.xml [2] http://stateless.co/hal_specification.html --- In rest-discuss@yahoogroups.com, Jan Algermissen <algermissen1971@...> wrote: > URI templates are fine, as long as the specification of the parameters is 'global' (not just defined by the server for the sake of its own API). > > My preference is to define the parameters as part of the specification of the link relation that is used with the template. 
Neither coupling nor change is a problem as long as you're able to control and manage it. Subbu On Aug 28, 2011, at 2:00 AM, Jakob Strauch wrote: > I dont see any coupling - it is not said, that an URI Template connot changed either over time. As long as the server communicates the templates - like URIs - in resource representations, where is the problem?
On Aug 28, 2011, at 2:00 AM, Jakob Strauch wrote: > In different resources on the web, i found different opinions about URI Templating. Some say, they are coupling clients and server (e.g. Erik Wilde). Some propose their usage (e.g. subbu). That actually has nothing to do with templates themselves. Coupling depends on when the client receives the template and how it knows the variable values to use in the template. You can send both from the server on the fly and it has no more coupling than any hypertext form, or you can bake them into the client code and it is fully coupled. So, it has the same properties as URIs in general. ....Roy
Let's imagine I have a collection of information that can contain arbitrary numbers of types of data. I want to get only N certain types out of this collection (and I want to get them versioned specifically, e.g. type1v1, type2v2, type3v1). I can think of a few ways of modelling this; the main one would be to just go through the list. My initial idea is to use Accept to define what they are, but this feels a bit odd. Anyone have any ideas? Greg -- Le doute n'est pas une condition agréable, mais la certitude est absurde.
On Aug 31, 2011, at 7:18 PM, Greg Young wrote: > Let's imagine I have a collection of information that can contain > arbitrary numbers of types of data. > > I want to only get N certain types out of this collection (and I want > to get them versioned specifically eg type1v1 type2v2 type3v1) > > I can think of a few ways of modelling this the main would be to just > go to the list. My initial idea is to use accept to define what they > are but this feels a bit odd. > > Anyone have any ideas? Sounds overly complicated. *Why* do you need to do that? Jan > > Greg > > -- > Le doute n'est pas une condition agréable, mais la certitude est absurde. >
Think "restful" access to an event stream where the client can specify its own supported versioning of events. On Wed, Aug 31, 2011 at 2:26 PM, Jan Algermissen <algermissen1971@...> wrote: > Sounds overly complicated. *Why* do you need to do that? > > Jan -- Le doute n'est pas une condition agréable, mais la certitude est absurde.
could you not make this work with query/template URIs? On Wed, Aug 31, 2011 at 8:16 PM, Greg Young <gregoryyoung1@...> wrote: > Think "restful" access to an event stream where the client can specify > its own supported versioning of events.
Could you put up an example of what you have in mind? On Wed, Aug 31, 2011 at 6:05 PM, Mike Kelly <mike@...> wrote: > could you not make this work with query/template URIs? -- Le doute n'est pas une condition agréable, mais la certitude est absurde.
On Wed, Aug 31, 2011 at 3:06 PM, Greg Young <gregoryyoung1@...> wrote: > ** > > > Could you put up an example of what you have in mind? > /type/1?rev=1 Regards, Will Hartung (willh@...)
i.e. something like this: /event/1234?type=1&v=1 On Wed, Aug 31, 2011 at 11:06 PM, Greg Young <gregoryyoung1@...> wrote: > Could you put up an example of what you have in mind?
I don't think you are quite understanding the problem. OK, I have 50000 events. The client wants to get all events that are of types [..] in a version [v], where v is dependent on the type of the event. Doing a GET /event/1234?version=3 works fine for a single event, but doesn't help with the other part of the interaction. Let's make it concrete. Let's imagine that all 50000 are in order (so 0 is type0..type999, then they repeat). I want to query for all of types t42, t65, t777 and have them come back in order. I could do something like events?t=t42&t=t65&t=t777, but this doesn't work well when I am interested in 100 different types. Does that make sense? On Wed, Aug 31, 2011 at 6:23 PM, Mike Kelly <mike@...> wrote: > i.e. something like this > /event/1234?type=1&v=1
-- Le doute n'est pas une condition agréable, mais la certitude est absurde.
If the primary question has to do w/ how to craft a query w/ a large number of inputs, you can use a POST body to hold the query details and submit that to the server for processing. An advantage of this pattern is that the server can allow clients to create "query" resources for later listing, selection, and replay. Subbu Allamaraju's "RESTful Web Services Cookbook" has a full chapter on queries: http://www.restful-webservices-cookbook.org/queries/ mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2011 - Aug 18-20 http://restfest.org On Wed, Aug 31, 2011 at 18:30, Greg Young <gregoryyoung1@...> wrote: > I want to query for all of type t42,t65,t777 to have them come back in order. I could do something like events?t=t42&t=t65&t=t777 but this doesnt work well when I am interested in 100 different types. Does that make sense?
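The POST-a-query pattern mca describes can be sketched as follows. Everything here is hypothetical: the `/queries` endpoint, the JSON shape of the body, and the 201/Location behavior are assumptions about how such a server *might* work, in the spirit of the cookbook chapter cited, not a description of any actual API.

```python
# Sketch of the "POST a query resource" pattern: the selection criteria go
# into a request body instead of an unwieldy query string. Endpoint, media
# type, and body shape are all hypothetical.
import json

# One entry per event type the client cares about, each pinned to the
# version of that type the client understands (Greg's t42/t65/t777 case,
# scalable to hundreds of types without blowing up the URI).
wanted = [{"type": f"t{i}", "version": 1} for i in (42, 65, 777)]

request = {
    "method": "POST",
    "uri": "/queries",  # assumed query-collection resource
    "headers": {"Content-Type": "application/json"},
    "body": json.dumps({"types": wanted}),
}

# A server following this pattern might respond:
#   201 Created
#   Location: /queries/{some-id}
# after which a GET on that Location replays the filtered, ordered
# event list, and the stored query can be listed, reused, or deleted.
print(request["body"])
```

The trade-off versus plain GET queries is that the results of a POSTed query are not cacheable by intermediaries in the same way, which is why the server handing back a GET-able query URI matters.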
On Sep 1, 2011, at 12:30 AM, Greg Young wrote: > I want to query for all of type t42,t65,t777 to have them come back in order. I could do something like events?t=t42&t=t65&t=t777 but this doesnt work well when I am interested in 100 different types. Does that make sense? Dunno, but it sounds to me as if looking at your requirements once more might yield significant simplification. This versioning thing just sounds overly complicated. What is the use case behind all this? Jan
Is it reasonable for a resource to return representations that vary based on the authenticated user? E.g.: http://example.com/my-tasks vs. http://example.com/tasks?user=joe We've considered offering both and having my-tasks redirect to the appropriate user= resource. This helps caching, but at the cost of an extra network round trip. It also creates the potential for a direct object reference vulnerability, as I can't trust URIs that come from joe to always say user=joe, as joe may get frisky and hand-edit it to be user=jane. So I have to check the auth'd user against the URI parameter anyway. Though I might actually want managers to be able to use the user= form on employees that report up to them. The my-tasks URI has a huge advantage in that it's static, so I can give it to users from static content, or more importantly from external or non-authenticated content, where the server that delivers the URI doesn't know I call him joe. The redirect strategy also works for this. Aside from breaking caching, is there anything wrong with just returning joe's tasks when he hits my-tasks? This contemplates a resource that responds to GET with representations that depend on the session token from a cookie or a header. My thoughts on the redirect are that it's STILL a user-dependent resource, and that unless I make the user type "joe" in a form, whatever page gives him the user=joe link is probably also user-dependent. I suppose I could even save caching by using a rewrite rule instead of a redirect.
I've approached this in the following ways: 1) use a dedicated resource : /my-tasks/joe, /joe/my-tasks, etc. Advantage here is the exact resource is easy to find, cache, and bookmark. Downside is the server must compose this, and caches need to keep track of minor changes (add, edit, remove tasks) in the resource (usually via ETags). 2) use two resources : /my-tasks/ and include code on demand that uses transient state (i.e. cookie, authentication header, etc.) to identify and load personalization data from another URI. Advantage here is there is a single resource that can be shared by all users of /my-tasks/ and a second (equally cache-able) resource that is specific to the logged-in user. Downside is that this is not easily bookmarkable, since state is involved in retrieving the second resource. You can mitigate this by using an extra URI as a bookmark (/bookmark?/my-tasks&user=joe, etc.) or by storing the transient state in the URI using a hashtag (/my-tasks/#joe, etc.). Dev will need to make sure the code-on-demand also does the proper caching work (attend to hashtags, track 304 responses, etc.) 3) have servers use a generic URI (/my-tasks/) plus transient data (cookie, authorization header, etc.) to compose the resource server-side, and mark the resource w/ a "Vary" header that includes the transient state container to make sure caches keep the variants sorted. Advantage is it's easy to bookmark and does not require code-on-demand. Downside is a loss of visibility in general (the URI does not identify the resource anymore) and possible unexpected results when sharing the link (it will be harder for userA to share w/ userB, as the server will step in and munge the resource representation). I usually use #2 unless scripting is dis-allowed/unavailable, then I use #1. I rarely use #3. 
mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2011 - Aug 18-20 http://restfest.org On Sun, Sep 4, 2011 at 15:41, bryan_w_taylor <bryan_w_taylor@...> wrote: > Is it reasonable for a resource to return representations that vary based > on the authenticated user?
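Option #3 from the list above can be sketched with a toy handler. The task data, user lookup, and credential handling here are invented for the demo (real Basic credentials are base64-encoded "user:pass", not a bare name); the point is only the `Vary: Authorization` header, which tells shared caches to keep one variant per credential instead of serving joe's tasks to jane.

```python
# Sketch of option #3: one generic URI (/my-tasks) whose representation
# depends on the authenticated user, marked with Vary so caches keep the
# per-user variants apart. Data and auth handling are toy placeholders.
TASKS = {"joe": ["file TPS report"], "jane": ["review budget"]}


def get_my_tasks(request_headers):
    """Compose /my-tasks from the Authorization header, declaring the variance."""
    # Toy credential parsing -- real Basic auth carries base64("user:pass").
    user = request_headers.get("Authorization", "").removeprefix("Basic ")
    body = "\n".join(TASKS.get(user, []))
    return {
        "status": 200,
        "headers": {
            "Content-Type": "text/plain",
            # Tell caches the representation varies by credential.
            "Vary": "Authorization",
        },
        "body": body,
    }


response = get_my_tasks({"Authorization": "Basic joe"})
print(response["headers"]["Vary"])  # Authorization
print(response["body"])             # file TPS report
```

This is also where mca's visibility objection shows up concretely: nothing in the URI distinguishes joe's response from jane's; only the header-keyed cache variant does.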
Options 2 and 3 seem fairly similar, differing only in where the logic to get user-specific content happens… Old habits die hard, but my service-oriented brain seems to think of 3 as the most intuitive option that I'd consider. Also, it separates the non-functional aspects of the resource… in this case authentication and authorization, from the functional aspect, which in this example is to "GET" my-tasks. I see a couple of issues with having the user as part of the resource URI space: - if the authentication/authorization resources are different from the my-tasks resources, perhaps the identity provider is Facebook or Google. How can one model a user as integral to the resource space when we don't know how they are represented in a different system? - to be able to protect the resources, regardless of what the user in the URI is, you'd have to have the standard authentication mechanisms (cookies, headers, etc.) anyway. - we'd be tunneling authentication semantics in the URI. In a recent project that I worked on we did go with the third option, but I'd be curious why you prefer 1 and 2 over 3. Regards, Dilip Krishnan dilip.krishnan@... On Sep 4, 2011, at 4:33 PM, mike amundsen wrote: > I've approached this in the following ways: > > 1) use a dedicated resource : /my-tasks/joe, /joe/my-tasks, etc. > > 2) use two resources : /my-tasks/ and include code on demand that uses transient state (i.e. cookie, authentication header, etc.) to identify and load personalization data from another URI. 
As I mentioned in the post, in option #3, the URI no longer accurately identifies the resource representation. The same URI now returns multiple resource representations. This is the only example I supplied where this is true. "- to be able to protect the resources, regardless of what the user in the uri is, you'd have to have the standard authentication mechanisms cookies, headers etc." While this is correct, there is no requirement that this standard authentication data MUST be used to override the URI of a request to alter the response representation. I prefer to keep authentication details orthogonal to the resource identifier; my use of the characters "joe" does not at all assume that a user's log-in data includes the characters "joe" (i.e. "joe" need not be the username for #1 to work properly). "- we'd be tunneling authentication semantics in the uri." Possibly you assume that the appearance of the three characters "j-o-e" means that the URI _contains_ identity information. This need not be the case at all. The URI could contain any set of characters and I would prefer it as long as that set of characters resulted in a _unique_ URI (not necessarily a _descriptive_ URI). Does that last point make sense? IOW, I prefer #1 since it results in a URI that is unique to each resource representation. I prefer #2 since it results in two resources, each unique to each representation. I do not prefer #3 because it results in a single URI that returns multiple representations; the URI is no longer unique per representation returned. #3 is the only example where the URI is re-used to return multiple representations; it is needless tunneling. While it is true that some of the control data (headers) marks the record as unique, this is less preferable (to me) as there are other options that result in unique URIs for each representation. 
mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2011 - Aug 18-20 http://restfest.org On Sun, Sep 4, 2011 at 22:55, Dilip Krishnan <dilip.krishnan@...> wrote: > Option 2 and 3 seem fairly similar, differing only in where the logic to get > user-specific content happens… Old habits die hard, but my service oriented > brain seems to think of 3 as the most intuitive option that I'd consider. > Also it separates the non-functional aspects of the resource… in this case > authentication and authorization from the functional aspect, which in this > example is to "GET" my-tasks. > > I see a couple of issues with having the user as part of the resource uri > space: > - if the authentication/authorization resources are different from the > my-tasks resources, perhaps the identity provider is facebook or google. How > can one model a user integral to the resource space when we don't know how > they can be represented in a different system? > - to be able to protect the resources, regardless of what the user in the > uri is, you'd have to have the standard authentication mechanisms: cookies, > headers etc. > - we'd be tunneling authentication semantics in the uri. > > In a recent project that I worked on we did go with the third option, but > I'd be curious why you prefer 1 and 2 over 3. > > Regards, > Dilip Krishnan > dilip.krishnan@... > > > > On Sep 4, 2011, at 4:33 PM, mike amundsen wrote: > [...]
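mca's uniqueness argument can be made concrete with a toy cache-key function of the kind an HTTP cache effectively builds for a response carrying Vary; all names and tokens below are illustrative:

```python
# Toy illustration: under option #3 one URI maps to several cached
# representations (keyed by URI plus the header named in Vary), while
# under option #1 each representation has its own URI.

def cache_key(uri, req_headers, vary):
    """Build a cache key the way an HTTP cache does for a Vary'd response."""
    varied = tuple(req_headers.get(h, "") for h in vary)
    return (uri, varied)

# Option #1: unique URI per representation; Vary plays no role.
k_joe = cache_key("/my-tasks/joe", {}, [])
k_jane = cache_key("/my-tasks/jane", {}, [])
assert k_joe != k_jane

# Option #3: one URI, distinguished only by the Authorization header.
k3_joe = cache_key("/my-tasks", {"Authorization": "token-abc"}, ["Authorization"])
k3_jane = cache_key("/my-tasks", {"Authorization": "token-xyz"}, ["Authorization"])
assert k3_joe != k3_jane        # caches do keep the variants sorted...
assert k3_joe[0] == k3_jane[0]  # ...but the URI alone no longer disambiguates
```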
Great breakdown. My situation doesn't lend itself to code on demand, unfortunately. I like idea #3 of /my-tasks just being a local composition with the right Vary header. I don't have a problem if the URI doesn't identify the representation. A URI should identify the resource, not the representation. I don't see this as any different than /blogs/latest. The representation for /my-tasks could do a link with rel=bookmark to /tasks/joe if you need a permalink. --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote: > [...]
I had another insight on this and thought I'd share. BTW, Erik I loved your post, sorry to take so long to say so. Your arguments about the supposed cache inefficiency of mark-based pagination really made me happy. Everybody reads full pages on the same natural boundaries all the way up until they are where the action is, and then they are all pounding the same handful of pages as everybody else. It probably helps performance for them to all poll more frequently while varnish yawns at them. Love it. I get some developers who complain that they want diffs so they can see exactly which fields changed. Why they think it's the server's job to do that with shared resources is something I've never figured out. Anyway, I realized each atom entry can, with one efficient query, look up the previous change to the same entity and render a link to it with rel="predecessor-version". From any entry, you can scroll back through the previous versions of the underlying entity.
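Rendered as an Atom entry, bryan's idea might look like the sketch below (all URIs and ids invented); note that "predecessor-version" is a registered link relation (RFC 5829), so no custom @rel is needed:

```xml
<!-- Hypothetical Atom entry: each change links back to the prior
     change to the same underlying entity. -->
<entry xmlns="http://www.w3.org/2005/Atom">
  <id>urn:example:change:1042</id>
  <title>Task 77 updated</title>
  <updated>2011-09-05T10:14:00Z</updated>
  <link rel="self" href="http://example.com/changes/1042"/>
  <link rel="predecessor-version" href="http://example.com/changes/0988"/>
  <content type="text">status: open -&gt; closed</content>
</entry>
```

A client can walk rel="predecessor-version" links entry by entry to reconstruct the entity's history, with each hop being an ordinary cacheable GET.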
On Mon, Sep 5, 2011 at 4:30 AM, mike amundsen <mamund@...> wrote: > [...]
> > If it's a private resource it's probably not intermediary-cacheable anyway, since there will likely be auth involved on the server side. Visibility is going to suffer whatever happens; for me it's safest to stick to HTTP idioms as much as possible because it keeps the complexity 'on the network'. Here are two other options; I'm not sure where they fit into the above from Mike (apologies if they overlap): - Return a user-specific response from /my-tasks; mark it as privately cacheable and include Content-Location linking to a specific resource (could make use of the html5 history API to reflect the more specific URI in the browser location bar) - Return a redirect from /my-tasks; mark it as privately cacheable with a longish max-age (this will prevent the subsequent 'round tripping' associated with the redirect method) Cheers, Mike
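Mike's first option might look like this on the wire; the token, paths, and max-age are illustrative only:

```
GET /my-tasks HTTP/1.1
Host: example.com
Authorization: Bearer token-abc

HTTP/1.1 200 OK
Content-Type: text/html
Cache-Control: private, max-age=60
Vary: Authorization
Content-Location: /tasks/joe

... joe's task list ...
```

Content-Location gives the client a specific, shareable URI for the representation it actually received, while Cache-Control: private keeps shared intermediaries from reusing it across users.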
On Mon, Sep 5, 2011 at 10:14 AM, bryan_w_taylor <bryan_w_taylor@...>wrote: > ** > > I had another insight on this and thought I'd share. BTW, Erik I loved your > post [...] > Thanks :-) That just made my day! > I get some developers who complain that they want diffs so they can see > exactly which fields changed. Why they think it's the servers job to do that > with shared resources is something I've never figured out. Anyway, I > realized each atom entry can, with one efficient query, look up the previous > change to the same entity and render a link to it with > rel="predecessor-version". From any entry, you can scroll back through the > previous versions of the underlying entity. > > Ah, you mean a linked list that contains a subset of the same items as in the original list. Take care you don't violate the patent on linked lists: http://www.google.com/patents/about/7028023_Linked_list.html?id=Szh4AAAAEBAJ But yes, I agree that making it possible to browse the history is beneficial. I would suggest you look into the rel="history" I-D here: http://tools.ietf.org/html/draft-snell-atompub-revision-00 It's an old draft (2006), but we use it to provide old versions of documents. We also remove (i.e. unlink) old versions of documents from the main list, so that clients that need to play catchup only see the documents that have changed, and not every single change. -- -mogsie-
> > As I mentioned in the post, in option #3, the URI no longer accurately identifies the resource representation. The same URI now returns multiple resource representations. This is the only example I supplied where this is true. Agreed, I see your point about the uri having different representations based on context. Couple of observations: - What if we have resources categorized into taxonomies, e.g. business units (/us/texas/sales), and these have authorization rules. How does one model this, especially given that taxonomies notoriously change? - This brings up a question on reliance on uri design. Seems like a smell to me when this happens. > > Possibly you assume that the appearance of the three characters "j-o-e" means that the URI _contains_ identity information. This need not be the case at all. The URI could contain any set of characters and I would prefer it as long as that set of characters resulted in a _unique_ URI (not necessarily a _descriptive_ URI). While I did assume that j-o-e in the uri contains identity, I can see it as simply a unique uri in option #1 or #2. > If it's a private resource it's probably not intermediary cacheable anyway, since there will likely be auth involved on the server side. Visibility is going to suffer whatever happens, for me it's safest to stick to HTTP idioms as much as possible because it keeps the complexity 'on the network'. It seems like in addition to visibility, the server side needs to protect the resources regardless of the option we pick. > - Return user-specific response from /my-tasks; mark it as privately cacheable and include Content-Location linking to a specific resource (could make use of the html5 history API to reflect the more specific URI in the browser location bar) > - Return redirect from /my-tasks; mark it as privately cacheable with a longish max-age (this will prevent the subsequent 'round tripping' associated with the redirect method) Interesting ideas, it seems like a good compromise. 
I think all of this brings up the general question of how one models authentication and authorization without relying on security by obscurity to protect "private" resources. What is typically considered good form in these kinds of situations, especially in situations where resources need to rely on open id/oauth providers external to the system in question? Regards, Dilip Krishnan dilip.krishnan@...
Dilip: seems like your comments are shifting from caching and authentication issues to authorization issues. this stuff is outside the boundaries of Fielding's REST model, but still (to me) interesting. i'll focus on those items for now. 1) my preference for unique URIs has nothing to do w/ security (authN or authZ). I prefer unique URIs in order to keep clear separation of resource representations, primarily for improved caching accuracy. Even in anonymous requests, I still strive to use unique URIs and to prevent tunneling of multiple content payloads through a single URI. 2) "What if we have resources categorized into taxonomies for e.g. business units (/us/texas/sales) and these have authorization rules. How does one model this, especially given that taxonomies notoriously change?" If i understand your Q here, you're asking about how servers can map authorization details to content once the request's identity has been established. I usually employ a rather simple authorization model: the URI alone. I usually map URIs to protocol actions (GET, PUT, POST, DELETE, etc.) and apply that "permission" to a user identity (or a group of them). While it is helpful to use URIs that make applying authZ rules easy, that's a server-side convenience that clients need not know about. On the implementation detail side, applying a regexp to a URI works quite well for me when checking authZ details. This works with a wide range of URIs, whether they have easily discernible patterns or not. keeping the authZ de-coupled from the actual content *and* using an algorithm for checking the identifiers (URIs) means future modifications to the URI name space have a limited impact on the overall system. also, while it is possible to invent URI namespaces that defy application of algorithms like regexp in order to perform authZ, i stay away from adopting these perverse cases. 
finally, since URIs are opaque to clients, when i stumble upon a case where the URIs are too difficult to work with (for any number of reasons) i simply change them and make the needed modifications to caching and redirection rules to help existing client apps make the transition. since hypermedia clients do not "code to the URIs" but instead rely on hypermedia affordances within responses, this approach works well. 3) "While I did assume that j-o-e in the uri contains identity, I see this as some form of possibly a unique uri <http://en.wikipedia.org/wiki/Security_through_obscurity> in option #1 or #2." FWIW, there is nothing in "obscurity" that i find useful in this case. my selection of URIs is often de-coupled from shared user identity to protect user privacy, not improve security on the network. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2011 - Aug 18-20 http://restfest.org On Mon, Sep 5, 2011 at 22:38, Dilip Krishnan <dilip.krishnan@...> wrote: > [...]
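mca's regexp-over-URIs authZ check might look like the following sketch; the rule table, group names, and paths are invented for illustration:

```python
import re

# Sketch of mca's URI-based authZ model: map URI patterns to the HTTP
# methods each group may apply. Keeping rules keyed on (pattern, method,
# group) leaves the rules de-coupled from the content behind the URIs.
RULES = [
    (re.compile(r"^/my-tasks/?$"),    {"user":  {"GET"}}),
    (re.compile(r"^/reports(/.*)?$"), {"admin": {"GET", "POST", "DELETE"}}),
]

def authorized(group, method, uri):
    """True if the given group may apply `method` to `uri`."""
    for pattern, grants in RULES:
        if pattern.match(uri):
            return method in grants.get(group, set())
    return False  # no rule matched: deny by default

assert authorized("user", "GET", "/my-tasks")
assert not authorized("user", "DELETE", "/reports/q3")
```

Because the check is an algorithm over identifiers rather than a hard-coded path list, renaming parts of the URI space mostly means updating patterns, not the authorization model.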
Evening all! I'm a new member to this list and new to REST in general. I recently read Roy's dissertation and blogged my interpretation of it. I'm always striving to learn and improve and I would appreciate getting this list's expert feedback on my post. You can find it here: http://kellabyte.com/2011/09/04/clarifying-rest/ Thanks so much in advance! Kelly Sommers Blog: http://kellabyte.com Twitter: http://twitter.com/kellabyte
Welcome Kelly, great to see you here. I enjoyed the post [though I am biased]. Hypermedia is definitely one of the areas where there is low awareness. Your twitter example is very simple but that combined with the atom example illustrates the benefit and I think will help people to take a second glance. Glenn On Mon, Sep 5, 2011 at 5:33 PM, kellsommers <kell.sommers@gmail.com> wrote: > [...]
> seems like your comments are shifting from caching and authentication issues to authorization issues. this stuff is outside the boundaries of Fielding's REST model, but still (to me) interesting. i'll focus on those items for now. Guilty as charged :) … just trying to get at all the problems related to a single uri having multiple representations, and a special case includes authz as well. > If i understand your Q here, you're asking about how servers can map authorization details to content once the request's identity has been established. I usually employ a rather simple authorization model: the URI alone. I usually map URIs to protocol actions (GET, PUT, POST, DELETE, etc.) and apply that "permission" to a user identity (or a group of them). Maybe it calls for a different thread of discussion, but my question wasn't about how the authorization can be implemented; it was more related to uri design. For example we could have sales by region or sales by business unit, i.e. /cpg/us/tx/sales for cpg sales in us tx, or /us/tx/cpg for cpg sales in us tx. So my question was: since the taxonomy is subject to change depending on how the organization wants to use the resources, how does one design resource uris to accommodate the change? I believe these should be hypermedia driven to shield the client from changing taxonomies… but the convenience of human-readable uris is compelling from a pragmatic perspective. > While it is helpful to use URIs that make applying authZ rules easy, that's a server-side convenience that clients need not know about. On the implementation detail side, applying a regexp to a URI works quite well for me when checking authZ details. This works with a wide range of URIs, whether they have easily discernible patterns or not. keeping the authZ de-coupled from the actual content *and* using an algorithm for checking the identifiers (URIs) means future modifications to the URI name space have a limited impact on the overall system. … which is similar to what we do, but I was wondering if there are other ways people are solving this problem. Regards, Dilip Krishnan dilip.krishnan@...
"since the taxonomy is subject to change depending on how the organization wants to use the resources how does one design resource uri to accommodate the change." i am at a loss here. i must confess i spend very little time "designing URIs." second, it's not clear to me how one would "design resource uri to accommodate [the] change." it's also not clear to me if you are approaching URIs from the client perspective (where URIs should be treated as opaque) or from the server perspective (where URIs are identifiers that usually power routing code in order to locate the proper function/content within private server components and data storage). hopefully someone else on the list can contribute to this topic. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2011 - Aug 18-20 http://restfest.org On Tue, Sep 6, 2011 at 01:01, Dilip Krishnan <dilip.krishnan@...> wrote: > [...]
As Mike says: URIs are opaque from the client perspective. So any URI scheme would do. For instance http://hostname/organization?id=1234 (replace 1234 with any ID of your organization resources) What you need are links in your representation. Assume you have a top-organizational unit at the well-known URI http://hostname/organization (no id supplied). At this URI you get HTML describing that particular organizational unit. Included in the HTML you also get links to the related units: <a href="http://hostname/organization?id=5" rel="child">Unit X</a> <a href="http://hostname/organization?id=12" rel="child">Unit Y</a> Now you can let your client browse through your organization chart by following links. You can also include standard HTML forms for searching for units. Here you can have a text input where someone can enter the organizational path to a unit, and get a URI back that points to the exact unit. No need for special URI schemes :-) /Jørn --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote: > > "since the taxonomy is subject to change depending on how the organization > wants to use the resources how does one design resource uri to accommodate > the change. " > > i am at a loss here. i must confess i spend very little time "designing > URIs." second, it's not clear to me how one would "design resource uri to > accommodate [the] change." > > it's also not clear to me if you are approaching URIs from the client > perspective (where URIs should be treated as opaque) or from the server > perspective (where URIs are identifiers that usually power routing code in > order to locate the proper function/content within private server > components and data storage). > > hopefully someone else on the list can contribute to this topic. 
> > mca > http://amundsen.com/blog/ > http://twitter.com@mamund > http://mamund.com/foaf.rdf#me > > > #RESTFest 2011 - Aug 18-20 > http://restfest.org > > > On Tue, Sep 6, 2011 at 01:01, Dilip Krishnan <dilip.krishnan@...>wrote: > > > seems like your comments are shifting from caching and authentication > > issues to authorization issues. this stuff is outside the boundaries of > > Fielding's REST model, but still (to me) interesting. i'll focus on those > > items for now. > > > > > > Guilty as charged :) … just trying to get all the problems related to > > single uri's having multiple representations and a special case includes > > authz as well. > > > > If i understand your Q here, you're asking about how servers can map > > authorization details to content once the request's identify has been > > established. I usually employ a rather simple authorization model: the URI > > alone. I usually map URIs to protocol actions (GET, PUT, POST, DELETE, etc.) > > and apply that "permission" to a user identify (or group of them). > > > > > > May be it calls for a different thread of discussion but my question wasn't > > about how the authorization can be implemented but more related to uri > > design. For example we could have sales by region or sales by business > > unit. > > > > i.e. /cpg/us/tx/sales for cpg sales in us tx > > or /us/tx/cpg for cpg sales in us tx > > > > So my question was, since the taxonomy is subject to change depending on > > how the organization wants to use the resources how does one design resource > > uri to accommodate the change. I believe these should be hypermedia driven > > to shield the client from changing taxonomies… but the convenience and > > pragmatism of human readable uri's is compelling from a pragmatic > > perspective. > > > > While it is helpful to use URIs that make applying authZ rules easy, that's > > a server-side convenience that clients need not know about. 
On the > > implementation detail side, applying a regexp to a URI works quite well for > > me when checking authZ details. This works with a wide range of URIs, > > whether they have easily discernible patterns or not. keeping the authZ > > de-coupled from the actual content *and* using an algorithm for checking the > > identifiers (URIs) means future modifications to the URI name space have a > > limited impact on the overall system. > > > > > > … which is similar to what we do, but I was wondering if there are other > > ways people are solving this problem. > > > > > > Regards, > >> Dilip Krishnan > >> dilip.krishnan@... > >> > >> > > > > >
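Jørn's link-following idea above can be sketched with nothing but the Python standard library: the client never constructs organization URIs, it only extracts the `rel="child"` links the server hands it. The HTML snippet reuses the example URIs from the mail; the class and function names are hypothetical, and a real client would of course GET each page over HTTP:

```python
# Sketch of a client walking an organization chart by following
# rel="child" links, instead of building URIs from a taxonomy.
from html.parser import HTMLParser

class ChildLinkParser(HTMLParser):
    """Collects href values of <a> elements whose rel is 'child'."""
    def __init__(self):
        super().__init__()
        self.children = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and a.get("rel") == "child":
            self.children.append(a["href"])

def child_links(html):
    p = ChildLinkParser()
    p.feed(html)
    return p.children

page = ('<a href="http://hostname/organization?id=5" rel="child">Unit X</a>'
        '<a href="http://hostname/organization?id=12" rel="child">Unit Y</a>'
        '<a href="http://hostname/about" rel="help">About</a>')
print(child_links(page))
```

Because the client keys off the link relation rather than the URI structure, the server can reshuffle its taxonomy (and its URIs) without breaking anything.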
Hi Kelly, good to have you here. On Sep 6, 2011, at 2:33 AM, kellsommers wrote: > Evening all! > > I'm a new member to this list and new to REST in general. I recently read Roy's dissertation and blogged my interpretation of it. I saw that yesterday - looks like you picked a good starting point and had people pointing you in the right directions and to the right resources. Regarding the Twitter example: the important thing that is missing from the Twitter API is the definition and use of a media type that would make the message self-describing. Without that, the client needs to rely on out-of-band knowledge about the format that Twitter will send[1]. Effectively, you need to bake this Twitter-specific knowledge into the client-side code. That's the coupling the hypermedia constraint aims to remove. > > I'm always striving to learn and improve and I would appreciate getting this list's expert feedback on my post. > You might find the document at [1] helpful when comparing the RESTfulness of APIs. Jan [1] http://www.nordsc.com/ext/classification_of_http_based_apis.html#http-type-one > You can find it here: > http://kellabyte.com/2011/09/04/clarifying-rest/ > > Thanks so much in advance! > Kelly Sommers > > Blog: http://kellabyte.com > Twitter: http://twitter.com/kellabyte > >
Hi, In [1] they quote "For idempotent requests having large amounts of input data (more than 4 KB in most current implementations) it is not possible to encode such data in the resource URI, as the server will reject such “malformed” URIs". The paper dates from 2008 and I was wondering: is this still the URI length restriction when issuing a GET request? The URI spec. does not impose a restriction but servers do - is 4 KB the worst-case restriction? In [2], the limit is 8 KB... Thanks, Sean. [1] C. Pautasso, O. Zimmermann, F. Leymann, “RESTful Web Services vs. ‘Big’ Web Services: Making the Right Architectural Decision”, in Proceedings of the 17th World Wide Web Conference, pp. 805-814, 2008. [2] S. Ruby and L. Richardson, “RESTful Web Services”, O'Reilly 2007
Hi Sean, On Sep 6, 2011, at 11:44 AM, Sean Kennedy wrote: > > Hi, > In [1] they quote "For idempotent requests having large amounts of input data (more than 4 KB in most current implementations) it is not possible to encode such data in the resource URI, as the server > will reject such “malformed” URIs". The paper dates from 2008 and I was wondering is this still the URI length restriction when issuing a GET request? The URI spec. does not impose a restriction but servers do - is 4kb the worst case restriction? In [2], the limit is 8Kb... Spec-wise there is no limit. But some software out there will only tolerate a certain length. Personally, I think that when you hit the 1k boundary with your URI length, you should adjust your strategy and do something differently. Usually this means to mint new resources with specific (domain significant) semantics to substitute specific query parameter combinations. E.g. /foo/bar/stock?kind=car&make=golf&model=GL&fromDate=1980&toDate=1990&status=used becomes /foo/bar/stock/used_1980ies_GolfGL Focussing on domain specific concepts also has the advantage of making them explicit instead of hiding them in arbitrary query dimensions. Jan > > Thanks, > Sean. > > [1] C. Pautasso, O. Zimmermann, F. Leymann, “RESTful Web Services vs. ‘Big’ Web Services: Making the Right Architectural Decision” in Proceedings of the 17th World Wide Web Conference, pp 805-814, 2008. > [2] S. Ruby and L. Richardson, “RESTful Web Services”, O’Reilly 2007 > >
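Jan's suggestion of minting named resources for recurring query-parameter combinations can be sketched as a server-side lookup keyed on a canonical form of the query. The mapping table, the canonicalization step, and the function name are illustrative assumptions, not an established pattern from any framework; the example URI is the one from Jan's mail:

```python
# Sketch: give frequent query combinations their own short, named URI.
from urllib.parse import parse_qsl, urlencode

# Known combinations, keyed by a canonical (sorted) form of the query.
MINTED = {
    (("fromDate", "1980"), ("kind", "car"), ("make", "golf"),
     ("model", "GL"), ("status", "used"), ("toDate", "1990")):
        "/foo/bar/stock/used_1980ies_GolfGL",
}

def mint(path, query):
    """Return the minted path for a known query combination,
    or fall back to the plain (sorted) query-string URI."""
    pairs = sorted(parse_qsl(query))
    return MINTED.get(tuple(pairs), path + "?" + urlencode(pairs))

print(mint("/foo/bar/stock",
           "kind=car&make=golf&model=GL&fromDate=1980&toDate=1990&status=used"))
```

The sorted-pairs key also means that query strings differing only in parameter order map to the same minted resource, which helps cache hit rates as a side effect.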
On Sep 6, 2011, at 12:29 PM, Sean Kennedy wrote:
> Hi Jan,
> Thanks for that. I agree with what you have said. My research is around mapping SOAP WS to RESTful HTTP format. As you can imagine there are POSTs in SOAP that can map to RESTful GETs (e.g. getCustomerDetails(custNo) mapping to /customers/{custId}). Hence my interest in the URI limit... From what you are saying, intelligent use of URI structure to convey information is (far) better than using long lists of query parameters.
Right. Any refactoring from a SOAP API to a REST API necessarily involves changing the API's overall conceptual approach. (Unless the SOAP API is already pretty close to HTTP's uniform interface, e.g. orderService.getOrder(123), orderService.getNewOrders(), orderService.submit(order), etc.)
Jan
>
> /Sean.
>
> From: Jan Algermissen <algermissen1971@...>
> To: Sean Kennedy <seandkennedy@yahoo.co.uk>
> Cc: Rest Discussion Group <rest-discuss@yahoogroups.com>
> Sent: Tuesday, 6 September 2011, 10:56
> Subject: Re: [rest-discuss] server URI length limit
>
> Hi Sean,
>
> On Sep 6, 2011, at 11:44 AM, Sean Kennedy wrote:
>
> >
> > Hi,
> > In [1] they quote "For idempotent requests having large amounts of input data (more than 4 KB in most current implementations) it is not possible to encode such data in the resource URI, as the server
> > will reject such “malformed” URIs". The paper dates from 2008 and I was wondering is this still the URI length restriction when issuing a GET request? The URI spec. does not impose a restriction but servers do - is 4kb the worst case restriction? In [2], the limit is 8Kb...
>
>
> Spec-wise there is no limit. But some software out there will only tolerate a certain length.
>
> Personally, I think that when you hit the 1k boundary with your URI length, you should adjust your strategy and do something differently.
>
> Usually this means to mint new resources with specific (domain significant) semantics to substitute specific query parameter combinations.
>
> E.g. /foo/bar/stock?kind=car&make=golf&model=GL&fromDate=1980&toDate=1990&status=used
>
> becomes
>
> /foo/bar/stock/used_1980ies_GolfGL
>
> Focussing on domain specific concepts also has the advantage of making them explicit instead of hiding them in arbitrary query dimensions.
>
> Jan
>
>
> >
> > Thanks,
> > Sean.
> >
> > [1] C. Pautasso, O. Zimmermann, F. Leymann, “RESTful Web Services vs. ‘Big’ Web Services: Making the Right Architectural Decision” in Proceedings of the 17th World Wide Web Conference, pp 805-814, 2008.
> > [2] S. Ruby and L. Richardson, “RESTful Web Services”, O’Reilly 2007
> >
> >
>
>
>
On 2011-09-06 11:44, Sean Kennedy wrote: > Hi, > In [1] they quote "For idempotent requests having large amounts of input > data (more than 4 KB in most current implementations) it is not possible > to encode such data in the resource URI, as the server > will reject such “malformed” URIs". The paper dates from 2008 and I was > wondering is this still the URI length restriction when issuing a GET > request? The URI spec. does not impose a restriction but servers do - is > 4kb the worst case restriction? In [2], the limit is 8Kb... > ... <http://greenbytes.de/tech/webdav/draft-ietf-httpbis-p1-messaging-latest.html#rfc.section.3.1.1.2>: "Various ad-hoc limitations on request-target length are found in practice. It is RECOMMENDED that all HTTP senders and recipients support request-target lengths of 8000 or more octets." Best regards, Julian
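A client-side guard built on the 8000-octet recommendation Julian quotes might look like the sketch below. The fall-back-to-POST policy is our own assumption (the httpbis draft only recommends a minimum supported length, it does not prescribe what clients should do beyond it):

```python
# Sketch: pick a request strategy based on the request-target length.
# 8000 octets is the interoperability floor recommended by httpbis.
LIMIT = 8000

def choose_method(request_target):
    """GET for short request-targets; POST (query in the body) beyond
    the length that all recipients are recommended to support."""
    if len(request_target.encode("utf-8")) <= LIMIT:
        return "GET"
    return "POST"

print(choose_method("/foo/bar/stock?status=used"))
print(choose_method("/search?q=" + "x" * 9000))
```

Note the check counts octets of the encoded target, not characters, since non-ASCII characters percent-encode to several octets each.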
As I re-read the example I came up with for authz, it was a poor example to justify that it might be useful to design URIs on the server side :) I didn't really have a question per se, but am curious how folks solve the authz problem RESTfully. You identified an approach that's similar to what we do, but I was wondering if there are other ways of doing it. > > On Tue, Sep 6, 2011 at 01:01, Dilip Krishnan <dilip.krishnan@...> wrote: >> seems like your comments are shifting from caching and authentication issues to authorization issues. this stuff is outside the boundaries of Fielding's REST model, but still (to me) interesting. i'll focus on those items for now. > > Guilty as charged :) … just trying to get all the problems related to single uri's having multiple representations and a special case includes authz as well. > >> If i understand your Q here, you're asking about how servers can map authorization details to content once the request's identify has been established. I usually employ a rather simple authorization model: the URI alone. I usually map URIs to protocol actions (GET, PUT, POST, DELETE, etc.) and apply that "permission" to a user identify (or group of them). > > May be it calls for a different thread of discussion but my question wasn't about how the authorization can be implemented but more related to uri design. For example we could have sales by region or sales by business unit. > > i.e. /cpg/us/tx/sales for cpg sales in us tx > or /us/tx/cpg for cpg sales in us tx > > So my question was, since the taxonomy is subject to change depending on how the organization wants to use the resources how does one design resource uri to accommodate the change. I believe these should be hypermedia driven to shield the client from changing taxonomies… but the convenience and pragmatism of human readable uri's is compelling from a pragmatic perspective. 
> >> While it is helpful to use URIs that make applying authZ rules easy, that's a server-side convenience that clients need not know about. On the implementation detail side, applying a regexp to a URI works quite well for me when checking authZ details. This works with a wide range of URIs, whether they have easily discernible patterns or not. keeping the authZ de-coupled from the actual content *and* using an algorithm for checking the identifiers (URIs) means future modifications to the URI name space have a limited impact on the overall system. > > … which is similar to what we do, but I was wondering if there are other ways people are solving this problem. >> >> Regards, >> Dilip Krishnan >> dilip.krishnan@... >> >> > >
"I didn't really have a question per se, but am curious how folks solve the authz problem RESTfully. You identified an approach that's similar to what we do, but I was wondering if there are other ways of doing it." understood. i, too, would like to hear from others on how they handle the implementation details of authZ in dist net apps. in particular i'd like to hear from anyone who uses external services for either authN and/or authZ and how that affects other parts of the implementation (does it force changes in resource modeling on the server? client-side implementation details related to third-party auth, etc.) mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me #RESTFest 2011 - Aug 18-20 http://restfest.org On Tue, Sep 6, 2011 at 09:11, Dilip Krishnan <dilip.krishnan@...> wrote: > As I re-read the example I came up with for authz; it was a poor example to > justify that it might be useful to design uris on the server side :) > > I didn't really have a question per se, but am curious how folks solve the authz > problem RESTfully. You identified an approach that's similar to what we do, > but I was wondering if there are other ways of doing it. > > > > On Tue, Sep 6, 2011 at 01:01, Dilip Krishnan <dilip.krishnan@...> wrote: > >> seems like your comments are shifting from caching and authentication >> issues to authorization issues. this stuff is outside the boundaries of >> Fielding's REST model, but still (to me) interesting. i'll focus on those >> items for now. >> >> >> Guilty as charged :) … just trying to get all the problems related to >> single uri's having multiple representations and a special case includes >> authz as well. >> >> If i understand your Q here, you're asking about how servers can map >> authorization details to content once the request's identify has been >> established. I usually employ a rather simple authorization model: the URI >> alone. I usually map URIs to protocol actions (GET, PUT, POST, DELETE, etc.) 
>> and apply that "permission" to a user identify (or group of them). >> >> >> May be it calls for a different thread of discussion but my question >> wasn't about how the authorization can be implemented but more related to >> uri design. For example we could have sales by region or sales by business >> unit. >> >> i.e. /cpg/us/tx/sales for cpg sales in us tx >> or /us/tx/cpg for cpg sales in us tx >> >> So my question was, since the taxonomy is subject to change depending on >> how the organization wants to use the resources how does one design resource >> uri to accommodate the change. I believe these should be hypermedia driven >> to shield the client from changing taxonomies… but the convenience and >> pragmatism of human readable uri's is compelling from a pragmatic >> perspective. >> >> While it is helpful to use URIs that make applying authZ rules easy, >> that's a server-side convenience that clients need not know about. On the >> implementation detail side, applying a regexp to a URI works quite well for >> me when checking authZ details. This works with a wide range of URIs, >> whether they have easily discernible patterns or not. keeping the authZ >> de-coupled from the actual content *and* using an algorithm for checking the >> identifiers (URIs) means future modifications to the URI name space have a >> limited impact on the overall system. >> >> >> … which is similar to what we do, but I was wondering if there are other >> ways people are solving this problem. >> >> >> Regards, >>> Dilip Krishnan >>> dilip.krishnan@... >>> >>> >> >> > >
Hi Jan, On 6 September 2011 08:03, Jan Algermissen <algermissen1971@...> wrote: > You might find the document at [1] helpful when comparing RESTfulness of APIs. > > Jan > > [1] http://www.nordsc.com/ext/classification_of_http_based_apis.html#http-type-one This document is great! I'm currently working on an API that is roughly HTTP-based Type 1 that I'm trying to make more RESTful if I can, which is difficult due to HTTP-based Type 1 APIs being the easiest to design up front and easier to document and use (at least initially). It would be excellent to have some concrete examples of where the benefits of RESTful API evolution were realised and how RESTful clients coped with the changes - do you know if any exist? Good work! Cheers, Jim
Thanks for the discussion and reassurance. I'm hoping to press on with URIs as input references. For clarification, the URIs in question aren't built in clients - they will have been returned in previous responses, often in collection resources or search results. Cheers, Jim
We're hoping to solve this bootstrapping problem by returning initial URIs for "my stuff" during authentication. So, you get "authenticated-as": "http://someservice.com/users/42/" back when authenticating and then the "/users/42/" resource is just a normal RESTful resource connected to everything else it's related to via hyperlinks. Hopefully this will be the only place in the API where responses vary based on who's asking and everything else will be a web of RESTful resources that are the same regardless of who's asking and so highly cacheable. Cheers, Jim
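Jim's bootstrapping step can be sketched as follows. The "authenticated-as" field and the URI come from the example in his mail, but the JSON response shape around them is an assumption for illustration:

```python
# Sketch: the only personalized response in the API is the
# authentication step, which hands back an entry-point URI; from there
# the client proceeds by ordinary link following.
import json

def entry_point(auth_response_body):
    """Extract the caller's entry-point URI from the auth response."""
    return json.loads(auth_response_body)["authenticated-as"]

body = '{"authenticated-as": "http://someservice.com/users/42/"}'
print(entry_point(body))
```

Everything reachable from that URI can then be served identically to all users, which is what makes the rest of the resource web cacheable.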
On Sep 6, 2011, at 3:21 PM, Jim Purbrick wrote: > Hi Jan, > > On 6 September 2011 08:03, Jan Algermissen <algermissen1971@...> wrote: > > You might find the document at [1] helpful when comparing RESTfulness of APIs. > > > > Jan > > > > [1] http://www.nordsc.com/ext/classification_of_http_based_apis.html#http-type-one > > This document is great! Thanks! > I'm currently working on an API that is > roughly HTTP-based Type 1 that I'm trying to make more RESTful if I > can, which is difficult due to HTTP-based Type 1 APIs being the easiest > to design up front and easier to document and use (at least > initially). > > It would be excellent to have some concrete examples of where the > benefits of RESTful API evolution were realised and how RESTful > clients coped with the changes - do you know if any exist? > Unfortunately, no. My experience is that nobody does REST in the sense that it can show the evolvability benefits (in my Enterprise-IT biased context this primarily means eliminating the need to sit in endless interface design workshops or for server-side devs to constantly wonder who and how seriously depends on the interface they are about to change). It is such a radical change not only to the architectural model but also to the software development process model that I suspect we'll need to be patient for another couple of years. But fortunately the number of people that are working on it is constantly growing. Jan > Good work! > > Cheers, > > Jim >
On Tue, Sep 6, 2011 at 6:35 AM, Jim Purbrick <jimpurbrick@...> wrote: > > We're hoping to solve this bootstrapping problem by returning initial URIs > for "my stuff" during authentication. > > So, you get "authenticated-as": "http://someservice.com/users/42/" back > when authenticating and then the "/users/42/" resource is just a normal > RESTful resource connected to everything else it's related to via > hyperlinks. > > Hopefully this will be the only place in the API where responses vary based > on who's asking and everything else will be a web of RESTful resources that > are the same regardless of who's asking and so highly cacheable. > > Cheers, > > Jim > > Authorization and resource discovery are two separate concerns. Because REST interactions are supposed to be stateless, you'll need a mechanism that validates *every* request, not just the "first" one. Lots of services I've seen use HTTP Basic for that (and run across SSL to avoid the password being visible to snoopers). Other options include an API key that has to be included in every request (although this is often used just to grant permission to use the service, not identify a particular user), or more involved authentication strategies like OpenID or OAuth. You use OAuth, for example, to interact with the APIs of Facebook, SalesForce.com, and LinkedIn. Your "return the http://someservice.com/users/42 resource" would be the right answer if the caller requested that URI, but not if they requested, say, "http://someservice.com/customers/123". In the latter case, they should get what they asked for (if authorized and allowed to see it), a 403 (if authorized and not allowed to see it), or a 401 (if not authorized). A strategy I like for resource discovery is to have the very top resource in the URI space (http://someservice.com/) serve that purpose. 
I have found this to be the simplest to explain to potential client developers, and it makes intuitive sense that this is the "front door" (so to speak) to the entire service. SalesForce in particular employs a variant of this strategy that is also helpful for long term use -- part of their discovery resource is the supported versions of the API itself (with possibly different URIs for each), so a client can program against a particular version of the API without knowing ahead of time what the version's base URI will be, but knowing that they can find this out from the discovery resource (and cache it for some reasonable amount of time). Craig McClanahan > >
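Craig's "front door" discovery resource can be sketched like this: the root resource lists API versions with their base URIs, and the client looks its version up at run time instead of hard-coding a base URI. The document shape and the URIs here are invented for illustration (SalesForce's actual format is not reproduced):

```python
# Sketch: version discovery via the top resource of the URI space.
import json

DISCOVERY = """
{"versions": [
  {"version": "1.0", "url": "http://someservice.com/v1/"},
  {"version": "2.0", "url": "http://someservice.com/v2/"}
]}
"""

def base_uri(discovery_body, wanted):
    """Find the base URI for a given API version, or None if absent."""
    for v in json.loads(discovery_body)["versions"]:
        if v["version"] == wanted:
            return v["url"]
    return None

print(base_uri(DISCOVERY, "2.0"))
```

A client would fetch the discovery document once, cache it for a reasonable time, and re-fetch rather than fail hard if a cached base URI stops working.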
On Tue, Sep 6, 2011 at 9:35 AM, Jan Algermissen <algermissen1971@...> wrote: > > On Sep 6, 2011, at 3:21 PM, Jim Purbrick wrote: > > > Hi Jan, > > > > On 6 September 2011 08:03, Jan Algermissen <algermissen1971@...> > wrote: > > > You might find the document at [1] helpful when comparing RESTfulness > of APIs. > > > > > > Jan > > > > > > [1] > http://www.nordsc.com/ext/classification_of_http_based_apis.html#http-type-one > > > > This document is great! > > Thanks! > > > I'm currently working on an API that is > > roughly HTTP-based Type 1 that I'm trying to make more RESTful if I > > can, which is difficult due to HTTP-based Type 1 APIs being the easiest > > to design up front and easier to document and use (at least > > initially). > > > > It would be excellent to have some concrete examples of where the > > benefits of RESTful API evolution were realised and how RESTful > > clients coped with the changes - do you know if any exist? > > > > Unfortunately, no. My experience is that nobody does REST in the sense that > it can show the evolvability benefits (in my Enterprise-IT biased context > this primarily means eliminating the need to sit in endless interface design > workshops or for server-side devs to constantly wonder who and how seriously > depends on the interface they are about to change). > > It is such a radical change not only to the architectural model but also to > the software development process model that I suspect we'll need to be > patient for another couple of years. But fortunately the number of people that > are working on it is constantly growing. > > In a time period shorter than geologic time :-), I've seen an evolution advantage of REST that is more difficult to achieve in a SOAP-based world. As a REST API evolves, two kinds of changes are extremely common: * Adding new properties in a resource representation. * Adding new types of resources (with new URIs), and then adding the corresponding "link" cross references to existing representations. 
If your resource representations are not constrained to a strict schema, you can add these sorts of things without breaking existing clients, and without changing a version number of the API. A client that doesn't understand the new property or link names simply ignores them, and a client that is updated to become aware of them can use them. In a SOAP API, you are typically much more constrained due to the "resource" formats, and the available method calls, being constrained by a WSDL definition of the service that the client is aware of and depends on (often, for example, client library mappings to the API are generated from it). That means you really need a new version of the WSDL (with a new version number) for each change, with corresponding ripple effects on all the clients. Craig McClanahan > Jan > > > > Good work! > > > > Cheers, > > > > Jim > > > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
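The tolerant-reader behaviour Craig describes can be sketched as a client that keeps only the properties it understands, so new fields and links added by the server pass through harmlessly. The field names and payloads are hypothetical:

```python
# Sketch: a client that ignores unknown representation members, so the
# server can add properties and links without breaking it.
import json

KNOWN_FIELDS = {"id", "name"}

def read_order(body):
    """Keep only the fields this client version understands."""
    data = json.loads(body)
    return {k: v for k, v in data.items() if k in KNOWN_FIELDS}

old = '{"id": 1, "name": "widget"}'
new = '{"id": 1, "name": "widget", "discount": 0.1, "link": "/orders/1/items"}'
print(read_order(new))
```

An updated client that learns about "discount" or "link" just grows its set of known fields; no version bump of the representation is needed for either side.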
Hi, I'm new to REST and have a quick query: most of the literature I've read about RESTful design warns against using any sort of sessions. I've always used sessions to keep track of whether a user is logged in to an application. What's the best way of achieving this without using a session? Many thanks jim
On Sep 8, 2011, at 4:34 PM, jim.margetts wrote: > Hi, > > I'm new to REST and have a quick query: > > Most of the literature i've read about RESTful design warns against using any sort of sessions. I've always used sessions to keep a track of whether is user is logged in to an application. What's the best way of achieving this without using a session... > HTTP has built in authentication. See for example: http://www.ietf.org/id/draft-ietf-httpbis-p7-auth-16.txt http://stackoverflow.com/questions/tagged/restful-authentication Jan > Many thanks > > jim > >
You need to consider some form of stateless authentication. Here are a few options to consider: - HTTP basic authentication is simplest but requires the user to send credentials essentially unencrypted (they are only Base64-encoded). This is typically mitigated by switching to SSL and using HTTPS instead of plain HTTP, which is however still vulnerable to man-in-the-middle attacks. The server is not required to store client credentials in plain text, which means client credentials are safe even in case of accidental server data leaks. - HTTP digest authentication is much more robust and safe even over plain HTTP, but requires implementing more complex algorithms. It also doubles the number of requests - each client request needs to be sent twice (see the protocol for an explanation why). Again, the server is not required to store client credentials in plain text, which means client credentials are safe even in case of accidental server data leaks. - WSSE is somewhere in between basic and digest. It can be used over plain HTTP and it does not require 2 HTTP requests for every client request, but it does require the server to store user credentials in plain text (unlike the two protocols above), which makes the client credentials potentially vulnerable in case of a successful attack on the server. - you may also want to look at OAuth, which is becoming quite popular and can be used for implementing more advanced authentication and authorization scenarios. HTH, Marek On 09/08/2011 04:34 PM, jim.margetts wrote: > > > Hi, > > I'm new to REST and have a quick query: > > Most of the literature i've read about RESTful design warns against using any sort of sessions. I've always used > sessions to keep a track of whether is user is logged in to an application. What's the best way of achieving this > without using a session... > > Many thanks > > jim > >
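The first of Marek's options, HTTP Basic, fits in a few lines because every request simply carries the same header. Note the credential is only Base64-encoded, not encrypted, which is exactly why the advice above is to run it over TLS; the user/password values are placeholders:

```python
# Sketch: building the per-request HTTP Basic Authorization header.
import base64

def basic_auth_header(user, password):
    """RFC-style Basic credentials: base64 of 'user:password'."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return "Authorization: Basic " + token

print(basic_auth_header("user", "pass"))
```

Because the header is recomputed (or reused) on every request, no server-side session is needed, which keeps the interaction stateless as REST requires.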
URI templates decouple you from fixed identifiers, and couple you to a uri template definition (of which, so far, none has made it to standard status anywhere). The alternative is to use links and forms with server-generated URIs, which couples you to the hypermedia controls in your media type definition. So at the end of the day, it just shifts the coupling somewhere else. I'm not a big fan of templates myself, but I just built a system based on OpenSearch where they're used, and rather successfully, so it's always a question of balance. ________________________________________ From: rest-discuss@yahoogroups.com [rest-discuss@yahoogroups.com] on behalf of Jakob Strauch [jakob.strauch@...] Sent: 28 August 2011 12:53 To: rest-discuss@yahoogroups.com Subject: [rest-discuss] Re: Are URI-Templates really coupling clients and server? Thanks! I think you pointed out some clear arguments for something i had only a gut feeling about. If i understand this correctly, i could link to an opensearch-compatible resource with a URI Template based on this specification AND the rel="search" attribute specified by [1]? Furthermore, I'm playing around with the HAL specification [2]. I'm asking myself if i can provide a URI template instead of a URI. Based on your comment, that URI templates are valid URIs and a link's relation semantics are defined by a specification, i think i can. This would allow linking (highly dynamic) resource collections with a single URI Template without tight coupling. As long as the parameters are part of the specification. [1] http://www.iana.org/assignments/link-relations/link-relations.xml [2] http://stateless.co/hal_specification.html --- In rest-discuss@...m, Jan Algermissen <algermissen1971@...> wrote: > > > On Aug 28, 2011, at 11:00 AM, Jakob Strauch wrote: > > > In different resources on the web, i found different opinions about URI Templating. Some say, they are coupling clients and server (e.g. Erik Wilde). Some propose their usage (e.g. subbu). 
> > > > I don't see any coupling - it is not said that a URI Template cannot change over time. As long as the server communicates the templates - like URIs - in resource representations, where is the problem? > > > > Modern Web browsers support templated URIs, e.g. for search engines. By typing a keyword, e.g. wiki, and a search expression my browser is referring to a specific search page of wikipedia. > > > > Do i miss something here? Some opinions about this issue? > > Hi Jakob, > > URI templates are fine, as long as the specification of the parameters is 'global' (not just defined by the server for the sake of its own API). > > My preference is to define the parameters as part of the specification of the link relation that is used with the template. Such a link relation specification must include two things: > > 1) It must specify that the URI references used are to be interpreted as templates (because a template URI is a valid URI and you cannot tell the difference if you do not know up front that you are given a template URI reference). > > 2) What parameters the client might encounter and what they mean. > > An example of this is the opensearch specification[1]. > > Jan > > > [1] http://www.opensearch.org/Specifications/OpenSearch/1.1#OpenSearch_1.1_parameters
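Expanding an OpenSearch-style template can be sketched with the standard library alone. This handles only plain {name} substitution with percent-encoding, not the full operator set of the (then draft) URI Template specification; the template and parameter names are illustrative:

```python
# Sketch: simple {name}-style URI template expansion, as used by
# OpenSearch (e.g. {searchTerms}), with percent-encoded values.
import re
from urllib.parse import quote

def expand(template, values):
    """Replace each {name} in the template with the encoded value."""
    return re.sub(r"\{(\w+)\}",
                  lambda m: quote(str(values[m.group(1)]), safe=""),
                  template)

print(expand("http://example.com/search?q={searchTerms}&page={startPage}",
             {"searchTerms": "REST uri templates", "startPage": 1}))
```

The client still only needs to know the parameter semantics defined by the link relation (here, OpenSearch's), not the URI structure itself, which is the decoupling argued for above.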
On Sep 11, 2011, at 1:51 PM, Sebastien Lambla wrote: > URI templates decouple you from fixed identifiers, and couple you to a uri template definition (of which, so far, none has made it to standard status anywhere). > > The alternative is to use links and forms with server-generated URIs, Right. A URI template is equivalent to an HTML GET form - it just looks more 'elegant'. Jan
HTML GET is a form of URI template, and I don't necessarily like those either. The one useful property they have is that you get cache hits locally without crossing the network, which you don't get with a POST-redirect-GET (PRG). That's a local optimization; depending on your scenario (and search is a very good scenario), it can be useful.

-----Original Message-----
From: Jan Algermissen [mailto:algermissen1971@...]
Sent: 11 September 2011 13:53
To: Sebastien Lambla
Cc: Jakob Strauch; rest-discuss@yahoogroups.com
Subject: Re: [rest-discuss] Re: Are URI-Templates really coupling clients and server?

> A URI template is equivalent to an HTML GET form - it just looks more 'elegant'.
Le 11 sept. 2011 à 14:53, Jan Algermissen a écrit :
> A URI template is equivalent to an HTML GET form - it just looks more 'elegant'.

Hi,

In the context of this discussion, I don't see URI templates and HTML forms as equivalent, because URI templates allow for a much higher degree of URI evolvability (i.e., structural change over time without breaking the client).

Philippe
On Sep 11, 2011, at 4:52 PM, Philippe Mougin wrote:
> In the context of this discussion, I don't see URI templates and HTML forms as equivalent because URI templates allow for a much higher degree of URI evolvability (i.e., structural change over time without breaking the client).

Can you provide an example of an evolution that can be done with templates but cannot be done with GET forms?

Jan
From a functional point of view, querystrings are a form of URI template. URI templates as used in things like OpenSearch have more features, but functionally they are exactly the same thing.

-----Original Message-----
From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Philippe Mougin
Sent: 11 September 2011 15:52
To: rest-discuss@yahoogroups.com
Subject: Re: [rest-discuss] Re: Are URI-Templates really coupling clients and server?

> In the context of this discussion, I don't see URI templates and HTML forms as equivalent because URI templates allow for a much higher degree of URI evolvability (i.e., structural change over time without breaking the client).
Le 11 sept. 2011 à 16:59, Jan Algermissen a écrit :
> Can you provide an example of an evolution that can be done with templates but cannot be done with GET forms?

http://example.com/pets?name=princess ---> http://example.com/pets/princess

Philippe
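Philippe's example is easy to exercise in code. The sketch below (plain Python; a deliberately naive {name} substitution stands in for full RFC 6570 expansion) shows that the client-side logic is identical whichever of the two URI structures the server advertises - only the server-supplied template string changes:

```python
import urllib.parse

def expand(template: str, variables: dict) -> str:
    """Naively expand {name} placeholders (a tiny subset of RFC 6570)."""
    uri = template
    for name, value in variables.items():
        uri = uri.replace("{" + name + "}", urllib.parse.quote(str(value), safe=""))
    return uri

# The client code is identical for both URI structures; only the
# template the server hands out differs.
old_template = "http://example.com/pets?name={name}"
new_template = "http://example.com/pets/{name}"

print(expand(old_template, {"name": "princess"}))
# ... and later, after the server changes its URI layout:
print(expand(new_template, {"name": "princess"}))
```

An HTML GET form, by contrast, can only ever produce the `?name=value` shape, which is the limit Philippe is pointing at.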
Le 11 sept. 2011 à 17:06, Sebastien Lambla a écrit :
> From a functional point of view, querystrings are a form of templates. URI templates as used in things like opensearch have more features, but functionally they are exactly the same thing.

But URI templates can describe a larger range of URIs than HTML forms can. This higher expressive power is relevant to the current discussion, which is about coupling between clients and servers.

Philippe
I'm not sure this is relevant in any way. URIs stay opaque in the sense that, from the client's perspective, building /hello?mykey=myvalue or /hello/{myValue} still results in an opaque identifier, so neither provides better or worse coupling between client and server. It limits the choices the server has in minting certain types of URIs, but that's a server concern, not a client-server one.
-----Original Message-----
From: Philippe Mougin [mailto:pmougin@...]
Sent: 11 September 2011 16:58
To: rest-discuss@yahoogroups.com
Cc: Sebastien Lambla
Subject: Re: [rest-discuss] Re: Are URI-Templates really coupling clients and server?
Le 11 sept. 2011 à 17:59, Sebastien Lambla a écrit :
> I'm not sure this is relevant in any way. URIs stay opaque in the sense that, from the client perspective building /hello?mykey=myvalue or /hello/{myValue} still results in an opaque identifier, so neither provide better or worse coupling between client and server. It limits the choices the server has in minting certain type of URIs, but that's a server concern, not a client-server one.
Imagine a server that uses a URI template as a hypertext control dynamically communicated to the client. The day the server decides to change its URI structure from /hello?mykey=myvalue to /hello/{myValue}, it can do so without breaking the client, because the change can be dynamically communicated to the client in the URI template.
Alternatively, the server can avoid breaking the client by not changing the structure of its URIs (as you noted). That may or may not be acceptable, depending on the specific context.
But that whole idea of having servers dynamically instruct clients how to build URIs... isn't it, at least in part, about allowing things on the server to change without breaking clients? As time passes and the server-side implementation evolves, what was once an apt URI structure might no longer be.
Philippe
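Philippe's scenario maps directly onto HAL, which Jakob mentioned earlier in the thread: HAL marks a link as a template with a "templated" flag. The sketch below (Python; the 'find' relation name and the URIs are invented for illustration, and the naive {name} substitution stands in for full RFC 6570 expansion) shows a client that resolves links by relation name and is indifferent to the server switching URI structures between responses:

```python
import urllib.parse

def resolve_link(representation: dict, rel: str, variables: dict) -> str:
    """Pick a link by relation name; expand it only if flagged as templated."""
    link = representation["_links"][rel]
    href = link["href"]
    if link.get("templated"):
        for name, value in variables.items():
            href = href.replace("{" + name + "}", urllib.parse.quote(str(value), safe=""))
    return href

# Day 1: the server advertises a query-style template ...
day1 = {"_links": {"find": {"href": "/hello?mykey={myValue}", "templated": True}}}
# Day 2: ... and later a path-style one. The client code is unchanged.
day2 = {"_links": {"find": {"href": "/hello/{myValue}", "templated": True}}}

print(resolve_link(day1, "find", {"myValue": "world"}))
print(resolve_link(day2, "find", {"myValue": "world"}))
```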
You limit server choices in URI assignment, not client-server coupling: the coupling is still to some URI-building language (querystrings or URI templates), and in either case the client still considers the generated identifier opaque, so its level of coupling is the same.
-----Original Message-----
From: Philippe Mougin [mailto:pmougin@...]
Sent: 11 September 2011 18:01
To: rest-discuss@yahoogroups.com
Cc: Sebastien Lambla
Subject: Re: [rest-discuss] Re: Are URI-Templates really coupling clients and server?
Le 11 sept. 2011 à 19:12, Sebastien Lambla a écrit :
> You limit server choices in URI assignment, not client-server coupling: the coupling is still on some URI building language (querystring and URI templates), and in either case the client still consider the generated identifier as opaque, its level of coupling is the same.
Sure, but imagine a context where, for some reason, you don't want to limit server choice in URI assignment. Take that as a requirement for a minute. In such a context, where /hello?mykey=myvalue can later evolve to /hello/{myValue}, URI templates allow for better evolvability (i.e., decoupling) than HTML forms, because at some point your HTML forms won't be able to communicate the new URI structure; you'll then have to switch to something else and break clients on that occasion. Don't they?
Philippe
I have not made any statement about the evolvability of the server, only about the coupling between client and server, and the coupling to unfinished and non-modular specifications in media type definitions.
-----Original Message-----
From: Philippe Mougin [mailto:pmougin@...]
Sent: 11 September 2011 18:41
To: rest-discuss@yahoogroups.com
Cc: Sebastien Lambla
Subject: Re: [rest-discuss] Re: Are URI-Templates really coupling clients and server?
On Sep 11, 2011, at 5:59 PM, Sebastien Lambla wrote:
> I'm not sure this is relevant in any way. URIs stay opaque in the sense that, from the client perspective building /hello?mykey=myvalue or /hello/{myValue} still results in an opaque identifier, so neither provide better or worse coupling between client and server. It limits the choices the server has in minting certain type of URIs, but that's a server concern, not a client-server one.
Yep. +1
Jan
[I've been redirected here from www-talk]

Hi everyone,

I'm searching for a correct solution (one that doesn't violate the HTTP protocol and causes the least possible confusion to clients) to this problem:

I POST a structure with a task description to a resource (say /res1). /res1 triggers creation of a resource, which can take a long time and can potentially fail in the end, due to problems in the task description (user fault) that cannot be discovered upon the initial POST. I need a way to communicate the error to the user in case creation fails, or the resulting resource in case the execution succeeds.

Would this be correct?

1.) The initial POST to /res1 returns a URI /res2
2.) Subsequent GETs or HEADs of /res2 return 202 until the task finishes
3a.) In case the execution succeeds, a GET or HEAD of /res2 returns 200 with the task result as the body
3b.) Otherwise, make /res2 go away and start returning 410 (or 404?), explaining what went wrong in the body?

Or is there a better way?

Thanks,
--
Lubomir Rintel (GoodData), phone: #7715
On Sep 12, 2011, at 9:59 AM, Lubomir Rintel wrote:

> Would this be correct?
>
> 1.) Initial POST to /res1 returns an URI /res2

This returns 202 and a polling URI in the Location header.

> 2.) Subsequent GET or HEAD of /res2 return 202 until the task finishes

No, it returns 200 and status information in the body (==> so you need to mint a media type for this, to express these semantics).

The body will tell the client about progress (maybe), success or failure, and the result URI. (HTTP does not say that the 202 Location URI is the created final resource; that, too, goes in the body.)

(The definition of a general media type for this pattern would be possible - it need not be specific to your application.)

Jan
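The flow Jan describes can be sketched end to end with an in-memory stand-in for the server. Everything below - the URI shapes, the status vocabulary ('pending'/'done'/'failed'), the way failure is detected - is invented for illustration; the point is only the shape of the protocol: POST yields 202 plus a status URI in the body, the status resource always answers 200, and its body says whether to keep polling:

```python
import itertools

class TaskServer:
    """Toy in-memory stand-in for the server side of the pattern."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.tasks = {}

    def post_task(self, description):
        task_id = next(self._ids)
        self.tasks[task_id] = {"polls_left": 2, "description": description}
        # 202 Accepted; the status URI travels in the response body, since
        # RFC 2616 gives Location no defined meaning for a 202 response.
        return 202, {"status": f"/tasks/{task_id}/status"}

    def get_status(self, task_id):
        task = self.tasks[task_id]
        if task["polls_left"] > 0:           # still working
            task["polls_left"] -= 1
            return 200, {"state": "pending"}
        if "fail" in task["description"]:    # user error found only late
            return 200, {"state": "failed", "reason": "problem in the task description"}
        return 200, {"state": "done", "result": f"/results/{task_id}"}

def await_result(server, description):
    """Client side: submit, then poll the status resource until it settles."""
    code, body = server.post_task(description)
    assert code == 202
    task_id = int(body["status"].split("/")[2])
    while True:
        code, status = server.get_status(task_id)
        if status["state"] != "pending":
            return status

server = TaskServer()
print(await_result(server, "build the thing"))  # settles on the 'done' body
print(await_result(server, "fail please"))      # settles on the 'failed' body
```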
On Mon, 2011-09-12 at 15:44 +0200, Jan Algermissen wrote:

> > 1.) Initial POST to /res1 returns an URI /res2
>
> This returns 202 and a polling URI in the Location header.

My understanding was that RFC 2616 does not define the semantics of the Location header for 202 responses. Section 14.30 indicates that it can be used in 3xx or 201 responses, to either redirect or indicate the location of a newly created resource (presumably respectively). Neither of these is the case for a 202 response.

Also, a 202 response should contain a pointer to a status resource or an estimate, but there's no mention (in section 10.2.3) of where to get the actual resource content. Therefore I assumed that, once the request is fulfilled, the response body should be available at the same location that returned the 202 response before. Is this assumption wrong?

> > 2.) Subsequent GET or HEAD of /res2 return 202 until the task finishes
>
> No, it returns 200 and status information in the body (==> so you need to mint a media type for this to express these semantics)
>
> (The definition of a general media type for this pattern would be possible - need not be specific to your application)

Well, the original idea was to make it possible to have a client that implements little beyond what's required by HTTP and that would be able to wait for the creation of a resource (possibly with the server providing a status monitor and an indication of progress, but not actually requiring the client to grok it). If we defined an extra content type, it is likely that the client would not negotiate it. Wouldn't it be possible to go without it somehow?

--
Lubomir Rintel (GoodData), phone: #7715
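One arrangement often described for exactly the requirement Lubomir states - a client that needs nothing beyond plain HTTP - is to have the status resource answer 200 (plus Retry-After) while the task runs, and a 303 See Other pointing at the result once it completes; a failure stays 200 with an explanatory body. A client that understands only status codes and redirects then lands on the result automatically. A sketch (all names and the status vocabulary are invented; this is a variant, not something the thread itself settles on):

```python
def poll_response(task):
    """Map a task record to an (HTTP status, headers, body) triple.
    The task dict and its 'state' vocabulary are illustrative only."""
    if task["state"] == "running":
        # Still working: plain 200 with a hint about when to poll again.
        return 200, {"Retry-After": "5"}, {"state": "running"}
    if task["state"] == "failed":
        # User error found late: 200 with an explanatory body.
        return 200, {}, {"state": "failed", "reason": task["reason"]}
    # Finished: a generic client just follows the redirect to the result.
    return 303, {"Location": task["result"]}, None

print(poll_response({"state": "running"}))
print(poll_response({"state": "done", "result": "/results/42"}))
```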
On Sep 12, 2011, at 4:27 PM, Lubomir Rintel wrote: > On Mon, 2011-09-12 at 15:44 +0200, Jan Algermissen wrote: > > On Sep 12, 2011, at 9:59 AM, Lubomir Rintel wrote: > > > > > [I've been redirected here from www-talk] > > > > > > Hi Everyone, > > > > > > I'm searching for a correct solution (not violating the HTTP protocol > > > and causing least possible confusion to client) to this problem: > > > > > > I POST a structure with task description to a resource (say /res1) /res1 > > > triggers creation of a resource, which can take a long time and can > > > potentially fail in the end, due problems in task description (user > > > fault), that can not be discovered upon the initial POST. I need a way > > > to communicate this error to the user in case creation fails or > > > resulting resource in case the execution succeeds. > > > > > > Would this be correct? > > > > > > 1.) Initial POST to /res1 returns an URI /res2 > > > > This returns 202 and a polling URI in the Location header. > > My understanding was that rfc2616 does not define the semantics of > Location header for 202 responses. Checked - you are right. Sorry. One day I'll have to memorize 2616 :-) So that would need to go in the body, too. Unless Location's semantics by itself can be interpreted like I thought. jan > Section 14.30 indicates that it can > be used for 3xx or 201 responses to either redirect or indicate location > of newly created resource (supposedly respectively). None of these is > the case for a 202 response. > > Also, a 202 response should contain pointer to status or estimate, but > there's no mention (in section 10.2.3) about where to get the actual > resource content. Therefore I assumed that once the request is fulfilled > the response body should be available at the same location that returned > the 202 response before. Is this assumption wrong? > > > > 2.) 
Subsequent GET or HEAD of /res2 return 202 until the task > > finishes > > > > No, it returns 200 and status information in the body (==> so you need > > to mint media type for this to express these semantics) > > > > Body will tell client about (maybe progress), success or failure and > > the result URI (HTTP does not say that the 202 Location URI is the > > created final resource. That also is in the body. > > > > (The definition of a general media type for this pattern would be > > possible - need not be specific to your application) > > Well, the original idea was to make it possible to have a client that > implements little beyond what's required by HTTP that would be able to > wait for a creation of a resource (possibly providing a status monitor > and indication of progress on the server, but not actually requiring the > client to grok it). If we defined an extra content type, it is likely > that the client would not negotiate it. Wouldn't it be possible to go > without it somehow? > > > > Jan > > > > > > > > > 3a.) In case the execution succeeds, a GET or HEAD of /res2 returns 200 > > > with task result as body > > > 3b.) Otherwise make the /res2 go away and start returning 410 (or 404?) > > > and explaining what went wrong in the body? > > > > > > Or is there a better way? > > > > > > Thanks, > > > -- > > > Lubomir Rintel (GoodData), phone: #7715 > > > > > > > > > > -- > Lubomir Rintel (GoodData), phone: #7715 > >
On Sep 12, 2011, at 4:27 PM, Lubomir Rintel wrote: > On Mon, 2011-09-12 at 15:44 +0200, Jan Algermissen wrote: >> On Sep 12, 2011, at 9:59 AM, Lubomir Rintel wrote: >> >>> [I've been redirected here from www-talk] >>> >>> Hi Everyone, >>> >>> I'm searching for a correct solution (not violating the HTTP protocol >>> and causing least possible confusion to client) to this problem: >>> >>> I POST a structure with task description to a resource (say /res1) /res1 >>> triggers creation of a resource, which can take a long time and can >>> potentially fail in the end, due problems in task description (user >>> fault), that can not be discovered upon the initial POST. I need a way >>> to communicate this error to the user in case creation fails or >>> resulting resource in case the execution succeeds. Actually, check out this: http://lists.w3.org/Archives/Public/ietf-http-wg/2011JulSep/0343.html Jan >>> >>> Would this be correct? >>> >>> 1.) Initial POST to /res1 returns an URI /res2 >> >> This returns 202 and a polling URI in the Location header. > > My understanding was that rfc2616 does not define the semantics of > Location header for 202 responses. Section 14.30 indicates that it can > be used for 3xx or 201 responses to either redirect or indicate location > of newly created resource (supposedly respectively). None of these is > the case for a 202 response. > > Also, a 202 response should contain pointer to status or estimate, but > there's no mention (in section 10.2.3) about where to get the actual > resource content. Therefore I assumed that once the request is fulfilled > the response body should be available at the same location that returned > the 202 response before. Is this assumption wrong? > >>> 2.) 
Subsequent GET or HEAD of /res2 return 202 until the task >> finishes >> >> No, it returns 200 and status information in the body (==> so you need >> to mint media type for this to express these semantics) >> >> Body will tell client about (maybe progress), success or failure and >> the result URI (HTTP does not say that the 202 Location URI is the >> created final resource. That also is in the body. >> >> (The definition of a general media type for this pattern would be >> possible - need not be specific to your application) > > Well, the original idea was to make it possible to have a client that > implements little beyond what's required by HTTP that would be able to > wait for a creation of a resource (possibly providing a status monitor > and indication of progress on the server, but not actually requiring the > client to grok it). If we defined an extra content type, it is likely > that the client would not negotiate it. Wouldn't it be possible to go > without it somehow? >> >> Jan >> >> >> >>> 3a.) In case the execution succeeds, a GET or HEAD of /res2 returns 200 >>> with task result as body >>> 3b.) Otherwise make the /res2 go away and start returning 410 (or 404?) >>> and explaining what went wrong in the body? >>> >>> Or is there a better way? >>> >>> Thanks, >>> -- >>> Lubomir Rintel (GoodData), phone: #7715 >>> >>> >> > > -- > Lubomir Rintel (GoodData), phone: #7715 >
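For concreteness, the polling pattern being debated can be sketched as below. This is an illustrative in-memory model — dict bodies standing in for HTTP responses, all function and field names invented here — following Jan's 200-with-status-body variant rather than any definitive reading of RFC 2616:

```python
# Sketch of the 202-Accepted polling pattern discussed above.
# POST /res1 starts a long-running task; the client then polls a
# status resource until the task succeeds or fails.

import itertools

_jobs = {}                 # job_id -> {"state": ..., "result"/"error": ...}
_ids = itertools.count(1)

def post_task(task_description):
    """POST /res1 -- accept the task and return 202 plus a polling URI."""
    job_id = next(_ids)
    _jobs[job_id] = {"state": "pending"}
    # Per the thread, RFC 2616 leaves Location undefined for 202, so the
    # polling URI is carried in the body as well.
    return 202, {"status": "/res2/%d" % job_id}

def finish_task(job_id, result=None, error=None):
    """Server-side hook: mark the task as done or failed."""
    if error is not None:
        _jobs[job_id] = {"state": "failed", "error": error}
    else:
        _jobs[job_id] = {"state": "done", "result": result}

def get_status(job_id):
    """GET /res2/{id} -- 200 with progress until done, then the outcome."""
    job = _jobs[job_id]
    if job["state"] == "pending":
        return 200, {"state": "pending"}
    if job["state"] == "failed":
        return 200, {"state": "failed", "error": job["error"]}
    # Jan's suggestion: the body, not a header, names the created resource.
    return 200, {"state": "done", "location": job["result"]}
```

Whether a pending poll should return 200-with-status or another 202 is exactly the point of contention in the thread; the sketch simply picks one side to make the flow concrete.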
So generally how do you get rid of session state? Two choices: carry it with the message or turn it into resource state. Our solution was a mix of both. In OpenStack we have an authN component called keystone. When you log in, keystone gives you a token, a list of links, and from there you are expected to put this token as a header in your requests (assisted via code-on-demand). When the request arrives at a server with the token, we call a keystone resource to validate the token. This resource can be cached on either side of the SSL tunnel. --- In rest-discuss@yahoogroups.com, "jim.margetts" <jim.margetts@...> wrote: > Most of the literature I've read about RESTful design warns against using any sort of > sessions. I've always used sessions to keep track of whether a user is logged in to an > application. What's the best way of achieving this without using a session...
On Tue, Sep 13, 2011 at 3:30 PM, bryan_w_taylor <bryan_w_taylor@...> wrote: > In OpenStack we have an authN component called keystone. When you log in, keystone > gives you a token, a list of links, and from there you are expected to put this token as a > header in your requests (assisted via code-on-demand). When the request arrives at > a server with the token, we call a keystone resource to validate the token. This resource > can be cached on either side of the SSL tunnel. +1 to this. In a few implementations I've done, this has also been done through an extra parameter (as opposed to a header), as security_token=[hashed_token], for simpler implementations, but I do prefer forwarding the token in forms that RESTful clients pick up and pass forward (just look out for clients losing their tokens, forcing an extra authentication). I usually have a service (called ServiceStation, for giggles) that handles validation and granting of security tokens across the network. Regards, Alex -- Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps --- http://shelter.nu/blog/ ---------------------------------------------- ------------------ http://www.google.com/profiles/alexander.johannesen ---
First many thanks to all who answered my previous query: http://tech.groups.yahoo.com/group/rest-discuss/message/17741 - all your comments were most helpful. Thanks!

I'm currently trying to develop a basic invoicing system but I'm having a bit of difficulty figuring out how best to represent invoices. An invoice is represented in the database as follows:

invoices: invoice_no (pk), date, customer_id (fk)

An invoice has several rows or lines that I have called invoice items. These are represented in the database as follows:

invoice_items: invoice_item_id (pk), cost, qty, details, invoice_no (fk)

The invoices resource is available at /invoices:

GET /invoices gives a list of all the invoices on the system
POST /invoices to add a new invoice
GET /invoices/1 gives a view of invoice 1 and all related invoice_items
PUT /invoices/1 to update it
DELETE /invoices/1 to delete it (the db cascade deletes all related invoice_items)

My first thought was that I should make invoice items available at the following:

/invoices/1/invoice_items
/invoices/1/invoice_items/[invoice_item_id]

however, it seems a lot easier and it simplifies my routing a bit if I just make invoice items available via

/invoice_items/[invoice_id]

I suppose it means I can't access the collection of invoice_items for a specific invoice directly via any URL, but I already do that when I use GET /invoice/[invoice_id]. I hope that makes sense? I wondered if anyone had any comments / suggestions / criticisms of any of this? Any help very much appreciated. Thanks, jim
On Wed, Sep 14, 2011 at 10:18 AM, jim.margetts <jim.margetts@...>wrote: > First many thanks to all who answered my previous qry: > http://tech.groups.yahoo.com/group/rest-discuss/message/17741 all your > comments were most helpful. Thanks! > > I'm currently trying to develop a basic invoicing system but i'm having a > bit of difficulty figuring out how best to represent invoices. > > An invoice is represented in the database as follows: > > invoices: invoice_no (pk), date, customer_id (fk) > > An invoice has several rows or lines that i have called invoice items. > These are represented in the database as follows: > > invoice_items: invoice_item_id (pk), cost, qty, details, invoice_no (fk) > > The invoices resource is available at: > > /invoices > > GET /invoices gives a list of all the invoices on the system > > and you can: > > POST /invoices to add a new invoice. > > GET /invoices/1 gives a view of invoice 1 and all related invoice_items > PUT /invoices/1 to update it or > DELETE /invoices/1 to delete it (the db cascade deletes all related > invoice_items) > > My first thought was that i should make invoice items availabe at the > following: > > /invoices/1/invoice_items > /invoices/1/invoice_items/[invoice_item_id] > > however, its seems a lot easier and it simplifies my routing a bit if i > just make invoices items available via > > /invoice_items/[invoice_id] > > I suppose it mean i can't access the collection of invoice_items for a > specific invoice directly via any url but i already do that when i use GET > /invoice/[invoice_id] > Sure you can, do both, i.e.: use /invoices/1/invoice_items to serve the collection of items, and use /invoice_items/[invoice_id] to serve the individual items. You haven't mentioned what stack you are using, but if it's worth its salt then achieving the above should be trivial. Cheers, Mike
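Mike's "do both" suggestion might look like this in a framework-agnostic sketch; the regex routing table is a stand-in for whatever stack jim is actually using, and the handler names are invented:

```python
# Sketch of serving both URI layouts from the same handlers: the nested
# collection under an invoice, and the flat per-item URIs.

import re

def list_invoice_items(invoice_no):
    return "items of invoice %s" % invoice_no

def get_invoice_item(item_id):
    return "invoice item %s" % item_id

ROUTES = [
    (re.compile(r"^/invoices/(\d+)/invoice_items$"), list_invoice_items),
    (re.compile(r"^/invoice_items/(\d+)$"), get_invoice_item),
]

def dispatch(path):
    """Match a request path against the route table; None stands for 404."""
    for pattern, handler in ROUTES:
        m = pattern.match(path)
        if m:
            return handler(*m.groups())
    return None
```

Both names can identify the same underlying items; exposing two URIs for related views of a resource costs little in routing terms.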
On Wed, Sep 14, 2011 at 2:18 AM, jim.margetts <jim.margetts@...>wrote: > ** > > > First many thanks to all who answered my previous qry: > http://tech.groups.yahoo.com/group/rest-discuss/message/17741 all your > comments were most helpful. Thanks! > > I'm currently trying to develop a basic invoicing system but i'm having a > bit of difficulty figuring out how best to represent invoices. > > An invoice is represented in the database as follows: > > invoices: invoice_no (pk), date, customer_id (fk) > > An invoice has several rows or lines that i have called invoice items. > These are represented in the database as follows: > > invoice_items: invoice_item_id (pk), cost, qty, details, invoice_no (fk) > > The invoices resource is available at: > > /invoices > > GET /invoices gives a list of all the invoices on the system > > and you can: > > POST /invoices to add a new invoice. > > GET /invoices/1 gives a view of invoice 1 and all related invoice_items > PUT /invoices/1 to update it or > DELETE /invoices/1 to delete it (the db cascade deletes all related > invoice_items) > > My first thought was that i should make invoice items availabe at the > following: > > /invoices/1/invoice_items > /invoices/1/invoice_items/[invoice_item_id] > > however, its seems a lot easier and it simplifies my routing a bit if i > just make invoices items available via > > /invoice_items/[invoice_id] > > I suppose it mean i can't access the collection of invoice_items for a > specific invoice directly via any url but i already do that when i use GET > /invoice/[invoice_id] > > I hope that makes sense? > > I wondered if anyone had any comments / suggestions / criticisms of any of > this? > > any help very much appreciated > Couple of points. Take a note of the first thing you mentioned here. You immediately brought up the database representation of the invoice. Simply put, that implies you're starting from the wrong end of the stack for your design. 
Database rows are not (necessarily) resources. It's very easy to fall into the trap of trying to push your database model to the resource model, but it's really the wrong way to go, as the two are not necessarily that tightly bound. It's a mindset thing. Much like folks like to start with "what do my URLs look like", when in fact the URLs are an end result of the design, not necessarily the goal or point of the design. Clearly URLs and resources are closely related (they are the names, after all), more closely than a resource is related to how it's persisted on the back end. But the key here is to think of resources at a high level and, just as important, consider the media type(s) that you want to use. Then model your domain on top of those media types.

Your debate over /invoice/invoiceitems versus simply /invoiceitems should be driven by use case as you try to apply them to your model, more than anything else. For example, I would simply posit the question of whether you need fine-grained access to the invoice items at all, rather than simply focusing on the invoice itself as a whole. Folks have a tendency to want to expose their data model directly to the API, but that's not necessarily appropriate most of the time. It's just something we've done for so long that we naturally drift that way.

So, start at the front end and push back into the system towards the persistence model, rather than the other way around, and see what ends up shaking out. Be conservative: if you don't need it, don't do it. REST resources tend to be rather coarse, so larger payloads aren't necessarily bad. Consider your /invoice and /invoice/invoiceitems idiom and how that affects caching, for example. Making a change to /invoice/1/invoiceitems/1 doesn't invalidate the /invoice/1 URI, even though ideally that change likely should. So that's another advantage of coarse resources. No hard and fast rules, obviously. 
But the approach is important in order to embrace the idiom on its own without trying to let the non-REST idioms that you're using to implement it poke through in the interface. Regards, Will Hartung (willh@...)
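Will's caching point can be made concrete with a toy cache-aside intermediary; the URIs and representations here are invented for the example:

```python
# A toy illustration of the caching argument above: an HTTP intermediary
# caches by URI, and a write invalidates only the request URI -- so after
# updating /invoices/1/invoice_items/1, a cached /invoices/1
# representation is served stale even though the invoice changed.

items = {1: "item A"}            # origin state: items of invoice 1
cache = {}                       # intermediary: URI -> cached representation

def render(uri):
    """Origin server: build the current representation of a URI."""
    if uri == "/invoices/1":
        return "invoice 1: " + ", ".join(items[k] for k in sorted(items))
    return items[int(uri.rsplit("/", 1)[1])]

def get(uri):
    """Cache-aside GET, as an intermediary would do it."""
    if uri not in cache:
        cache[uri] = render(uri)
    return cache[uri]

def put(uri, body):
    """PUT updates origin state but invalidates only the request URI."""
    items[int(uri.rsplit("/", 1)[1])] = body
    cache.pop(uri, None)
```

With a coarse resource (items embedded in /invoices/1 only), the write would target /invoices/1 itself, so the invalidation would line up with the change.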
Will, Many thanks for your comments. > For example I would simply posit the question of whether you need fine > grained access to the invoice items at all, rather than simply focusing on > the invoice itself as a whole. I think you're right - the user really just needs to understand and interact with invoices as a whole. I'd started to try and think of it this way but I kept running into problems implementing it... I'm currently just using HTML.

The problem I'm having is in trying to return an HTML form (or series of forms) to allow the user to fully edit an invoice - i.e. add, delete, and update invoice_items and the invoice itself. Below I have a series of forms that allow the user to effectively:

PUT /invoices/[invoice_no] to update the invoice details
DELETE /invoices/[invoice_no] to delete the invoice
POST /invoice_items to add an invoice_item
PUT /invoice_items/[invoice_item_id] to update an invoice item
DELETE /invoice_items/[invoice_item_id] to delete an invoice item

Really I want to do it like you suggest and just expose /invoices, but I can't see how I can add / edit / delete items without using javascript to dynamically change the HTML, and that seems like cheating... or I thought it seemed like I was hacking due to bad design... although maybe it's because I'm trying to build client logic at the server side? 
again any comments very welcome, ta jim

<form method='post'>
  <table>
    <tr>
      <th>invoice_no</th>
      <td><input type='text' name='invoice_no' value='1250' /></td>
    </tr>
    <tr>
      <th>invoice_date</th>
      <td><input type='text' name='invoice_date' value='2011-09-14' /></td>
    </tr>
  </table>
  <input type='hidden' name='_method' value='PUT' />
  <input type='submit' value='Update' />
</form>

<form method='POST'>
  <input type='hidden' name='_method' value='DELETE' />
  <input type='submit' value='Delete' />
</form>

<table>
  <tr>
    <th>Date</th>
    <th>Description</th>
    <th>Details</th>
    <th>Qty</th>
    <th>Cost</th>
  </tr>
  <tr>
    <form action='/invoice_items/27' method='POST'>
      <td><input type='text' name='details' value='test1' /></td>
      <td><input type='text' name='qty' value='1' /></td>
      <td><input type='text' name='cost' value='10.00' /></td>
      <td><input type='submit' value='Update' /></td>
      <input type='hidden' name='_method' value='PUT' />
    </form>
    <form action='/invoice_items/27' method='POST'>
      <td><input type='submit' value='Delete' /></td>
      <input type='hidden' name='_method' value='DELETE' />
    </form>
  </tr>
  <tr>
    <form action='/invoice_items/28' method='POST'>
      <td><input type='text' name='details' value='test2' /></td>
      <td><input type='text' name='qty' value='1' /></td>
      <td><input type='text' name='cost' value='10.00' /></td>
      <td><input type='submit' value='Update' /></td>
      <input type='hidden' name='_method' value='PUT' />
    </form>
    <form action='/invoice_items/28' method='POST'>
      <td><input type='submit' value='Delete' /></td>
      <input type='hidden' name='_method' value='DELETE' />
    </form>
  </tr>
  <form method='POST' action='/invoice_items'>
    <tr>
      <td><input type="text" value="" name="details"></td>
      <td><input type="text" value="1" name="qty"></td>
      <td><input type="text" value="" name="cost"></td>
      <td><input type='submit' value='Add' /></td>
      <input type='hidden' name='invoice_no' value='1250' />
    </tr>
  </form>
</table>
On Wed, Sep 14, 2011 at 8:17 PM, jim.margetts <jim@...> wrote: > The problem i'm having is in trying to return a html form (or series of forms)to allow the user to fully edit an invoice - ie. add, delete, update invoice_items and the invoice itself. > > Below I have a series of forms that allow the user to effectively > > PUT /invoices/[invoice_no] to update the invoice details > DELETE /invoices/[invoice_no] to delete the invoice > > POST /invoice_items to add an invoice_item > PUT /invoice_items/[invoice_item_id] to update an invoice item > DELETE /invoice_items/[invoice_item_id] to delete an invoice item > > really i want to do it like you suggest and just expose /invoices but i can't see how i can add / edit / delete items without using javascript to dynamically change the html and that seems like cheating... or i thought it seemed like i was hacking due to bad design.. although maybe its because i'm trying to build client logic at the server side? Just process something like:

POST /invoices with invoice representations in application/x-www-form-urlencoded format to add a new invoice.
POST /invoices/[invoice_id] with invoice representations in application/x-www-form-urlencoded format to update an invoice.
POST /invoices/[invoice_id] with invoice representations in application/x-www-form-urlencoded format to delete an invoice.

And likewise for invoice items. It's good that you also handle PUT and DELETE, but there's no native support for those in HTML forms. You might ultimately find some use for them if you progressively enhance your HTML interface using XMLHttpRequest, as that does support PUT and DELETE. You can also expose them to automated clients via the Allow HTTP header: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.7 -- Benjamin Hawkes-Lewis
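The hidden `_method` fields in jim's forms rely on a server-side rewrite that is a framework convention, not part of HTTP; a minimal sketch of that rewrite:

```python
# Sketch of the server-side half of the hidden "_method" fields: HTML
# forms can only GET and POST, so a common (non-standard) convention is
# to tunnel PUT/DELETE through POST and rewrite the method on the server
# before dispatching to the real handler.

ALLOWED_OVERRIDES = {"PUT", "DELETE"}

def effective_method(method, form):
    """Rewrite POST + _method=X to X; leave everything else untouched."""
    if method == "POST":
        override = form.get("_method", "").upper()
        if override in ALLOWED_OVERRIDES:
            return override
    return method
```

Restricting the override to POST requests matters: allowing a GET to be rewritten into a DELETE would let a simple link perform an unsafe operation.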
Hi, human targeted Web sites sometimes (or usually?) adjust the representations they send based on their knowledge about the abilities of the user agents (e.g. IE might get different stuff than Firefox). I think using the UserAgent header to negotiate representation features is a nice solution for non-human targeted situations, too. Essentially this means to negotiate incompatible media types based on Accept and the compatible variations in a given media type based on UserAgent. This might include the addition of certain Link headers. Would love to hear comments on that topic. Jan
Hi, I am timing the RTT (round trip time) of a POST to an internet-based RESTlet. Two files are involved - a small one (entity body of 231 bytes) and a larger one (entity body of 9896 bytes). The RTT for the bigger file (142ms average) is consistently lower than for the smaller one (250ms average). I am perplexed by this... I suspect that there is some TCP optimisation going on but don't know what. Any help much appreciated. Thanks, Sean.
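One common suspect for a flat ~100-200 ms penalty on small requests is Nagle's algorithm interacting with delayed ACKs when the headers and a small body go out in separate writes. Whether that is what Sean is seeing is only a guess, but it is cheap to rule out by disabling Nagle on the client socket:

```python
# A thing worth ruling out: Nagle's algorithm combined with delayed ACKs
# can stall a small request that is sent in more than one write. Setting
# TCP_NODELAY on the client socket makes small writes go out immediately;
# if the small-POST RTT drops to match the large one, that was the cause.

import socket

def make_client_socket():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s
```

Most HTTP client libraries expose this or already set it; the sketch just shows the raw socket option for a hand-rolled timing test.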
Hi! This is my first post on the list. I'm implementing a RESTful web service for simple CRUD stuff etc. I would like to use some OAuth provider (for instance Facebook SSO) for authenticating my users. On a simple web app this is easy: just log in, validate the OAuth token and store info in a session cookie. However for a (for instance) mobile client it isn't that trivial IMO. Should I implement sessions (standard cookie or some home baked header stuff) or validate the OAuth token on every request? Or do you even have a completely different approach for my problem? I don't feel like making all clients generate passwords for themselves if they want to use Facebook login. BR, Niklas
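One sketch of the "home baked header" option: validate the provider's OAuth token once, then issue a short-lived HMAC-signed token that the service can verify locally on every request, with no session store and no provider round trip. The token format and key handling here are illustrative only:

```python
# Sketch of a stateless bearer token: after validating the OAuth token
# once, the service issues "user:expiry:signature", which any instance
# holding the key can verify without storage or a provider round trip.

import hashlib
import hmac
import time

SECRET = b"server-side-secret"   # illustrative; keep real keys out of source

def issue(user_id, now=None, ttl=3600):
    """Mint a token after the provider's OAuth token checked out."""
    now = int(time.time()) if now is None else now
    payload = "%s:%d" % (user_id, now + ttl)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + ":" + sig

def verify(token, now=None):
    """Return the user id if the token is authentic and unexpired."""
    now = int(time.time()) if now is None else now
    try:
        user_id, expires, sig = token.rsplit(":", 2)
    except ValueError:
        return None
    payload = "%s:%s" % (user_id, expires)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected) or int(expires) < now:
        return None
    return user_id
```

The trade-off versus validating the OAuth token on every request is revocation: a signed token stays valid until it expires, so the TTL bounds how long a revoked login lingers.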
On 2011-09-15 19:54, Jan Algermissen wrote: > Hi, > > human targeted Web sites somtimes (or usually?) adjust the > representations they send based on their knowledge about the abilities > of the user agents (e.g. IE might get different stuff than FireFox). > > I think using the UserAgent header to negotiate representation features > is a nice solution for non-human targeted situations, too. Essentially > this means to negotiate incompatible media types based on Accept and the > compatible variations in a given media type based on UserAgent. This > might include the addition of certain Link headers. > > Would love to hear comments on that topic. > ... Avoid. Avoid. Avoid. It's unreliable (the UA might be lying). It's hard to parse properly. What's the use case, except for finding out whether the UA can do application/xhtml+xml? Best regards, Julian
On Sep 15, 2011, at 10:43 PM, Julian Reschke wrote: > On 2011-09-15 19:54, Jan Algermissen wrote: >> Hi, >> >> human targeted Web sites somtimes (or usually?) adjust the >> representations they send based on their knowledge about the abilities >> of the user agents (e.g. IE might get different stuff than FireFox). >> >> I think using the UserAgent header to negotiate representation features >> is a nice solution for non-human targeted situations, too. Essentially >> this means to negotiate incompatible media types based on Accept and the >> compatible variations in a given media type based on UserAgent. This >> might include the addition of certain Link headers. >> >> Would love to hear comments on that topic. >> ... > > Avoid. Avoid. Avoid. :-) > > It's unreliable (the UA might be lying). It's hard to parse properly. Yes. I am thinking more in terms of a (reasonably) controllable space, e.g. a LAN, or between myself and HTTP-based clients I sold to customers, where I now want to upgrade my service because of my newer, more shiny clients. It would more or less be myself who decides on the UserAgent header value. > > What's the use case, except for finding out whether the UA can do application/xhtml+xml? Two cases: 1) I am a service and can send some application/procurement. The format of that type has gone through a set of compatible changes[1] and I can send the whole feature set. How do I decide what to send? All of it? (I'd go with that because I think it is the reasonable default: should I put link X in there - yes I should because I can.) 2) What if I have two different ways of putting in the same kind of hypermedia control? Wouldn't it be a good idea to look at whether the client is one of the older ones or a new one and decide based on that (hence UserAgent)? 
(Yeah, you could say that having multiple non-orthogonal choices is a failure in media type evolution in the first place) Of course I see your point (and I also know this is a slippery slope towards over specification) Jan [1] Incompatible changes would yield new types, e.g. application/procurement-new > > Best regards, Julian
On Thu, Sep 15, 2011 at 7:54 PM, Jan Algermissen <jan.algermissen@... > wrote: > ** > > I think using the UserAgent header to negotiate representation features is > a nice solution for non-human targeted situations, too. Essentially this > means to negotiate incompatible media types based on Accept and the > compatible variations in a given media type based on UserAgent. This might > include the addition of certain Link headers. > I agree with the "Avoid avoid" here. It sounds a lot like out-of-band knowledge, and in the opposite direction than is usual. "I know that V3.1.3 of FooClient doesn't use the xyzzy link relation, so I'll just save bandwidth by not sending it". The server could end up with a large amount of knowledge. See WURFL for an example of this going bad. Why not stick to Accept (which is what it's for) and use media type parameters? Accept: application/procurement;hypermediacontrols=new;q=1, application/procurement;hypermediacontrols=old;q=0.3 "new" and "old" are obviously bad choices, but you get the idea. That way the media type can express the different "variants" within compatibility, and clients can express their capabilities in a header designed for conneg. Evolvability FTW! -- -mogsie-
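Server-side, matching a parameterized Accept header like Erik's could be sketched as follows; this is a simplified parser for illustration, not a full RFC 2616 content-negotiation implementation:

```python
# Sketch of matching a parameterized Accept header: parse media ranges
# with their parameters and q-values, then pick the highest-q variant
# that the server can actually produce.

def parse_accept(header):
    """Return a list of (type, params_dict, q) sorted by descending q."""
    ranges = []
    for part in header.split(","):
        pieces = [p.strip() for p in part.split(";")]
        mtype, params, q = pieces[0], {}, 1.0
        for p in pieces[1:]:
            name, _, value = p.partition("=")
            if name == "q":
                q = float(value)
            else:
                params[name] = value
        ranges.append((mtype, params, q))
    return sorted(ranges, key=lambda r: -r[2])

def negotiate(header, available):
    """Pick the first acceptable variant; `available` is [(type, params)]."""
    for mtype, params, _q in parse_accept(header):
        for atype, aparams in available:
            if atype == mtype and all(aparams.get(k) == v
                                      for k, v in params.items()):
                return atype, aparams
    return None
```

A client that prefers the new controls but can fall back to the old ones expresses exactly that in one header, and the server's choice stays visible to intermediaries (via Vary: Accept) instead of hiding in UserAgent sniffing.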
On Sep 16, 2011, at 12:20 AM, Erik Mogensen wrote: > On Thu, Sep 15, 2011 at 7:54 PM, Jan Algermissen <jan.algermissen@...> wrote: > I think using the UserAgent header to negotiate representation features is a nice solution for non-human targeted situations, too. Essentially this means to negotiate incompatible media types based on Accept and the compatible variations in a given media type based on UserAgent. This might include the addition of certain Link headers. > > I agree with the "Avoid avoid" here. It sounds a lot like out-of-band knowledge, and in the opposite direction than is usual. "I know that V3.1.3 of FooClient doesn't use the xyzzy link relation, so I'll just save bandwidth by not sending it". The server could end up with a large amount of knowledge. See wurfl for an example of this going bad. > > Why not stick to Accept (which is that it's for) and use media type parameters? > > Accept: application/procurement;hypermediacontrols=new;q=1, application/procurement;hypermediacontrols=old;q=0.3 Been there, done that :-) Too enterprisey for my taste. Leads to packaging up feature sets into version numbers. I'd always let the Accept express the general capability of the client. (Which I read as: "If you hand me application/atom I am pretty sure I can fulfill my user's intent from there") Versioning makes the whole elegance go away :-) I was thinking more in terms of 'hints' or 'best effort'. Sending mobile devices different content (incl. by way of a redirect) seems ok. So why not "direct that old legacy product we sold 5 years ago to that old (set of) servers we keep around for these cases"? My original issue, BTW, was routing requests in a load balancer to direct clients that need a new feature to those services that have the new feature installed and are already up and running. In general: situations where you have scaled to N services behind one IP that run different versions of a service (e.g. 
because they are being upgraded one by one while keeping the site in operation). Jan > > "new" and "old" are obviously bad choices, but you get the idea. That way the media type can express the different "variants" within compatibility, and clients can express their capabilities in a header designed for conneg. Evolvability FTW! > -- > -mogsie-
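Jan's load-balancer scenario might be sketched like this; the pool addresses and the `application/procurement-new` type are illustrative, and q-value ordering is ignored for brevity:

```python
# Sketch of routing during a rolling upgrade: two backend pools run
# different service versions, and the balancer routes on what the
# request's Accept header asks for, not on who the client claims to be.

POOLS = {
    "application/procurement-new": ["10.0.1.1", "10.0.1.2"],  # upgraded
    "application/procurement":     ["10.0.0.1", "10.0.0.2"],  # legacy
}

def pick_pool(accept_header):
    """Route to the first pool whose media type the client accepts."""
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    for mtype in accepted:
        if mtype in POOLS:
            return POOLS[mtype]
    return POOLS["application/procurement"]      # default to the legacy pool
```

Keying the routing decision on Accept rather than UserAgent keeps it inside the uniform interface: any client, old or new, gets the backend matching what it can actually process.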
Seems like "soap"-creep to me ... a la WS-Addressing :) One thing that doesn't feel right to me is the fact that the client is driving control flow. Also reminds me of Subbu's post on Media Types, Plumbing and Democracy, in the issues that it brings up. I do like the parametrized media types, but they're not widely supported (at least the last time I checked). Regards, Dilip Krishnan dilip.krishnan@... On Sep 15, 2011, at 5:46 PM, Jan Algermissen wrote: > > On Sep 16, 2011, at 12:20 AM, Erik Mogensen wrote: > >> On Thu, Sep 15, 2011 at 7:54 PM, Jan Algermissen <jan.algermissen@...> wrote: >> I think using the UserAgent header to negotiate representation features is a nice solution for non-human targeted situations, too. Essentially this means to negotiate incompatible media types based on Accept and the compatible variations in a given media type based on UserAgent. This might include the addition of certain Link headers. >> >> I agree with the "Avoid avoid" here. It sounds a lot like out-of-band knowledge, and in the opposite direction than is usual. "I know that V3.1.3 of FooClient doesn't use the xyzzy link relation, so I'll just save bandwidth by not sending it". The server could end up with a large amount of knowledge. See wurfl for an example of this going bad. >> >> Why not stick to Accept (which is that it's for) and use media type parameters? >> >> Accept: application/procurement;hypermediacontrols=new;q=1, application/procurement;hypermediacontrols=old;q=0.3 > > Been there, done that :-) Too enteprisey for my taste. Leads to packaging up feature sets into version numbers. I'd always let the Accept express the general capability of the client. (Which I read as: "If you hand me application/atom I am pretty sure I can fulfill my user's intent from there") Versioning makes the whole elegance go away :-) > > I was thinking more in terms of 'hints' or 'best effort'. Sending mobile devices different content (incl. bu way of a redirect) seems ok. 
So why not "direct that old legacy product we sold 5 years ago to that old (set of) servers we keep around for these cases"? > > My original issue, BTW, was routing requests in a load balancer to direct clients that need a new feature to those services that have the new feature installed and are already up and running. In general: situations where you have scaled to N services behind one IP that run different versions of a service (e.g. because they are being upgraded one by one while keeping the site in operation). > > Jan > >> >> "new" and "old" are obviously bad choices, but you get the idea. That way the media type can express the different "variants" within compatibility, and clients can express their capabilities in a header designed for conneg. Evolvability FTW! > > > >> -- >> -mogsie- > > > > ------------------------------------ > > Yahoo! Groups Links > > >
On Sep 16, 2011, at 5:00 AM, Dilip Krishnan wrote: > Seems like "soap"-creep to me ... a la WS-addressing :) Why in particular? > > > One thing that doesn't feel right to me is the fact that the client is driving control flow. Is that any different from redirecting a mobile UA to a dedicated server? .... I just remembered the Accept-Features header, which is somewhat related: http://www.ietf.org/rfc/rfc2295.txt Jan > Also reminds of a Subbu's post on Media Types, Plumbing and Democracy, in the issues that it brings up. I do like the parametrized media types but its not widely supported; (atleast the last time I checked) > > Regards, > Dilip Krishnan > dilip.krishnan@... > > > > On Sep 15, 2011, at 5:46 PM, Jan Algermissen wrote: > >> >> On Sep 16, 2011, at 12:20 AM, Erik Mogensen wrote: >> >>> On Thu, Sep 15, 2011 at 7:54 PM, Jan Algermissen <jan.algermissen@nordsc.com> wrote: >>> I think using the UserAgent header to negotiate representation features is a nice solution for non-human targeted situations, too. Essentially this means to negotiate incompatible media types based on Accept and the compatible variations in a given media type based on UserAgent. This might include the addition of certain Link headers. >>> >>> I agree with the "Avoid avoid" here. It sounds a lot like out-of-band knowledge, and in the opposite direction than is usual. "I know that V3.1.3 of FooClient doesn't use the xyzzy link relation, so I'll just save bandwidth by not sending it". The server could end up with a large amount of knowledge. See wurfl for an example of this going bad. >>> >>> Why not stick to Accept (which is that it's for) and use media type parameters? >>> >>> Accept: application/procurement;hypermediacontrols=new;q=1, application/procurement;hypermediacontrols=old;q=0.3 >> >> Been there, done that :-) Too enteprisey for my taste. Leads to packaging up feature sets into version numbers. I'd always let the Accept express the general capability of the client. 
(Which I read as: "If you hand me application/atom I am pretty sure I can fulfill my user's intent from there") Versioning makes the whole elegance go away :-) >> >> I was thinking more in terms of 'hints' or 'best effort'. Sending mobile devices different content (incl. bu way of a redirect) seems ok. So why not "direct that old legacy product we sold 5 years ago to that old (set of) servers we keep around for these cases"? >> >> My original issue, BTW, was routing requests in a load balancer to direct clients that need a new feature to those services that have the new feature installed and are already up and running. In general: situations where you have scaled to N services behind one IP that run different versions of a service (e.g. because they are being upgraded one by one while keeping the site in operation). >> >> Jan >> >>> >>> "new" and "old" are obviously bad choices, but you get the idea. That way the media type can express the different "variants" within compatibility, and clients can express their capabilities in a header designed for conneg. Evolvability FTW! >> >> >> >>> -- >>> -mogsie- >> >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> > > >
Jan: Not sure this is the same thing, but I have code that handles "exceptions" for conneg match results and those exceptions are based on UA reporting. Here are some real-world examples: WebKit conneg will always pick any XML variant over any HTML variant offered. IOW, when given that chance, WebKit conneg results in "give me XML". It turns out WebKit does not _render_ the XML (just shows a blank page). I add an exception to make sure WebKit browsers (not XMLHttpRequest) get HTML if it's available. Microsoft Excel conneg will favor HTML (assuming a table) over CSV. I add an exception so that MS-Excel clients get CSV if it is available. There are (I think) some others, but those are the ones that come up quite often and how I deal with them. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Fri, Sep 16, 2011 at 08:30, Jan Algermissen <jan.algermissen@...> wrote: > > On Sep 16, 2011, at 5:00 AM, Dilip Krishnan wrote: > >> Seems like "soap"-creep to me ... a la WS-addressing :) > > Why in particular? > >> >> >> One thing that doesn't feel right to me is the fact that the client is driving control flow. > > Is that any different from redirecting a mobile UA to a dedicated server? > > .... > > I just remembered the Accept_Features header, which is somehow related: http://www.ietf.org/rfc/rfc2295.txt > > Jan > > > >> Also reminds of a Subbu's post on Media Types, Plumbing and Democracy, in the issues that it brings up. I do like the parametrized media types but its not widely supported; (atleast the last time I checked) >> >> Regards, >> Dilip Krishnan >> dilip.krishnan@... >> >> >> >> On Sep 15, 2011, at 5:46 PM, Jan Algermissen wrote: >> >>> >>> On Sep 16, 2011, at 12:20 AM, Erik Mogensen wrote: >>> >>>> On Thu, Sep 15, 2011 at 7:54 PM, Jan Algermissen <jan.algermissen@...> wrote: >>>> I think using the UserAgent header to negotiate representation features is a nice solution for non-human targeted situations, too. 
Essentially this means to negotiate incompatible media types based on Accept and the compatible variations in a given media type based on UserAgent. This might include the addition of certain Link headers. >>>> >>>> I agree with the "Avoid avoid" here. It sounds a lot like out-of-band knowledge, and in the opposite direction than is usual. "I know that V3.1.3 of FooClient doesn't use the xyzzy link relation, so I'll just save bandwidth by not sending it". The server could end up with a large amount of knowledge. See wurfl for an example of this going bad. >>>> >>>> Why not stick to Accept (which is that it's for) and use media type parameters? >>>> >>>> Accept: application/procurement;hypermediacontrols=new;q=1, application/procurement;hypermediacontrols=old;q=0.3 >>> >>> Been there, done that :-) Too enteprisey for my taste. Leads to packaging up feature sets into version numbers. I'd always let the Accept express the general capability of the client. (Which I read as: "If you hand me application/atom I am pretty sure I can fulfill my user's intent from there") Versioning makes the whole elegance go away :-) >>> >>> I was thinking more in terms of 'hints' or 'best effort'. Sending mobile devices different content (incl. bu way of a redirect) seems ok. So why not "direct that old legacy product we sold 5 years ago to that old (set of) servers we keep around for these cases"? >>> >>> My original issue, BTW, was routing requests in a load balancer to direct clients that need a new feature to those services that have the new feature installed and are already up and running. In general: situations where you have scaled to N services behind one IP that run different versions of a service (e.g. because they are being upgraded one by one while keeping the site in operation). >>> >>> Jan >>> >>>> >>>> "new" and "old" are obviously bad choices, but you get the idea. 
That way the media type can express the different "variants" within compatibility, and clients can express their capabilities in a header designed for conneg. Evolvability FTW! >>> >>> >>> >>>> -- >>>> -mogsie- >>> >>> >>> >>> ------------------------------------ >>> >>> Yahoo! Groups Links >>> >>> >>> >> >> >> > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
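Mike's "conneg exceptions" above amount to a post-negotiation override table keyed on the User-Agent. The sketch below is an illustrative assumption of how such a table might look (the UA substrings, entries, and function names are invented, not his actual code); the WebKit and Excel entries mirror the two cases he describes:

```python
# Hedged sketch: run normal Accept-based negotiation first, then override
# the result for user agents known to mishandle their own preference.

EXCEPTIONS = [
    # (ua_substring, negotiated_result_to_override, preferred_if_available)
    ("AppleWebKit", "application/xml", "text/html"),  # WebKit renders XML as a blank page
    ("Excel",       "text/html",       "text/csv"),   # Excel handles CSV better than HTML tables
]

def apply_ua_exceptions(user_agent, negotiated, available):
    """Return the negotiated type, or a known-better one for broken UAs."""
    for ua_fragment, bad_pick, better in EXCEPTIONS:
        if ua_fragment in user_agent and negotiated == bad_pick and better in available:
            return better
    return negotiated

ua = "Mozilla/5.0 (Macintosh) AppleWebKit/535.1 Safari/535.1"
print(apply_ua_exceptions(ua, "application/xml", {"text/html", "application/xml"}))
```

Note the check against `available`: the override only fires when the better variant actually exists, so well-behaved clients are unaffected.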
I am just guessing, but couldn't the result be influenced by longer server-side processing being required to produce the shorter response, or something similar? Marek On 09/15/2011 09:04 PM, Sean Kennedy wrote: > > > Hi, > I am timing the RTT (round trip time) of a POST to an internet based RESTlet. Two files are involved - a small one > (entity body of 231 bytes) and a larger one (entity body of 9896 bytes). The RTT of the bigger file (142ms average) is > consistently faster than the smaller one (250ms average)?? I am perplexed by this... I suspect that there is some TCP > optimisation going on but don't know what.... > > Any help much appreciated. > > Thanks, > Sean. >
TCP tries to fill its maximum segment size (normally roughly 1500 bytes on Ethernet, minus headers). Until that size is reached or a timer fires, a small trailing segment may be held back (this is Nagle's algorithm coalescing small writes). The bigger request fills full-size segments and is sent immediately, so there's no need to wait for the timeout. -- Markus Lanthaler @markuslanthaler From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Marek Potociar Sent: Friday, September 16, 2011 3:26 PM Cc: Rest Discussion Group Subject: Re: [rest-discuss] Round Trip Times faster with bigger files?? I am just guessing, but cannot the result be influenced by a longer server-side processing required to produce the shorter response, or something alike? Marek On 09/15/2011 09:04 PM, Sean Kennedy wrote: > > > Hi, > I am timing the RTT (round trip time) of a POST to an internet based RESTlet. Two files are involved - a small one > (entity body of 231 bytes) and a larger one (entity body of 9896 bytes). The RTT of the bigger file (142ms average) is > consistently faster than the smaller one (250ms average)?? I am perplexed by this... I suspect that there is some TCP > optimisation going on but don't know what.... > > Any help much appreciated. > > Thanks, > Sean. >
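Markus's hypothesis is testable: the usual culprit for a ~100-200 ms penalty on small writes is Nagle's algorithm interacting with the peer's delayed ACK, and disabling Nagle with `TCP_NODELAY` makes small segments go out immediately. A rough sketch for re-running Sean's measurement (the path and the hand-rolled HTTP framing are illustrative placeholders, not from the thread):

```python
# Hedged sketch: POST a body over a raw socket, optionally with Nagle's
# algorithm disabled, to see whether the small-body RTT penalty disappears.
import socket

def post_with_optional_nodelay(host, port, body, nodelay=False):
    sock = socket.create_connection((host, port))
    if nodelay:
        # Send small segments immediately instead of coalescing them.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    request = (
        "POST /restlet HTTP/1.1\r\n"          # placeholder path
        f"Host: {host}\r\n"
        f"Content-Length: {len(body)}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode() + body
    sock.sendall(request)
    response = sock.recv(4096)  # first chunk of the response is enough here
    sock.close()
    return response
```

Timing this call with `nodelay=True` vs `False` for the 231-byte and 9896-byte bodies should show whether the effect is Nagle-related or, as Marek suggests, server-side.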
While thinking about documentation for the API we're working on, one of our team suggested building a REST API to document the REST API. Resources in the documentation API would correspond to media types and contain data on the methods and properties available in those media types. Has anyone seen this done before? How did it work out? Are there a set of media types for describing RESTful media types that we can reuse? The goal here would be to reuse the API infrastructure to serve the documentation and to allow client-side formatting of the documentation (rather than building heavyweight WS-* formats describing formats for validation etc.). Cheers, Jim
On Sep 20, 2011, at 3:22 PM, Jim Purbrick wrote: > While thinking about documentation for the API we're working on Are you asking about 'service documentation'[1] or 'media type documentation'? Jan [1] Which makes your service unRESTful immediately. E.g. see <http://stackoverflow.com/questions/7355084/publishing-documenting-spring-rest-api/7482941#7482941> > one of > our team suggested building a REST API to document the REST API. > Resources in the documentation API would correspond to media types and > contain data on the methods and properties available in those media > types. > > Has anyone seen this done before? How did it work out? Are there a set > of media types for describing RESTful media types that we can reuse? > The goal here would be to reuse the API infrastructure to serve the > documentation and to allow client side formatting of the documentation > (rather than building heavyweight WS* formats describing formats for > validation etc.). > > Cheers, > > Jim >
On Sep 20, 2011, at 3:47 PM, Jim Purbrick wrote: >> On Sep 20, 2011, at 3:22 PM, Jim Purbrick wrote: >> >>> While thinking about documentation for the API we're working on >> >> Are you asking about 'service documentation'[1] or 'media type documentation'? > > Media type documentation. Great [:-)]. I am afraid there is necessarily much prose involved. Though one aspect of a media type is that it defines a superset of structural abstractions of server-side state (e.g. feed entries have a title, feeds have entries, ...) and a set of hypermedia semantics ('entry2 is an edit-resource of entry1'). These could be formalized, but I doubt that we are anywhere close to that yet. @Mike? I recall you did some work on this lately? Jan > The documentation API would just be a > collection of descriptions of media types. > > Cheers, > > Jim
On Sep 20, 2011, at 6:41 AM, Jan Algermissen wrote: > [1] Which makes your service unRESTful immediately. Very pedantic, and not a helpful guidance. Subbu
On Sep 20, 2011, at 7:58 PM, Subbu Allamaraju wrote: > > On Sep 20, 2011, at 6:41 AM, Jan Algermissen wrote: > >> [1] Which makes your service unRESTful immediately. > > Very pedantic, and not a helpful guidance. 'pedantic' has such a negative connotation, doesn't it? 'not helpful' does, too. .... which, one might say, is in turn not very helpful :-) (And: what makes guidance helpful, BTW?) As always: IMHO, sticking to the principles laid out by the thesis helps learning. Doing away with a principle just because it *appears* to be impractical can keep you from learning what you need to learn to see the usefulness of the principle in the first place. I see no advantage whatsoever in using API descriptions instead of media type definitions. The amount of work is the same - the difference is who owns the contract: the service owner, or some global (including my-enterprise-global) institution. In addition, not being RESTful might be perfectly fine for your scenario - but it still helps to understand what that means and what the tradeoffs are. Jan > > Subbu > > > ------------------------------------ > > Yahoo! Groups Links > > >
Understood, but I would still classify this as pedantic guidance not based on systemic qualities. Please see [1]. Subbu [1] http://www.subbu.org/blog/2011/08/measuring-rest-2 On Sep 20, 2011, at 11:19 AM, Jan Algermissen wrote: > > On Sep 20, 2011, at 7:58 PM, Subbu Allamaraju wrote: > >> >> On Sep 20, 2011, at 6:41 AM, Jan Algermissen wrote: >> >>> [1] Which makes your service unRESTful immediately. >> >> Very pedantic, and not a helpful guidance. > > 'pedantic' has such a negative connotation, doesn't it? 'not helpful' does, too. .... which, one might say is in turn not very helpful :-) > (And: what makes a guidance helpful, BTW?) > > > > As always: IMHO, sticking to the principles laid out by the thesis helps learning. Doing away with a principle just because it *appears* to be impractical can keep you from learning what you need to learn to see the usefulness of the principle in the first place. > > I see no advantage whatsoever in using API descriptions instead of media type definitions. The amount of work is the same - the difference is who owns the contract. The service owner or some global (including my-enterprise-global) institution. > > In addition, being not RESTful might be perfectly fine for your scenario - but it still helps to understand what that means and what the tradeoffs are. > > Jan > > > >> >> Subbu >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> >
On Sep 20, 2011, at 8:30 PM, Subbu Allamaraju wrote: > Understood, but I would still classify this as pedantic guidance not based on systemic qualities. Please see [1]. Ok - so you are saying there is 'no' value in insisting on doing REST when you do not yet understand how to judge whether you actually need to be RESTful? I'd agree with that - to me, the true value of Roy's thesis is that it guides you how to make such decisions in the first place[1] I think the key issue of doing REST behind the firewall is whether the evolvability you gain with REST (say over my HTTP Type I) will pay off or not. And how it will pay off exactly. FWIW, inside enterprise boundaries, any advantages will be seen not so much in long-term (decades) evolvability but much more in more flexible deployment scenarios in the context of constant short-notice changes, or a more grassroots and experimental style of exploring new features. Jan [1] And there is still surprisingly little material exploring that aspect of the software architecture profession > > Subbu > > [1] http://www.subbu.org/blog/2011/08/measuring-rest-2 > > On Sep 20, 2011, at 11:19 AM, Jan Algermissen wrote: > > > > > On Sep 20, 2011, at 7:58 PM, Subbu Allamaraju wrote: > > > >> > >> On Sep 20, 2011, at 6:41 AM, Jan Algermissen wrote: > >> > >>> [1] Which makes your service unRESTful immediately. > >> > >> Very pedantic, and not a helpful guidance. > > > > 'pedantic' has such a negative connotation, doesn't it? 'not helpful' does, too. .... which, one might say is in turn not very helpful :-) > > (And: what makes a guidance helpful, BTW?) > > > > > > > > As always: IMHO, sticking to the principles laid out by the thesis helps learning. Doing away with a principle just because it *appears* to be impractical can keep you from learning what you need to learn to see the usefulness of the principle in the first place. > > > > I see no advantage whatsoever in using API descriptions instead of media type definitions. 
The amount of work is the same - the difference is who owns the contract. The service owner or some global (including my-enterprise-global) institution. > > > > In addition, being not RESTful might be perfectly fine for your scenario - but it still helps to understand what that means and what the tradeoffs are. > > > > Jan > > > > > > > >> > >> Subbu > >> > >> > >> ------------------------------------ > >> > >> Yahoo! Groups Links > >> > >> > >> > > > >
Nope - we can't arbitrarily use qualities like "evolvability" without applying them to a context (such as the conditions in which clients and servers are working). The guidance becomes pedantic when you don't contextualize the qualities - consequently you won't be able to measure the outcome. For instance "evolvability" could mean "writing a new client app in a week" - there may be several ways to get to that measurement. The task is to pick one. The key thing to learn is not "how to do 100% REST" but how to build apps that meet certain **measurable** qualities. Roy's work in Chapter 2 gives a good starting point, but the list of qualities that apply to a given app may be different. Subbu On Sep 20, 2011, at 12:01 PM, Jan Algermissen wrote: > > On Sep 20, 2011, at 8:30 PM, Subbu Allamaraju wrote: > >> Understood, but I would still classify this as pedantic guidance not based on systemic qualities. Please see [1]. > > Ok - so you are saying there is 'no' value in insisting in doing REST when you do not yet understand how to judge whether you actually need to be RESTful? > > I'd agree with that - to me, the true value of Roy's thesis is that it guides you how to make such decisions in the first place[1] > > > I think the key issue of doing REST behind the firewall is whether the evolvability you gain with REST (say over my HTTP Type I) will pay off or not. And how it will pay off exactly. > > FWIW, inside enterprise boundaries, any advantages will be seen not so much in long term (decades) evolvability but much more in more flexible deployment scenarios in the context of constant, short notice, changes or a more grassroots and experimental style of exploring new features. 
> > Jan > > [1] And there is still surprisingly few material exploring that aspect of the software architecture profession > > > >> >> Subbu >> >> [1] http://www.subbu.org/blog/2011/08/measuring-rest-2 >> >> On Sep 20, 2011, at 11:19 AM, Jan Algermissen wrote: >> >>> >>> On Sep 20, 2011, at 7:58 PM, Subbu Allamaraju wrote: >>> >>>> >>>> On Sep 20, 2011, at 6:41 AM, Jan Algermissen wrote: >>>> >>>>> [1] Which makes your service unRESTful immediately. >>>> >>>> Very pedantic, and not a helpful guidance. >>> >>> 'pedantic' has such a negative connotation, doesn't it? 'not helpful' does, too. .... which, one might say is in turn not very helpful :-) >>> (And: what makes a guidance helpful, BTW?) >>> >>> >>> >>> As always: IMHO, sticking to the principles laid out by the thesis helps learning. Doing away with a principle just because it *appears* to be impractical can keep you from learning what you need to learn to see the usefulness of the principle in the first place. >>> >>> I see no advantage whatsoever in using API descriptions instead of media type definitions. The amount of work is the same - the difference is who owns the contract. The service owner or some global (including my-enterprise-global) institution. >>> >>> In addition, being not RESTful might be perfectly fine for your scenario - but it still helps to understand what that means and what the tradeoffs are. >>> >>> Jan >>> >>> >>> >>>> >>>> Subbu >>>> >>>> >>>> ------------------------------------ >>>> >>>> Yahoo! Groups Links >>>> >>>> >>>> >>> >> >> >
On Sep 20, 2011, at 9:16 PM, Subbu Allamaraju wrote: > The key thing to learn is not "how to do 100% REST" but how to build apps that meet certain **measurable** qualities. Personally, the key thing for me has always been "how to do 100% REST behind the firewall" and what are the implications of doing that. How does my mental model of the systems we are building, extending and maintaining(!) change if I leverage 100% REST[1]. And in the end: does what I learned apply in a practical, beneficial way. IOW, does my profession benefit from viewing enterprise IT as having the same desired system properties as the Web does. And yes, I found that it mostly does[2] and I found that the profession benefits. Hence, my quest is for 100% REST. Jan [1] As opposed to relaxing the constraints because I cannot fit them to my *existing* mental model. [2] And where it does not, it is well worth considering rolling your own RPC instead of applying HTTP unRESTfully for the sake of applying HTTP.
On Sep 20, 2011, at 12:36 PM, Jan Algermissen wrote: > Hence, my quest is for 100% REST. Sure. If "100% REST" is the quality you're after, please pursue - no one can object to that. Subbu
Hi,
I am trying to learn about REST but I was recently asked
what ensures scalability when we use REST principles. It did not seem
obvious to me but the answer was caching. I didn't get what the direct
link is between REST and scalability and what caching has to do with
this. I used to think the caching part is associated with HTTP.
Are there any ideas about this?
Thanks,
Mohan
Mohan: First, Fielding's dissertation describes key system properties that affect scalability: - 2.3.2 Scalability [1] - 2.3.4.3 Customizability [2] - 2.3.5 Visibility [3] Chapter three does a very good job of identifying various existing network architecture styles and calls out several that affect scalability in both positive and negative ways. It is worth reviewing that chapter specifically looking for styles that mention scalability. The work also identifies a number of elements of the REST style that are aimed at affecting scalability: - 4.1.4 Internet Scale [4] - 5.1.2 Client-Server [5] - 5.1.3 Stateless [6] - 5.1.4 Cache [7] - 5.1.6 Layered System [8] - 5.2.1 Data Elements [9] - 5.2.2 Connectors [10] - 5.3.1 Process View [11] - 5.3.3 Data View [12] Fielding also mentions a few cases where scalability can be adversely affected by architectural decisions: - 5.2.1.1 Resources and Resource Identifiers [13] - 6.2.5 REST Mismatches in URI [14] - 6.5.2 HTTP is not RPC [15] These should give you some pointers to various aspects of arch styles that address/affect scalability. 
Mike [1] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_2_3_2 [2] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_2_3_4_3 [3] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_2_3_5 [4] http://www.ics.uci.edu/~fielding/pubs/dissertation/web_arch_domain.htm#sec_4_1_4 [5] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1_2 [6] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1_3 [7] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1_4 [8] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_1_6 [9] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2_1 [10] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2_2 [11] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_3_1 [12] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_3_3 [13] http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2_1_1 [14] http://www.ics.uci.edu/~fielding/pubs/dissertation/evaluation.htm#sec_6_2_5 [15] http://www.ics.uci.edu/~fielding/pubs/dissertation/evaluation.htm#sec_6_5_2 mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Sat, Sep 24, 2011 at 13:13, Mohan Radhakrishnan <radhakrishnan.mohan@...> wrote: > Hi, > I am trying to learn about REST but I was recently asked > what ensures scalability when we use REST principles. It did not seem > obvious to me but the answer was caching. I didn't get what the direct > link is between REST and scalability and what caching has to do with > this. I used to think the caching part is associated with HTTP. > > Are there any ideas about this ? > > Thanks, > Mohan > > > ------------------------------------ > > Yahoo! Groups Links > > > >
In addition to Mike's excellent list of pointers... On Sep 24, 2011, at 7:13 PM, Mohan Radhakrishnan wrote: > Hi, > I am trying to learn about REST but I was recently asked > what ensures scalability when we use REST principles. REST's statelessness constraint is what induces scalability into a system. In practical terms: when no interaction state (think 'session') is stored on the server, it does not matter which host the client connects to next. Hence, you can throw more boxes in to scale horizontally. Jan > It did not seem > obvious to me but the answer was caching. I didn't get what the direct > link is between REST and scalability and what caching has to do with > this. I used to think the caching part is associated with HTTP. > > Are there any ideas about this ? > > Thanks, > Mohan >
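Jan's point can be shown with a toy example: because each request carries all the interaction state it needs, a load balancer can hand successive requests from the same client to different instances and every one of them answers correctly. The "instances" below are plain functions and all names are illustrative:

```python
# Hedged sketch: statelessness is what lets a round-robin balancer work.
import itertools

def make_instance(name):
    def handle(request):
        # No per-client session is consulted: everything needed is in the request.
        return {"served_by": name, "page": request["page"], "items": request["page_size"]}
    return handle

instances = [make_instance(f"host-{i}") for i in range(3)]
round_robin = itertools.cycle(instances)

# Three successive requests from the same client land on different hosts,
# and each still gets a correct answer.
for page in (1, 2, 3):
    instance = next(round_robin)
    print(instance({"page": page, "page_size": 10}))
```

If the handler instead read `session["page"]` from host-local memory, the second request would break the moment it landed on a different host; that is the scalability cost of server-side interaction state.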
... or doesn't it matter? The consensus would seem to be for the former (use cases + view models). On the other hand, all representations have a media type, the point of which is to allow the client and server to evolve independently. That being the case, why does it matter what is being represented? Thanks in advance Dan
Hi Folks, I'm looking for a Java library which can evaluate a URI against a given URI template. What would you recommend? Basically I need to check whether a URI, matches with a given URI template and extract the variable values etc. The URI templates web page lists [1] as an option for Java, but this doesn't seem to be a complete implementation. Thanks, Hiranya [1] - http://www.metanotion.net/software/urlmapper/
For JSON representations (where the intent is to indicate in-band to a client how to process that representation), what is the current thinking on: a) using a custom media type (application/vnd.mydomain+json) vs b) application/json;profile=http://mydomain/profiles/someprofile , i.e. as per [1] Thx Dan [1] http://buzzword.org.uk/2009/draft-inkster-profile-parameter-00.html
Dan: To better support evolvability over time, REST-style implementations rely on the media type as the only "shared understanding" between client and server. Clients do not rely on a list of procedures (RPC), an object-graph (OO), or a list of fixed URIs, etc. in order to make requests and process results. Clients are "bound" not to the object model, view, etc., but to the media type and the hypermedia elements (links and forms, etc.) within that media type definition. This binding to the hypermedia controls within a media type means changes on the server in the RPC list, object model, URIs used to expose functionality, etc. will have no "breaking effect" on the client (since the client only cares about the hypermedia controls themselves). And that means the two parties can independently evolve over time (use new procedure lists, add/remove object models, etc.) w/o running the risk of "breaking" each other (as long as it is the hypermedia controls that are used as "shared understanding"). For this reason, what is represented by a response *does* matter, but not in ways traditionally understood by "local" programming models (RPC, OO, etc.). What is represented is the _state_ of the app, not the programming style (OO, etc.) of the server implementation. Hopefully that makes some sense<g>. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Mon, Sep 26, 2011 at 02:57, danhaywood@... <dan@...> wrote: > ... or doesn't it matter? > > The consensus would seem to be to for the former (use cases + view models). On the other hand, all representations have a media type, the point of which is to allow the client and server to evolve independently. That being the case, why does it matter what is being represented? > > Thanks in advance > Dan > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
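Mike's "binding to hypermedia controls" can be illustrated with a small sketch. The representation, URIs, and relation names below are invented for illustration; the point is only that the client looks up link relations rather than hard-coding URIs:

```python
# Hedged sketch: a client bound to link relations, not to the URI space.

representation = {
    "title": "Order 42",
    "links": [
        {"rel": "self",    "href": "http://example.org/orders/42"},
        {"rel": "edit",    "href": "http://example.org/orders/42/editable"},
        {"rel": "payment", "href": "http://example.org/pay?order=42"},
    ],
}

def follow(representation, rel):
    """Return the target of the first link with the given relation."""
    for link in representation["links"]:
        if link["rel"] == rel:
            return link["href"]
    raise LookupError(f"no link with rel={rel!r}")

# The server may restructure its URI space freely; this client keeps
# working as long as "payment" keeps its documented meaning.
print(follow(representation, "payment"))
```

This is also where the thread's earlier concern about inventing custom @rel values comes back in: the relation names are the contract, so they need to be documented (or registered) somewhere both parties can see.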
On Sep 27, 2011, at 10:39 PM, mike amundsen wrote: > Dan: > > To better support evolability over time, REST-style implementations > rely on the media type as the only "shared understanding" between > client and server. Clients do not rely on a list of procedures (RPC), > an object-graph (OO), or a list of fixed URIs, etc. in order to make > requests and process results. Clients are "bound" not to the object > model, view, etc, but to the media type and the hypermedia elements > (links and forms, etc.) within that media type definition. > > This binding to the hypermedia controls within a media type means > changes on the server in the RPC list, object model, URIs used to > expose functionality, etc. will have no "breaking effect" on the > client (since the client only cares about the hypermedia controls > themselves). And that means the two parties and independently evolve > over time (use new procedure lists, add/remove object models, etc.) > w/o running the risk of "breaking" each other (as long as it is the > hypermedia controls that are used as "shared understanding"). > > For this reason, what is represented by a response *does* matter, but > not in ways traditionally understood by "local" programming models > (RPC, OO, etc.). What is represented is that _state_ of the app, not > the programming style (OO, etc.) of the server implementation. > > Hopefully that makes some sense<g>. Yes, it does. +1 Jan > > mca > http://amundsen.com/blog/ > http://twitter.com@mamund > http://mamund.com/foaf.rdf#me > > On Mon, Sep 26, 2011 at 02:57, danhaywood@... > <dan@...> wrote: > > ... or doesn't it matter? > > > > The consensus would seem to be to for the former (use cases + view models). On the other hand, all representations have a media type, the point of which is to allow the client and server to evolve independently. That being the case, why does it matter what is being represented? 
> > > > Thanks in advance > > Dan > > > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > > > >
Another +1. The question of view models / entities is an implementation detail that is not related to REST or HTTP. Practically some folks do return models on the server which are transformed into a representation. For those cases I recommend using DTOs rather than the domain model because it removes any client coupling. The potential for coupling exists in that case because you are returning some sort of model. The alternative is no model and simply execute server logic to directly create a representation. Sent from my Windows Phone ------------------------------ From: mike amundsen Sent: 9/27/2011 1:39 PM To: danhaywood@... Cc: rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Should a RESTful API expose use cases & view models rather than entities? Dan: To better support evolability over time, REST-style implementations rely on the media type as the only "shared understanding" between client and server. Clients do not rely on a list of procedures (RPC), an object-graph (OO), or a list of fixed URIs, etc. in order to make requests and process results. Clients are "bound" not to the object model, view, etc, but to the media type and the hypermedia elements (links and forms, etc.) within that media type definition. This binding to the hypermedia controls within a media type means changes on the server in the RPC list, object model, URIs used to expose functionality, etc. will have no "breaking effect" on the client (since the client only cares about the hypermedia controls themselves). And that means the two parties and independently evolve over time (use new procedure lists, add/remove object models, etc.) w/o running the risk of "breaking" each other (as long as it is the hypermedia controls that are used as "shared understanding"). For this reason, what is represented by a response *does* matter, but not in ways traditionally understood by "local" programming models (RPC, OO, etc.). 
What is represented is the _state_ of the app, not the programming style (OO, etc.) of the server implementation. Hopefully that makes some sense<g>. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Mon, Sep 26, 2011 at 02:57, danhaywood@... <dan@haywood-associates.co.uk> wrote: > ... or doesn't it matter? > > The consensus would seem to be for the former (use cases + view models). On the other hand, all representations have a media type, the point of which is to allow the client and server to evolve independently. That being the case, why does it matter what is being represented? > > Thanks in advance > Dan > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
there is one inside restlet.org; check the API @ http://www.restlet.org/documentation/2.0/jse/api/ org.restlet.routing.Template is your friend regards, -marc= On 26-09-11 16:02, hiranya911 wrote: > Hi Folks, > > I'm looking for a Java library which can evaluate a URI against a given URI template. What would you recommend? Basically I need to check whether a URI matches a given URI template and extract the variable values etc. > > The URI templates web page lists [1] as an option for Java, but this doesn't seem to be a complete implementation. > > Thanks, > Hiranya > > [1] - http://www.metanotion.net/software/urlmapper/ > > > > ------------------------------------ > > Yahoo! Groups Links > > >
Hi Mike, Thanks for the response, makes sense. So let me ask, given in the server that I *do* have a domain object graph of entities, and given I've invented a mechanism to represent those entities using hypermedia controls (eg represent associations between objects as links between their corresponding representations, allow arbitrary operations on objects to be invoked via forms), is there a problem with this? Your response only seems to require these hypermedia controls, so I can't see that there'd be any objection. Yet there are articles (eg Rickard Oberg's javalobby article [1]) that seem to argue otherwise? Dan [1] http://java.dzone.com/articles/domain-model-rest-anti-pattern On 27/09/2011 21:39, mike amundsen wrote: > To better support evolability over time, REST-style implementations > rely on the media type as the only "shared understanding" between > client and server. Clients do not rely on a list of procedures (RPC), > an object-graph (OO), or a list of fixed URIs, etc. in order to make > requests and process results. Clients are "bound" not to the object > model, view, etc, but to the media type and the hypermedia elements > (links and forms, etc.) within that media type definition. > > This binding to the hypermedia controls within a media type means > changes on the server in the RPC list, object model, URIs used to > expose functionality, etc. will have no "breaking effect" on the > client (since the client only cares about the hypermedia controls > themselves). And that means the two parties and independently evolve > over time (use new procedure lists, add/remove object models, etc.) > w/o running the risk of "breaking" each other (as long as it is the > hypermedia controls that are used as "shared understanding"). > > For this reason, what is represented by a response *does* matter, but > not in ways traditionally understood by "local" programming models > (RPC, OO, etc.). 
What is represented is that _state_ of the app, not > the programming style (OO, etc.) of the server implementation. > > Hopefully that makes some sense<g>. > > mca > http://amundsen.com/blog/ > http://twitter.com@mamund > http://mamund.com/foaf.rdf#me > > > > > > On Mon, Sep 26, 2011 at 02:57, danhaywood@... > <dan@...> wrote: >> ... or doesn't it matter? >> >> The consensus would seem to be to for the former (use cases + view models). On the other hand, all representations have a media type, the point of which is to allow the client and server to evolve independently. That being the case, why does it matter what is being represented? >> >> Thanks in advance >> Dan >> >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> >> -- Dan Haywood consultant, mentor, developer, author agile, ddd, oo, java, .net, sybase MA (Oxon), MBCS, CITP, CEng mailto:dan@... phone: +44 (0)7961 144286 skype: danhaywood twitter: http://twitter.com/dkhaywood blog: http://danhaywood.com web: http://www.haywood-associates.co.uk linkedin: http://uk.linkedin.com/in/dkhaywood open source: http://incubator.apache.org/isis book: http://pragprog.com/titles/dhnako sybase: http://sybtraining.co.uk
A couple of things I keep in mind when designing REST APIs:
* For CRUD-based functionality, it is not uncommon for there to be a
relatively 1:1 mapping between available CRUD operations and corresponding
web service endpoints (using appropriate HTTP verbs, of course).
* For more complex workflows, like a shopping cart, the REST API should be
defined as a state machine (from the client's viewpoint), which might be
totally divorced from the internal functionality on the server.
Rickard's biggest complaint in the article you reference is that many/most
APIs that claim to be RESTful do not actually obey the hypermedia (a.k.a.
HATEOAS) constraint by including URIs for the client to use to initiate
state changes. In the shopping cart case, for example, the representation
returned to the user should include the current state of the cart (to obey
the statelessness constraint), *and* a URI for the client to use to
initiate a checkout operation.
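To make that concrete, here is a minimal sketch of a cart representation
that carries both the current state and the transitions the client may take
next. The field names and URIs are illustrative only, not from any
published media type:

```python
# Hypothetical cart representation: the current state plus the
# transitions the client may follow next. Field names are illustrative.
cart = {
    "state": "active",
    "items": [{"sku": "coffee", "qty": 1}],
    "links": {
        "self": {"href": "http://example.com/carts/42", "method": "GET"},
        "checkout": {"href": "http://example.com/carts/42/checkout",
                     "method": "POST"},
    },
}

def next_transitions(representation):
    """The client discovers what it can do next from the representation
    itself, not from hardwired knowledge of the server's internals."""
    return sorted(representation["links"])
```

A client driving this workflow never needs to know how the server models
carts or orders internally; it only follows the advertised transitions.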
Even in a simple CRUD application, the notion of hypermedia links is useful.
As one example, we (Jive Software) have an API that supports documents that
can be stored in various containers, and the documents themselves can have
read-only or read/write permissions for a particular user. When you
retrieve a document object from our API, you'll get, among other things, a
JSON "resources" element with a "self" sub-element, like this:
{
  ...
  "resources" : {
    "self" : {
      "ref" : "http://example.com/documents/1234",
      "allowed" : [ "GET", "PUT", "DELETE" ]
    },
    ...
  },
  ...
}
The fact that "PUT" and "DELETE" are included indicates that the
requesting user is entitled to update or delete this document (a user who
had only read access would see only the "GET" verb). Further, a client
doesn't need to know anything about how URIs are composed -- it just looks
up links by resource key and treats each URI as an opaque string.
As the "..." implies, we offer links to a lot of resources related to the
document (such as a way to retrieve comments about it, or to "like" it or
"share" it, as well as the HTML representation of this document for use in a
browser) in the resources element. The primary coupling between the client
and the server, for a particular representation, is that the client needs to
understand the resource keys it needs to perform its own functionality.
But, a well-behaved client should also ignore any resource keys it does not
understand. Changes in the set of resource keys available, or even in the
actual URIs, are totally transparent to the client, leading to one of the
other benefits of REST -- you can change the representations, and even the
URI structure of your app, without breaking old clients.
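That client behaviour (look up links by resource key, treat URIs as
opaque, check advertised verbs, ignore unknown keys) might look like the
following sketch. The document shape follows the "resources"/"self"
example above; the helper names are mine:

```python
# Sketch of a client that looks up links by resource key and treats the
# URIs as opaque strings. Keys it does not understand are simply ignored.
doc = {
    "subject": "Example document",
    "resources": {
        "self": {"ref": "http://example.com/documents/1234",
                 "allowed": ["GET", "PUT", "DELETE"]},
        "comments": {"ref": "http://example.com/documents/1234/comments",
                     "allowed": ["GET", "POST"]},
        # a key this particular client was not written to understand:
        "share": {"ref": "http://example.com/documents/1234/shares",
                  "allowed": ["POST"]},
    },
}

def can(doc, key, verb):
    """True if the server advertises `verb` on the `key` resource
    (e.g. only users with write access see PUT/DELETE on "self")."""
    res = doc["resources"].get(key)
    return res is not None and verb in res["allowed"]

def link(doc, key):
    """Return the URI for a resource key, or None; the URI stays opaque."""
    res = doc["resources"].get(key)
    return res["ref"] if res else None
```

Because the client never composes URIs and never enumerates keys it does
not use, the server can add keys or restructure URIs without breaking it.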
Obeying REST constraints is an investment that pays off in the evolvability
of your application. This isn't necessarily valuable in every single
scenario: even in our app's own HTML/JS/CSS UI, we allow the UI team to
design completely purpose-driven REST-ish APIs that are not published to
external clients, and to evolve them in backwards-incompatible ways,
because they control both the front end and the back end. But when it is
appropriate, you gain the benefits pretty much from the first substantial
evolution of the API, and again every time after that.
Craig McClanahan
On Wed, Sep 28, 2011 at 12:01 AM, Dan Haywood
<dan@haywood-associates.co.uk>wrote:
> **
>
>
> Hi Mike,
> Thanks for the response, makes sense.
>
> So let me ask, given in the server that I *do* have a domain object
> graph of entities, and given I've invented a mechanism to represent
> those entities using hypermedia controls (eg represent associations
> between objects as links between their corresponding representations,
> allow arbitrary operations on objects to be invoked via forms), is there
> a problem with this?
>
> Your response only seems to require these hypermedia controls, so I
> can't see that there'd be any objection. Yet there are articles (eg
> Rickard Oberg's javalobby article [1]) that seem to argue otherwise?
>
> Dan
>
> [1] http://java.dzone.com/articles/domain-model-rest-anti-pattern
>
>
> On 27/09/2011 21:39, mike amundsen wrote:
> > To better support evolvability over time, REST-style implementations
> > rely on the media type as the only "shared understanding" between
> > client and server. Clients do not rely on a list of procedures (RPC),
> > an object-graph (OO), or a list of fixed URIs, etc. in order to make
> > requests and process results. Clients are "bound" not to the object
> > model, view, etc, but to the media type and the hypermedia elements
> > (links and forms, etc.) within that media type definition.
> >
> > This binding to the hypermedia controls within a media type means
> > changes on the server in the RPC list, object model, URIs used to
> > expose functionality, etc. will have no "breaking effect" on the
> > client (since the client only cares about the hypermedia controls
> > themselves). And that means the two parties can independently evolve
> > over time (use new procedure lists, add/remove object models, etc.)
> > w/o running the risk of "breaking" each other (as long as it is the
> > hypermedia controls that are used as "shared understanding").
> >
> > For this reason, what is represented by a response *does* matter, but
> > not in ways traditionally understood by "local" programming models
> > (RPC, OO, etc.). What is represented is the _state_ of the app, not
> > the programming style (OO, etc.) of the server implementation.
> >
> > Hopefully that makes some sense<g>.
> >
> > mca
> > http://amundsen.com/blog/
> > http://twitter.com@mamund
> > http://mamund.com/foaf.rdf#me
> >
> >
> >
> >
> >
> > On Mon, Sep 26, 2011 at 02:57, danhaywood@...
> > <dan@haywood-associates.co.uk> wrote:
> >> ... or doesn't it matter?
> >>
> >> The consensus would seem to be for the former (use cases + view
> models). On the other hand, all representations have a media type, the point
> of which is to allow the client and server to evolve independently. That
> being the case, why does it matter what is being represented?
> >>
> >> Thanks in advance
> >> Dan
> >>
> >>
> >>
> >> ------------------------------------
> >>
> >> Yahoo! Groups Links
> >>
> >>
> >>
> >>
>
> --
> Dan Haywood
> consultant, mentor, developer, author
> agile, ddd, oo, java, .net, sybase
> MA (Oxon), MBCS, CITP, CEng
> mailto:dan@...
> phone: +44 (0)7961 144286
> skype: danhaywood
> twitter: http://twitter.com/dkhaywood
> blog: http://danhaywood.com
> web: http://www.haywood-associates.co.uk
> linkedin: http://uk.linkedin.com/in/dkhaywood
> open source: http://incubator.apache.org/isis
> book: http://pragprog.com/titles/dhnako
> sybase: http://sybtraining.co.uk
>
>
>
Apache Abdera also has a template class. See http://abdera.apache.org/docs/api/org/apache/abdera/i18n/templates/package-summary.html -- Erlend On Wed, Sep 28, 2011 at 8:16 AM, Marc Portier <mpo@...> wrote: > ** > > > there is one inside restlet.org > > check the API @ http://www.restlet.org/documentation/2.0/jse/api/ > org.restlet.routing.Template is your friend > > regards, > -marc= > > > On 26-09-11 16:02, hiranya911 wrote: > > Hi Folks, > > > > I'm looking for a Java library which can evaluate a URI against a given > URI template. What would you recommend? Basically I need to check whether a > URI, matches with a given URI template and extract the variable values etc. > > > > The URI templates web page lists [1] as an option for Java, but this > doesn't seem to be a complete implementation. > > > > Thanks, > > Hiranya > > > > [1] - http://www.metanotion.net/software/urlmapper/ > > > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > > > >
On Mon, Sep 26, 2011 at 4:02 PM, hiranya911 <hiranya911@...> wrote: > ** > > I'm looking for a Java library which can evaluate a URI against a given URI > template. What would you recommend? Basically I need to check whether a URI, > matches with a given URI template and extract the variable values etc. > > Please be aware that the URI template specification has been written with the opposite process in mind: The draft describes "the process for expanding a URI Template into a URI reference." This may cause you problems like the need for disambiguation, since a single URI may match several templates, with no way of knowing which template was used to create the URI in question. IMHO a URI template should be used to _mint_ URIs, not parse them... -- -mogsie-
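For what it's worth, the "reverse" direction Hiranya is after can be
sketched by compiling a simple {var}-style template into a regular
expression. This is only a rough illustration of matching and variable
extraction (and of the ambiguity caveat above), not an implementation of
the URI Templates draft:

```python
import re

def compile_template(template):
    """Compile a simple {name}-style URI template into an anchored regex.
    Only plain string expansion is handled; the draft's operators
    ({?x}, {/path}, etc.) are out of scope for this sketch."""
    pattern = ""
    for part in re.split(r"(\{\w+\})", template):
        if part.startswith("{") and part.endswith("}"):
            pattern += "(?P<%s>[^/]+)" % part[1:-1]
        else:
            pattern += re.escape(part)
    return re.compile("^" + pattern + "$")

def match(template, uri):
    """Return the extracted variables if `uri` matches `template`, else None."""
    m = compile_template(template).match(uri)
    return m.groupdict() if m else None
```

Note the ambiguity problem in practice: a URI such as /docs/1234 matches
both /docs/{id} and /{collection}/{id}, so a router needs a tie-break rule
(e.g. the template with the most literal characters wins).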
On 28/09/2011 06:43, Glenn Block wrote:
> Another +1. The question of view models / entities is an implementation
> detail that is not related to REST or HTTP. Practically some folks do
> return models on the server which are transformed into a representation.
> For those cases I recommend using DTOs rather than the domain model
> because it removes any client coupling.
But don't DTOs also constitute a client coupling, specifically to the
parameters required to complete a particular use case?
In other words, if the use case evolves so that it now requires another
input ("would you like a pastry with your coffee, sir?") then the client
will still break, won't it?
>
> The potential for coupling exists in that case because you are returning
> some sort of model. The alternative is no model and simply execute
> server logic to directly create a representation.
>
Isn't another alternative to return a model that is a metamodel, one that
fully describes the model being represented?
I'm not necessarily advocating that this is how clients are implemented; I
just feel that unless you move up to the metamodel, there's always
going to be a semantic coupling of some sort (irrespective of whether
the representation is of raw domain entities or of use cases/view
models).
Thoughts?
Dan
On 28/09/2011 08:52, Craig McClanahan wrote: > A couple of things I keep in mind when designing REST APIs: > > * For CRUD-based functionality, it is not uncommon for there to be a > relatively 1:1 mapping > between available CRUD operations and corresponding web service > endpoints (using > appropriate HTTP verbs, of course). > > * For more complex workflows, like a shopping cart, the REST API should > be defined > as a state machine (from the client's viewpoint), which might be > totally divorced from > the internal functionality on the server. Can you explain why a more complex workflow should be modelled in this way? Why does the client's viewpoint need to be divorced from the state of the entities? What does it buy me (and what are the trade-offs)? > > Rickard's biggest complaint in the article you reference is that > many/most APIs that claim to be RESTful do not actually obey the > hypermedia (a.k.a. HATEOAS) constraint, and include URIs for the client > to use for initiating state changes. In the shopping cart case, for > example, the representation returned to the user should include the > current state of the cart (to obey the statelessness constraint), > *and* a URI for the client to use for initiating a checkout operation. That article didn't seem to make any compelling reason to go with use cases, though. The only thing I read was that figuring out which links to expose (from an authorization viewpoint) is difficult to do. Which I don't happen to agree with. But it seems to be conventional wisdom to do this, so I'm trying to find out why.
> But don't DTOs also constitute a client coupling, specifically to the
> parameters required to complete a particular use case?
True, but using DTOs only couples the client to the representation, not to the domain model itself.
> In other words, if the use case evolves so that it now requires another
> input ("would you like a pastry with your coffee, sir?") then the client
> will still break, won't it?
Not necessarily, provided the design is right. In your example, what's changing is the state machine and not the resource itself, so the server would drive a different workflow using the same resource primitives.
> Isn't another alternative to return a model that is a metamodel, that
> fully describes the model being represented.
>
A metamodel for the model isn't really a great idea (see Mike's response on hypermedia as the source of shared understanding). However, it is useful to have a metamodel for "resource transitions": conventions based on shared understanding, if you will, that can drive clients using hypermedia without coupling them to the server. Good examples of that are in the RESTbucks article and Web Intents.
BTW @Mike +1... you're the Jon Skeet of this group :)
Regards
-Dilip Krishnan
>
> Thoughts?
>
> Dan
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
--- In rest-discuss@yahoogroups.com, Dan Haywood <dan@...> wrote: > > On 28/09/2011 08:52, Craig McClanahan wrote: > > A couple of things I keep in mind when designing REST APIs: > > > > * For CRUD-based functionality, it is not uncommon for there to be a > > relatively 1:1 mapping > > between available CRUD operations and corresponding web service > > endpoints (using > > appropriate HTTP verbs, of course). > > > > * For more complex workflows, like a shopping cart, the REST API should > > be defined > > as a state machine (from the client's viewpoint), which might be > > totally divorced from > > the internal functionality on the server. > > Can you explain why a more complex workflow should be modelled in this > way? Why does the client's viewpoint need to be divorced from the state > of the entities? What does it buy me (and what are the trade-offs)? > > > > > > Rickard's biggest complaint in the article you reference is that > > many/most APIs that claim to be RESTful do not actually obey the > > hypermedia (a.k.a. HATEOAS) constraint, and include URIs for the client > > to use for initiating state changes. In the shopping cart case, for > > example, the representation returned to the user should include the > > current state of the cart (to obey the statelessness constraint), > > *and* a URI for the client to use for initiating a checkout operation. > > That article didn't seem to make any compelling reason to go with use > cases, though. The only thing I read was that figuring out which links > to expose (from an authorization viewpoint) is difficult to do. Which I > don't happen to agree with. > > But it seems to be conventional wisdom to do this, so I'm trying to find > out why. >
Sorry for the last blank post! This is what I tried to send: > The only thing I read was that figuring out which links > to expose (from an authorization viewpoint) is difficult to do. Which I > don't happen to agree with. > > But it seems to be conventional wisdom to do this, so I'm trying to find > out why. The reason for supplying links, is that the server tells the client where to go next instead of hardwiring the expected URL into the client. It can be for authorization reasons but it need not be. Example: You do a GET on a URL and fetch a blog entry. Now you want to post a comment to that blog entry. Where do you do that? You could hardwire the "Post comments here"-URL (template) into the client - or you could let the client look for links attributed with "Post comments here". In this way the server is free to change its URLs depending on its own needs and without reconfiguring the clients. /Jørn
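Jørn's blog-comment example can be sketched like this. The link shape and
rel strings are illustrative, not from any particular media type:

```python
# Sketch: the client looks for a link by its advertised relation instead
# of hardwiring the comment-posting URL into itself.
entry = {
    "title": "On hypermedia",
    "links": [
        {"rel": "self", "href": "http://blog.example.com/entries/7"},
        {"rel": "comments", "href": "http://blog.example.com/entries/7/comments"},
    ],
}

def find_link(representation, rel):
    """Return the href of the first link with the given relation, or None.
    The server is free to move its URLs; the client only knows the rel."""
    for link in representation["links"]:
        if link["rel"] == rel:
            return link["href"]
    return None
```

If the server later moves comments to a different URL scheme, the client
keeps working unchanged, because it only ever looked for the relation.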
> > * For more complex workflows, like a shopping cart, the REST API should > > be defined > > as a state machine (from the client's viewpoint) > Can you explain why a more complex workflow should be modelled in this way? Same thing as with the blog comment I mentioned in the previous post: let the shopping cart include a "Place order here"-URL - that describes the "state machine's" possible transitions. There might as well be a "Post content of shopping cart into my wishlist here"-URL. And so on ... /Jørn
On 28/09/2011 13:29, Jorn Wildt wrote: > > > > The only thing I read was that figuring out which links > > to expose (from an authorization viewpoint) is difficult to do. Which I > > don't happen to agree with. > > > > But it seems to be conventional wisdom to do this, so I'm trying to > find > > out why. > > The reason for supplying links, is that the server tells the client > where to go next instead of hardwiring the expected URL into the > client. It can be for authorization reasons but it need not be. > Thanks for the reply, but that's not the question I was asking. I'm happy that there should be links/forms/hypermedia controls. My question is why expose use cases/view models rather than entities. Dan
> My question is why expose use cases/view models rather than entities. I think that depends on how you interpret "exposing entities". If exposing entities is a 1-to-1 XML serialization of the objects your system uses internally then you cannot change your system without the risk of breaking the clients. If, on the other hand, you first transform your internal objects to an intermediate object type and serialize that, then you can make internal changes to your entities, while updating the transformation, such that the result is stable on the outside. Of course, you don't have to serialize any kind of objects, it might as well be filling out templates MVC-style with data from your entities. Or any other transformation from entities to a stable public format. /Jørn --- In rest-discuss@yahoogroups.com, Dan Haywood <dan@...> wrote: > > On 28/09/2011 13:29, Jorn Wildt wrote: > > > > > > > The only thing I read was that figuring out which links > > > to expose (from an authorization viewpoint) is difficult to do. Which I > > > don't happen to agree with. > > > > > > But it seems to be conventional wisdom to do this, so I'm trying to > > find > > > out why. > > > > The reason for supplying links, is that the server tells the client > > where to go next instead of hardwiring the expected URL into the > > client. It can be for authorization reasons but it need not be. > > > Thanks for the reply, but that's not the question I was asking. > > I'm happy that there should be links/forms/hypermedia controls. My > question is why expose use cases/view models rather than entities. > > Dan >
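The transformation Jørn describes can be sketched as follows. The class
and field names are hypothetical:

```python
class OrderEntity:
    """Internal domain object; free to change shape over time."""
    def __init__(self, oid, cust_ref, total_cents):
        self.oid = oid
        self.cust_ref = cust_ref        # internal naming convention
        self.total_cents = total_cents  # stored in cents internally

def to_public_form(order):
    """The transformation layer: maps internal fields to the stable
    public format. If cust_ref is renamed internally, only this function
    changes; consumers of the representation are unaffected."""
    return {
        "id": order.oid,
        "customer": order.cust_ref,
        "total": order.total_cents / 100.0,
    }
```

The point is that the public names ("customer", "total") are a contract,
while the entity's own field names remain a private implementation detail.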
On Wed, Sep 28, 2011 at 1:51 PM, Erik Mogensen <erik@...> wrote: > ** > >> I'm looking for a Java library which can evaluate a URI against a given >> URI template. What would you recommend? Basically I need to check whether a >> URI, matches with a given URI template and extract the variable values etc. >> >> Please be aware that the URI template specification has been written with > the opposite process in mind: The draft describes "the process > for expanding a URI Template into a URI reference." > > This may cause you problems like the need for disambiguation, since a > single URI may match several templates, with no way of knowing which > template was used to create the URI in question. > Agreed but if you provide a set of templates and a concrete URI then you can use unification to find the best match (maybe with some basic conflict resolution - such as take the longest match). We want to use URI templates for both directions .. for matching (and then routing) and of course for minting URIs. Sanjiva. -- Sanjiva Weerawarana, Ph.D. Founder, Director & Chief Scientist; Lanka Software Foundation; http://www.opensource.lk/ Founder, Chairman & CEO; WSO2; http://wso2.com/ Founder & Director; Thinkcube Systems; http://www.thinkcube.com/ Member; Apache Software Foundation; http://www.apache.org/ Visiting Lecturer; University of Moratuwa; http://www.cse.mrt.ac.lk/ Blog: http://sanjiva.weerawarana.org/
On Wed, Sep 28, 2011 at 1:36 PM, Dan Haywood
<dan@...>wrote:
> In other words, if the use case evolves so that it now requires another
> input ("would you like a pastry with your coffee, sir?") then the client
> will still break, won't it?
>
>
If you require the input, and you haven't thought about evolvability up
front, then yes it will break.
However, if you provide a way out (e.g. a way to bypass the pastry
question), then old clients can simply ignore the new input and happily
complete their task. The client has to be programmed with a goal in mind,
e.g. order coffee. Introducing the question of "would you like pastry ..."
could be done if-and-only-if some consideration was done up-front:
e.g. Let's say that, when you designed the original media type, you
specified: "if a response has a <link
rel="I-dont-understand-the-question-just-let-me-continue" href="..."/> and
your client gets stuck, follow that link to continue the process".
Now, if you need to introduce a new step in a backwards-compatible way, you
could serve clients this new "question" and include the link. Old clients
would hop straight to the next question.
Back to the topic: I don't think
"I-dont-understand-the-question-just-let-me-continue" is part of any
server's conceptual model or "entities". A DTO might not either. These
things need to be figured out early on! Evolvability.
It's a bit like how HTML has an explicit rule to "ignore any unknown tag,
and process its children". It allows constructs such as
<video><object><embed><a>Wow</a></embed></object></video> to degrade
gracefully: screen scrapers see "Wow", HTML 2.0 browsers render the <a>,
and newer browsers use the richer elements, each functioning as well as
it is able.
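A sketch of that "way out", with hypothetical step and link shapes: a
goal-driven client answers the steps it knows and follows the designated
continue link when it hits a question it doesn't recognize:

```python
# The rel name comes from the example above; step/link shapes are
# hypothetical illustrations, not any published media type.
CONTINUE_REL = "I-dont-understand-the-question-just-let-me-continue"

def next_action(step, known_steps):
    """Decide what a goal-driven client does with a workflow step:
    answer it, follow the continue link, or give up."""
    if step["name"] in known_steps:
        return ("answer", step["name"])
    for link in step.get("links", []):
        if link["rel"] == CONTINUE_REL:
            return ("follow", link["href"])
    return ("stuck", None)  # a genuinely incompatible change

# An old coffee-ordering client meets the new pastry question:
pastry = {"name": "pastry-offer",
          "links": [{"rel": CONTINUE_REL, "href": "/order/confirm"}]}
```

An old client that only knows "coffee-size" skips straight past the pastry
question, which is exactly the backwards-compatible evolution described.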
--
-mogsie-
>> Seems like "soap"-creep to me ... a la WS-addressing :) > > Why in particular? Because the representation is no longer cacheable based on the resource identity/uri and caching headers alone. The resource identity is now a combination of UA and uri. This makes it complicated for the intermediary and feels kind of WS-*ish to me, specifically reminds me of WS-addressing with endpoint references, reference parameters and properties. > >> >> >> One thing that doesn't feel right to me is the fact that the client is driving control flow. > > Is that any different from redirecting a mobile UA to a dedicated server? It is a little different in the sense that the server responds with a different action to a given request. The intermediary is causing a different "version" of the representation to be rendered. > > I just remembered the Accept_Features header, which is somehow related: http://www.ietf.org/rfc/rfc2295.txt > > Jan > > > >> Also reminds of a Subbu's post on Media Types, Plumbing and Democracy, in the issues that it brings up. I do like the parametrized media types but its not widely supported; (atleast the last time I checked) >> >> Regards, >> Dilip Krishnan >> dilip.krishnan@... >> >> >> >> On Sep 15, 2011, at 5:46 PM, Jan Algermissen wrote: >> >>> >>> On Sep 16, 2011, at 12:20 AM, Erik Mogensen wrote: >>> >>>> On Thu, Sep 15, 2011 at 7:54 PM, Jan Algermissen <jan.algermissen@...> wrote: >>>> I think using the UserAgent header to negotiate representation features is a nice solution for non-human targeted situations, too. Essentially this means to negotiate incompatible media types based on Accept and the compatible variations in a given media type based on UserAgent. This might include the addition of certain Link headers. >>>> >>>> I agree with the "Avoid avoid" here. It sounds a lot like out-of-band knowledge, and in the opposite direction than is usual. 
"I know that V3.1.3 of FooClient doesn't use the xyzzy link relation, so I'll just save bandwidth by not sending it". The server could end up with a large amount of knowledge. See wurfl for an example of this going bad. >>>> >>>> Why not stick to Accept (which is that it's for) and use media type parameters? >>>> >>>> Accept: application/procurement;hypermediacontrols=new;q=1, application/procurement;hypermediacontrols=old;q=0.3 >>> >>> Been there, done that :-) Too enteprisey for my taste. Leads to packaging up feature sets into version numbers. I'd always let the Accept express the general capability of the client. (Which I read as: "If you hand me application/atom I am pretty sure I can fulfill my user's intent from there") Versioning makes the whole elegance go away :-) >>> >>> I was thinking more in terms of 'hints' or 'best effort'. Sending mobile devices different content (incl. bu way of a redirect) seems ok. So why not "direct that old legacy product we sold 5 years ago to that old (set of) servers we keep around for these cases"? >>> >>> My original issue, BTW, was routing requests in a load balancer to direct clients that need a new feature to those services that have the new feature installed and are already up and running. In general: situations where you have scaled to N services behind one IP that run different versions of a service (e.g. because they are being upgraded one by one while keeping the site in operation). >>> >>> Jan >>> >>>> >>>> "new" and "old" are obviously bad choices, but you get the idea. That way the media type can express the different "variants" within compatibility, and clients can express their capabilities in a header designed for conneg. Evolvability FTW! >>> >>> >>> >>>> -- >>>> -mogsie- >>> >>> >>> >>> ------------------------------------ >>> >>> Yahoo! Groups Links >>> >>> >>> >> >> >> >
On Sep 28, 2011, at 3:41 PM, Dilip Krishnan wrote: > > >>> Seems like "soap"-creep to me ... a la WS-addressing :) >> >> Why in particular? > > Because the representation is no longer cacheable based on the resource identity/uri and caching headers alone. The resource identity is now a combination of UA and uri. This makes it complicated for the intermediary HTTP 1.1 has this kind of behavior built-in. Caches must honor the Vary header which tells the cache what headers the selection of the representation depended upon. Accept, Accept-Language and Accept-Charset are the most common ones. But a Vary: User-Agent would by equally suitable. Jan > and feels kind of WS-*ish to me, specifically reminds me of WS-addressing with endpoint references, reference parameters and properties. > >> >>> >>> >>> One thing that doesn't feel right to me is the fact that the client is driving control flow. >> >> Is that any different from redirecting a mobile UA to a dedicated server? > > It is a little different in the sense that the server responds with a different action to a given request. The intermediary is causing a different "version" of the representation to be rendered. > >> >> I just remembered the Accept_Features header, which is somehow related: http://www.ietf.org/rfc/rfc2295.txt >> >> Jan >> >> >> >>> Also reminds of a Subbu's post on Media Types, Plumbing and Democracy, in the issues that it brings up. I do like the parametrized media types but its not widely supported; (atleast the last time I checked) >>> >>> Regards, >>> Dilip Krishnan >>> dilip.krishnan@... >>> >>> >>> >>> On Sep 15, 2011, at 5:46 PM, Jan Algermissen wrote: >>> >>>> >>>> On Sep 16, 2011, at 12:20 AM, Erik Mogensen wrote: >>>> >>>>> On Thu, Sep 15, 2011 at 7:54 PM, Jan Algermissen <jan.algermissen@...> wrote: >>>>> I think using the UserAgent header to negotiate representation features is a nice solution for non-human targeted situations, too. 
Essentially this means to negotiate incompatible media types based on Accept and the compatible variations in a given media type based on UserAgent. This might include the addition of certain Link headers. >>>>> >>>>> I agree with the "Avoid avoid" here. It sounds a lot like out-of-band knowledge, and in the opposite direction than is usual. "I know that V3.1.3 of FooClient doesn't use the xyzzy link relation, so I'll just save bandwidth by not sending it". The server could end up with a large amount of knowledge. See wurfl for an example of this going bad. >>>>> >>>>> Why not stick to Accept (which is that it's for) and use media type parameters? >>>>> >>>>> Accept: application/procurement;hypermediacontrols=new;q=1, application/procurement;hypermediacontrols=old;q=0.3 >>>> >>>> Been there, done that :-) Too enteprisey for my taste. Leads to packaging up feature sets into version numbers. I'd always let the Accept express the general capability of the client. (Which I read as: "If you hand me application/atom I am pretty sure I can fulfill my user's intent from there") Versioning makes the whole elegance go away :-) >>>> >>>> I was thinking more in terms of 'hints' or 'best effort'. Sending mobile devices different content (incl. bu way of a redirect) seems ok. So why not "direct that old legacy product we sold 5 years ago to that old (set of) servers we keep around for these cases"? >>>> >>>> My original issue, BTW, was routing requests in a load balancer to direct clients that need a new feature to those services that have the new feature installed and are already up and running. In general: situations where you have scaled to N services behind one IP that run different versions of a service (e.g. because they are being upgraded one by one while keeping the site in operation). >>>> >>>> Jan >>>> >>>>> >>>>> "new" and "old" are obviously bad choices, but you get the idea. 
That way the media type can express the different "variants" within compatibility, and clients can express their capabilities in a header designed for conneg. Evolvability FTW! >>>> >>>> >>>> >>>>> -- >>>>> -mogsie- >>>> >>>> >>>> >>>> ------------------------------------ >>>> >>>> Yahoo! Groups Links >>>> >>>> >>>> >>> >>> >>> >> > > > > ------------------------------------ > > Yahoo! Groups Links > > >
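Jan's point about Vary above can be sketched in code. The function below is a minimal, hypothetical cache-key builder (not taken from any particular cache implementation): when a response carries Vary: User-Agent, the stored variant is keyed by the requesting UA as well as the URI, which is why the representation remains cacheable under plain HTTP 1.1 rules:

```python
def vary_cache_key(uri, vary_header, request_headers):
    """Build the key under which a cache stores a response: the URI plus
    the values of every request header named in the response's Vary header.
    request_headers is a dict with lower-cased header names."""
    names = [n.strip().lower() for n in vary_header.split(",") if n.strip()]
    parts = [request_headers.get(n, "") for n in sorted(names)]
    return (uri, tuple(parts))

# Two UAs hitting the same URI get distinct cache entries when the
# response carried "Vary: User-Agent":
old = vary_cache_key("/report", "User-Agent", {"user-agent": "FooClient/3.1.3"})
new = vary_cache_key("/report", "User-Agent", {"user-agent": "FooClient/4.0"})
```

The same mechanism covers Accept, Accept-Language, etc.; User-Agent is just one more selecting header as far as the intermediary is concerned.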
Mike, I have no issues getting WebKit to render application/xhtml+xml with Chrome and Safari. Where are you seeing the problem? -----Original Message----- From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of mike amundsen Sent: 16 September 2011 08:16 To: Jan Algermissen Cc: Dilip Krishnan; Erik Mogensen; REST Discuss Subject: Re: [rest-discuss] Conneg based on User-Agent Jan: Not sure this is the same thing, but I have code that handles "exceptions" for conneg match results and those exceptions are based on UA reporting. here are some real-world examples: WebKit conneg will always pick any XML variant over any HTML variant offered. IOW, when given that chance, WebKit conneg results in "give me XML". Turns out WebKit does not _render_ the XML (just shows a blank page). I add an exception to make sure WebKit browsers (not XMLHttpRequest) get HTML if it's available. Microsoft Excel conneg will favor HTML (assuming a table) over CSV. I add an exception so that MS-Excel clients get CSV if it is available. There are (I think) some others, but those are ones that come up quite often and how i deal with them. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Fri, Sep 16, 2011 at 08:30, Jan Algermissen <jan.algermissen@...> wrote: > > On Sep 16, 2011, at 5:00 AM, Dilip Krishnan wrote: > >> Seems like "soap"-creep to me ... a la WS-addressing :) > > Why in particular? > >> >> >> One thing that doesn't feel right to me is the fact that the client is driving control flow. > > Is that any different from redirecting a mobile UA to a dedicated server? > > .... > > I just remembered the Accept-Features header, which is somehow > related: http://www.ietf.org/rfc/rfc2295.txt > > Jan > > > >> Also reminds me of Subbu's post on Media Types, Plumbing and >> Democracy, in the issues that it brings up. 
I do like the >> parametrized media types but its not widely supported; (atleast the >> last time I checked) >> >> Regards, >> Dilip Krishnan >> dilip.krishnan@... >> >> >> >> On Sep 15, 2011, at 5:46 PM, Jan Algermissen wrote: >> >>> >>> On Sep 16, 2011, at 12:20 AM, Erik Mogensen wrote: >>> >>>> On Thu, Sep 15, 2011 at 7:54 PM, Jan Algermissen <jan.algermissen@...> wrote: >>>> I think using the UserAgent header to negotiate representation features is a nice solution for non-human targeted situations, too. Essentially this means to negotiate incompatible media types based on Accept and the compatible variations in a given media type based on UserAgent. This might include the addition of certain Link headers. >>>> >>>> I agree with the "Avoid avoid" here. It sounds a lot like out-of-band knowledge, and in the opposite direction than is usual. "I know that V3.1.3 of FooClient doesn't use the xyzzy link relation, so I'll just save bandwidth by not sending it". The server could end up with a large amount of knowledge. See wurfl for an example of this going bad. >>>> >>>> Why not stick to Accept (which is that it's for) and use media type parameters? >>>> >>>> Accept: application/procurement;hypermediacontrols=new;q=1, >>>> application/procurement;hypermediacontrols=old;q=0.3 >>> >>> Been there, done that :-) Too enteprisey for my taste. Leads to >>> packaging up feature sets into version numbers. I'd always let the >>> Accept express the general capability of the client. (Which I read >>> as: "If you hand me application/atom I am pretty sure I can fulfill >>> my user's intent from there") Versioning makes the whole elegance go >>> away :-) >>> >>> I was thinking more in terms of 'hints' or 'best effort'. Sending mobile devices different content (incl. bu way of a redirect) seems ok. So why not "direct that old legacy product we sold 5 years ago to that old (set of) servers we keep around for these cases"? 
>>> >>> My original issue, BTW, was routing requests in a load balancer to direct clients that need a new feature to those services that have the new feature installed and are already up and running. In general: situations where you have scaled to N services behind one IP that run different versions of a service (e.g. because they are being upgraded one by one while keeping the site in operation). >>> >>> Jan >>> >>>> >>>> "new" and "old" are obviously bad choices, but you get the idea. That way the media type can express the different "variants" within compatibility, and clients can express their capabilities in a header designed for conneg. Evolvability FTW! >>> >>> >>> >>>> -- >>>> -mogsie- >>> >>> >>> >>> ------------------------------------ >>> >>> Yahoo! Groups Links >>> >>> >>> >> >> >> > > > > ------------------------------------ > > Yahoo! Groups Links > > > > ------------------------------------ Yahoo! Groups Links
i'd have to dig it up, but my recollection is that when the server has these possibilities for a resource:

- text/html
- application/xml
- application/json

and i am using the mimeparse library [1] on the server, and the client is the Chrome browser (not sure if this is the case for Safari, too)... the negotiated result is always the XML representation, not HTML. moreover, Chrome does not _render_ the XML on screen, just displays a blank page (view source shows the XML). NOTE: my XML "plugin" can kick in to re-display the XML, but that's a "dev" case on my workstation. it's been a while since i slogged through this and i long ago added a "shim" that returns the HTML variant to Chrome browsers by overriding this quirk. possibly the override is no longer needed (due to changes in Chrome, changes in the MimeParse lib, or some bone-headed coding bug i am still carrying around). [1] http://code.google.com/p/mimeparse/ mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Wed, Sep 28, 2011 at 15:04, Sebastien Lambla <seb@...> wrote: > Mike, > > I have no issues getting webkit rendering application/xhtml+xml with chrome and safari. Where are you seeing the problem? > > -----Original Message----- > From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of mike amundsen > Sent: 16 September 2011 08:16 > To: Jan Algermissen > Cc: Dilip Krishnan; Erik Mogensen; REST Discuss > Subject: Re: [rest-discuss] Conneg based on User-Agent > > Jan: > > Not sure this is the same thing, but I have code that handles "exceptions" for conneg match results and those exceptions are based on UA reporting. > > here are some real-world examples: > > WebKit conneg will always pick any XML variant over any HTML variant offered. IOW, when given that chance, WebKit conneg results in "give me XML" Turns out WebKit does not _render_ the XML (just shows a blank page). 
I add an exception to make sure WebKit browsers (not > XMLHttpRequest) get HTML if it's available. > > Mirosoft Excel conneg will favor HTML (assuming a table) over CSV. I add an exception so that MS-Excel clients get CSV if it is available. > > There are (I think) some others, but those are ones that come up quite often and how i deal with them. > > mca > http://amundsen.com/blog/ > http://twitter.com@mamund > http://mamund.com/foaf.rdf#me > > > > > > On Fri, Sep 16, 2011 at 08:30, Jan Algermissen <jan.algermissen@...> wrote: >> >> On Sep 16, 2011, at 5:00 AM, Dilip Krishnan wrote: >> >>> Seems like "soap"-creep to me ... a la WS-addressing :) >> >> Why in particular? >> >>> >>> >>> One thing that doesn't feel right to me is the fact that the client is driving control flow. >> >> Is that any different from redirecting a mobile UA to a dedicated server? >> >> .... >> >> I just remembered the Accept_Features header, which is somehow >> related: http://www.ietf.org/rfc/rfc2295.txt >> >> Jan >> >> >> >>> Also reminds of a Subbu's post on Media Types, Plumbing and >>> Democracy, in the issues that it brings up. I do like the >>> parametrized media types but its not widely supported; (atleast the >>> last time I checked) >>> >>> Regards, >>> Dilip Krishnan >>> dilip.krishnan@... >>> >>> >>> >>> On Sep 15, 2011, at 5:46 PM, Jan Algermissen wrote: >>> >>>> >>>> On Sep 16, 2011, at 12:20 AM, Erik Mogensen wrote: >>>> >>>>> On Thu, Sep 15, 2011 at 7:54 PM, Jan Algermissen <jan.algermissen@...> wrote: >>>>> I think using the UserAgent header to negotiate representation features is a nice solution for non-human targeted situations, too. Essentially this means to negotiate incompatible media types based on Accept and the compatible variations in a given media type based on UserAgent. This might include the addition of certain Link headers. >>>>> >>>>> I agree with the "Avoid avoid" here. It sounds a lot like out-of-band knowledge, and in the opposite direction than is usual. 
"I know that V3.1.3 of FooClient doesn't use the xyzzy link relation, so I'll just save bandwidth by not sending it". The server could end up with a large amount of knowledge. See wurfl for an example of this going bad. >>>>> >>>>> Why not stick to Accept (which is that it's for) and use media type parameters? >>>>> >>>>> Accept: application/procurement;hypermediacontrols=new;q=1, >>>>> application/procurement;hypermediacontrols=old;q=0.3 >>>> >>>> Been there, done that :-) Too enteprisey for my taste. Leads to >>>> packaging up feature sets into version numbers. I'd always let the >>>> Accept express the general capability of the client. (Which I read >>>> as: "If you hand me application/atom I am pretty sure I can fulfill >>>> my user's intent from there") Versioning makes the whole elegance go >>>> away :-) >>>> >>>> I was thinking more in terms of 'hints' or 'best effort'. Sending mobile devices different content (incl. bu way of a redirect) seems ok. So why not "direct that old legacy product we sold 5 years ago to that old (set of) servers we keep around for these cases"? >>>> >>>> My original issue, BTW, was routing requests in a load balancer to direct clients that need a new feature to those services that have the new feature installed and are already up and running. In general: situations where you have scaled to N services behind one IP that run different versions of a service (e.g. because they are being upgraded one by one while keeping the site in operation). >>>> >>>> Jan >>>> >>>>> >>>>> "new" and "old" are obviously bad choices, but you get the idea. That way the media type can express the different "variants" within compatibility, and clients can express their capabilities in a header designed for conneg. Evolvability FTW! >>>> >>>> >>>> >>>>> -- >>>>> -mogsie- >>>> >>>> >>>> >>>> ------------------------------------ >>>> >>>> Yahoo! Groups Links >>>> >>>> >>>> >>> >>> >>> >> >> >> >> ------------------------------------ >> >> Yahoo! 
Groups Links >> >> >> >> > > > ------------------------------------ > > Yahoo! Groups Links > > > >
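The WebKit quirk Mike describes comes down to q-values. The sketch below is not the mimeparse library itself but a simplified, self-contained stand-in I'm assuming for illustration: a WebKit-era Accept header lists application/xml with no q parameter (implying q=1.0) ahead of text/html;q=0.9, so a server matching strictly by client preference hands the browser XML:

```python
def parse_accept(header):
    """Parse an Accept header into (media_range, q) pairs."""
    out = []
    for part in header.split(","):
        bits = part.strip().split(";")
        mtype, q = bits[0].strip(), 1.0
        for p in bits[1:]:
            k, _, v = p.strip().partition("=")
            if k == "q":
                q = float(v)
        out.append((mtype, q))
    return out

def best_match(available, accept_header):
    """Pick the server variant with the highest client q-value.
    Ties go to the variant listed first by the server."""
    ranges = parse_accept(accept_header)
    def score(mtype):
        qs = [q for r, q in ranges
              if r == mtype or r == "*/*" or
              (r.endswith("/*") and mtype.startswith(r[:-1]))]
        return max(qs, default=0.0)
    return max(available, key=score)

# A WebKit-style Accept header rates application/xml at q=1.0 but
# text/html only at q=0.9, so conneg picks XML over HTML:
webkit = "application/xml,application/xhtml+xml,text/html;q=0.9,*/*;q=0.8"
best_match(["text/html", "application/xml", "application/json"], webkit)
# → 'application/xml'
```

Mike's "shim" amounts to overriding this result for browser UAs, since the browser asks for XML it will not actually render.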
Ah ok, not tested that scenario, any xhtml usually has html and xhtml conneg'd on the server, with the html having a heavier weight so that anyone sending in Accept: */* will get html, but anyone requesting xhtml specifically will get the xhtml version first :) -----Original Message----- From: mca@amundsen.com [mailto:mca@...] On Behalf Of mike amundsen Sent: 28 September 2011 20:40 To: Sebastien Lambla Cc: Jan Algermissen; Dilip Krishnan; Erik Mogensen; REST Discuss Subject: Re: [rest-discuss] Conneg based on User-Agent i'd have to dig it up, but my recollection is that when the server has these possibilities for a resource: - text/html - application/xml - application/json and i am using the mimeparse library [1] on the server, and the client is Chrome browser (not sure if this the case for Safari, too)... the negotiated result is always the XML representation, not HTML. moreover, Chrome does not _render_ the XML on screen, just displays a blank (view source shows the XML). NOTE: my XML "plugin" can kick in to re-display the XML, but that's a "dev" case on my workstation. it's been a while since i slogged through this and i long ago added a "shim" that returns the HTML variant to Chrome browsers by overriding this quirk. possibly the override is no longer needed (due to changes in Chrome, changes in the MimeParse lib, or some bone-headed coding bug i am still carrying around). [1] http://code.google.com/p/mimeparse/ mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Wed, Sep 28, 2011 at 15:04, Sebastien Lambla <seb@...> wrote: > Mike, > > I have no issues getting webkit rendering application/xhtml+xml with chrome and safari. Where are you seeing the problem? 
> > -----Original Message----- > From: rest-discuss@yahoogroups.com > [mailto:rest-discuss@yahoogroups.com] On Behalf Of mike amundsen > Sent: 16 September 2011 08:16 > To: Jan Algermissen > Cc: Dilip Krishnan; Erik Mogensen; REST Discuss > Subject: Re: [rest-discuss] Conneg based on User-Agent > > Jan: > > Not sure this is the same thing, but I have code that handles "exceptions" for conneg match results and those exceptions are based on UA reporting. > > here are some real-world examples: > > WebKit conneg will always pick any XML variant over any HTML variant > offered. IOW, when given that chance, WebKit conneg results in "give > me XML" Turns out WebKit does not _render_ the XML (just shows a blank > page). I add an exception to make sure WebKit browsers (not > XMLHttpRequest) get HTML if it's available. > > Mirosoft Excel conneg will favor HTML (assuming a table) over CSV. I add an exception so that MS-Excel clients get CSV if it is available. > > There are (I think) some others, but those are ones that come up quite often and how i deal with them. > > mca > http://amundsen.com/blog/ > http://twitter.com@mamund > http://mamund.com/foaf.rdf#me > > > > > > On Fri, Sep 16, 2011 at 08:30, Jan Algermissen <jan.algermissen@...> wrote: >> >> On Sep 16, 2011, at 5:00 AM, Dilip Krishnan wrote: >> >>> Seems like "soap"-creep to me ... a la WS-addressing :) >> >> Why in particular? >> >>> >>> >>> One thing that doesn't feel right to me is the fact that the client is driving control flow. >> >> Is that any different from redirecting a mobile UA to a dedicated server? >> >> .... >> >> I just remembered the Accept_Features header, which is somehow >> related: http://www.ietf.org/rfc/rfc2295.txt >> >> Jan >> >> >> >>> Also reminds of a Subbu's post on Media Types, Plumbing and >>> Democracy, in the issues that it brings up. 
I do like the >>> parametrized media types but its not widely supported; (atleast the >>> last time I checked) >>> >>> Regards, >>> Dilip Krishnan >>> dilip.krishnan@... >>> >>> >>> >>> On Sep 15, 2011, at 5:46 PM, Jan Algermissen wrote: >>> >>>> >>>> On Sep 16, 2011, at 12:20 AM, Erik Mogensen wrote: >>>> >>>>> On Thu, Sep 15, 2011 at 7:54 PM, Jan Algermissen <jan.algermissen@...> wrote: >>>>> I think using the UserAgent header to negotiate representation features is a nice solution for non-human targeted situations, too. Essentially this means to negotiate incompatible media types based on Accept and the compatible variations in a given media type based on UserAgent. This might include the addition of certain Link headers. >>>>> >>>>> I agree with the "Avoid avoid" here. It sounds a lot like out-of-band knowledge, and in the opposite direction than is usual. "I know that V3.1.3 of FooClient doesn't use the xyzzy link relation, so I'll just save bandwidth by not sending it". The server could end up with a large amount of knowledge. See wurfl for an example of this going bad. >>>>> >>>>> Why not stick to Accept (which is that it's for) and use media type parameters? >>>>> >>>>> Accept: application/procurement;hypermediacontrols=new;q=1, >>>>> application/procurement;hypermediacontrols=old;q=0.3 >>>> >>>> Been there, done that :-) Too enteprisey for my taste. Leads to >>>> packaging up feature sets into version numbers. I'd always let the >>>> Accept express the general capability of the client. (Which I read >>>> as: "If you hand me application/atom I am pretty sure I can fulfill >>>> my user's intent from there") Versioning makes the whole elegance >>>> go away :-) >>>> >>>> I was thinking more in terms of 'hints' or 'best effort'. Sending mobile devices different content (incl. bu way of a redirect) seems ok. So why not "direct that old legacy product we sold 5 years ago to that old (set of) servers we keep around for these cases"? 
>>>> >>>> My original issue, BTW, was routing requests in a load balancer to direct clients that need a new feature to those services that have the new feature installed and are already up and running. In general: situations where you have scaled to N services behind one IP that run different versions of a service (e.g. because they are being upgraded one by one while keeping the site in operation). >>>> >>>> Jan >>>> >>>>> >>>>> "new" and "old" are obviously bad choices, but you get the idea. That way the media type can express the different "variants" within compatibility, and clients can express their capabilities in a header designed for conneg. Evolvability FTW! >>>> >>>> >>>> >>>>> -- >>>>> -mogsie- >>>> >>>> >>>> >>>> ------------------------------------ >>>> >>>> Yahoo! Groups Links >>>> >>>> >>>> >>> >>> >>> >> >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> >> > > > ------------------------------------ > > Yahoo! Groups Links > > > >
>
>> But don't DTOs also constitute a client coupling, specifically to the
>> parameters required to complete a particular use case?
>
> True, but using DTOs only couples the client to the representation and
> not the domain model itself.
>
So if I understand, you are basically saying that DTOs act as a
projection of the entity and therefore address versioning issues if the
entity changes its structure. I can indeed see that this is an issue if
the RESTful client is hard-wired to render particular information from
specific representations.
On the other hand, a RESTful client could be written in a more generic
fashion such that it would be resilient to changes (in the same way that
a web browser can render any HTML page, and doesn't care how the page's
content changes over time).
For such clients, it wouldn't be necessary to create DTO projections
from entities, would it?
>> In other words, if the use case evolves so that it now requires another
>> input ("would you like a pastry with your coffee, sir?") then the client
>> will still break, won't it?
>
> Not necessarily, provided the design is right. In your example what's
> changing is the state machine and not the resource itself. So in your
> example the server would drive a different workflow using the same
> resource primitives.
Understood.
At the risk of revisiting debates that have been had here many times
before (i.e. please bear with me), I wonder whether Roy intended REST
resources to represent application state, i.e. use cases? I ask because
in the comments to his HATEOAS blog post he says:
"Don�t confuse application state (the state of the user�s application of
computing to a given task) with resource state (the state of the world
as exposed by a given service). They are not the same thing."
To me this suggests that designing resources that represent a single
user's application state isn't at all what Roy intended.
FWIW, I would have thought that using code-on-demand to track
application state is more in keeping with the REST principles?
>
> Good
> examples of that in the RESTbucks
> <http://www.infoq.com/articles/webber-rest-workflow> article or Web
> Intents <http://webintents.org/>.
>
Thanks for these links. NB: the Webber article is where I followed the
link through to Roy's comment. Web Intents does sound like an
interesting idea, though.
-- Dan
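A rough sketch of the DTO-as-projection idea from the exchange above (class and field names here are mine, purely illustrative): the client couples only to the projected representation, so the domain entity can grow or restructure without breaking it:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    # Domain entity: free to change over time on the server side.
    id: int
    name: str
    internal_credit_score: int  # never leaves the server

def to_dto(customer):
    """Project the entity onto the published representation.
    Clients are coupled only to these keys, not to the entity itself."""
    return {"id": customer.id, "name": customer.name}

to_dto(Customer(3, "Ann", 710))  # → {'id': 3, 'name': 'Ann'}
```

Adding or renaming internal fields on Customer leaves to_dto's output (and so the client) untouched, which is the versioning benefit claimed above.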
I just wondered why there is no "item" relation (or something similar) registered at [1] with the semantics of "this is an item of this collection resource". There is one e.g. for the first item (start). I've seen some examples around the web - recently [2] - using a proprietary "item" relation. I would like to register this relation, but I could not see any link to the registration form/procedure. Furthermore, I don't know if a registration would be accepted without "item" being a part of some RFC (like all the other relation types in the registry...) The use case for this is: you could return a collection of links instead of embedded resources for a "collection resource". [1] http://www.iana.org/assignments/link-relations/link-relations.xml [2] http://stateless.co/hal_specification.html#examples
Jakob: I have an I-D in process now: http://tools.ietf.org/html/draft-amundsen-item-and-collection-link-relations-02 Feel free to make comments/suggestions here: http://www.ietf.org/mail-archive/web/link-relations/current/msg00270.html FWIW - this I-D stalled while I was away on holiday in September. I plan on pressing forward ASAP. Any and all comments will be appreciated. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Sun, Oct 2, 2011 at 09:45, Jakob Strauch <jakob.strauch@...> wrote: > I just wondered why there is no "item" relation (or something similar) registered at [1] with the semantics of "this is an item of this collection resource". There is one e.g. for the first item (start). I've seen some examples around the web - recently [2] - using a proprietary "item" relation. > > I would like to register this relation, but I could not see any link to the registration form/procedure. Furthermore, I don't know if a registration would be accepted without "item" being a part of some RFC (like all the other relation types in the registry...) > > The use case for this is: you could return a collection of links instead of embedded resources for a "collection resource". > > [1] http://www.iana.org/assignments/link-relations/link-relations.xml > [2] http://stateless.co/hal_specification.html#examples > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
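For illustration, the use case Jakob mentions (a collection returning "item" links rather than embedded resources) might be rendered like this; the shape below is a hypothetical, loosely HAL-style structure of my own, not something prescribed by the I-D:

```python
import json

def collection_with_item_links(base, ids):
    """Render a collection resource as a list of 'item' links rather
    than embedded representations; clients follow only the links
    they actually need."""
    return json.dumps({
        "_links": {
            "self": {"href": base},
            "item": [{"href": "%s/%d" % (base, i)} for i in ids],
        }
    })

collection_with_item_links("/orders", [1, 2, 3])
```

The trade-off is extra round trips per member versus a smaller, cache-friendly collection document.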
I need to update a single field of a resource and I am thinking of PUTting the new field value in a URL parameter, leaving the request body empty. Is this an acceptable practice? Thanks.
Not really. If you want to process partial updates then use PATCH, or POST on the full resource, as it makes the interaction clearer. On Mon, Oct 3, 2011 at 3:09 PM, mark69_fnd <mark.kharitonov@...> wrote: > I need to update a single field of a resource and I think to PUT the new > field value in the URL parameter leaving empty request body. > > Is it an acceptable practice? > > Thanks. > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
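A sketch of the PATCH route suggested above, as a server might apply it. The merge semantics here are my own assumption, in the spirit of the JSON merge-patch drafts, not a standard implementation: fields present in the patch body overwrite the stored resource, and the URL carries no payload at all:

```python
def apply_merge_patch(resource, patch):
    """Apply a merge-style PATCH body to a stored resource: keys present
    in the patch overwrite the resource's fields; a None value deletes
    a field. The original dict is left unmodified."""
    out = dict(resource)
    for k, v in patch.items():
        if v is None:
            out.pop(k, None)
        else:
            out[k] = v
    return out

customer = {"name": "Ann", "city": "Oslo"}
apply_merge_patch(customer, {"city": "Bergen"})
# → {'name': 'Ann', 'city': 'Bergen'}
```

The single changed field travels in the request body, where the media type can define its meaning, rather than in the URI.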
On Oct 3, 2011, at 4:09 PM, mark69_fnd wrote: > I need to update a single field of a resource and I think to PUT the new field value in the URL parameter leaving empty request body. > > Is it an acceptable practice? No, because the URI identifies the target of the request. It is not to be abused for carrying payload. Besides what Mike said, you can also define a resource for the 'field' and PUT to that:

GET /customer/3

200 Ok
...
<customer>
  <name>..</name>
  <address href="./address"> address data </address>
</customer>

PUT /customer/3/address

<address> address data </address>

Note that the example lacks media types completely, which you need to define the data format and the link semantics. Jan > > Thanks. > >
On Mon, Oct 3, 2011 at 8:34 AM, Jan Algermissen <jan.algermissen@...> wrote: > > On Oct 3, 2011, at 4:09 PM, mark69_fnd wrote: > >> I need to update a single field of a resource and I think to PUT the new field value in the URL parameter leaving empty request body. >> >> Is it an acceptable practice? > > No, because the URI identifies the target of the request. It is not to be abused for carrying payload. > > Besides what Mike said, you can also define a resource for the 'field' and PUT to that: > > GET /customer/3 > > 200 Ok > ... > > <customer> > <name>..</name> > <address href="./address"> address data </address> > </customer> > > PUT /customer/3/address > > <address> address data </address> > > Note that the example lacks media types completely which you need to define the data format and the link semantics. Yeah, basically what Jan said. Mike was addressing that the resource wasn't complete and that there are better mechanisms for updating a partial resource. Jan is addressing that query parameters are simply not a valid representation for any resource. URLs are names, and query parameters are part of URLs. Regards, Will Hartung (willh@...)
On 2011-10-02 15:45, Jakob Strauch wrote: > ... > I would like to register this relation, but i could not see any link to > the registration form/procedure. Furthermore, i don't know if a > registration would be accepted without "item" being a part of some RFC > (like all the other relation types in the registry...) > ... The registry page links to RFC 5988, and that defines the registry procedure; see <https://tools.ietf.org/html/rfc5988#section-6.2.1>. Best regards, Julian
Hi, can anyone point me to a RESTful equivalent (in Java, too) of something like Apache Wicket or JSF? 'Equivalent' in terms of maturity and rapid development. Jan
On Tue, Oct 4, 2011 at 7:40 AM, Jan Algermissen <jan.algermissen@...> wrote: > ** > > > Hi, > > can anyone point me to a RESTful equivalent (in Java, too) of something > like Apache Wicket or JSF? > > 'Equivalent' in terms of maturity and rapid development. > What exactly are you looking for? Basically a stateless Java component framework with reasonable URL control? Regards, Will Hartung (willh@...)
On Oct 4, 2011, at 6:10 PM, Christopher Currie wrote: > The relevant JSR is JSR-311: JAX-RS. :-) Erm, no .... I am looking not for the REST part, but for the shiny-widgets and integrated development stuff. You know, hook that button to that method of that class etc.. Jan > The reference implementation, Jersey, is high quality and very stable: > > http://jersey.java.net > > There are multiple other implementations of JSR-311 available, including Apache CXF, RESTEasy (JBoss), and Restlet (an extension, not by default). I haven't tried any of them, so I can't speak to their maturity. > > HTH, > Christopher > > On Tue, Oct 4, 2011 at 7:40 AM, Jan Algermissen <jan.algermissen@...> wrote: > > Hi, > > can anyone point me to a RESTful equivalent (in Java, too) of something like Apache Wicket or JSF? > > 'Equivalent' in terms of maturity and rapid development. > > Jan > > >
On Oct 4, 2011, at 6:52 PM, Will Hartung wrote: > > > On Tue, Oct 4, 2011 at 7:40 AM, Jan Algermissen <jan.algermissen@...> wrote: > > Hi, > > can anyone point me to a RESTful equivalent (in Java, too) of something like Apache Wicket or JSF? > > 'Equivalent' in terms of maturity and rapid development. > > > What exactly are you looking for? Basically a stateless Java component framework with reasonable URL control? Mostly for something that provides the good stuff of JSF or Wicket (the GUI-building ease) and avoids session crap, POST-based retrieval horror and the like. Currently taking a look at the Play Framework, which advertises itself promisingly. Jan > > Regards, > > Will Hartung > (willh@...) >
In (a follow-on comment to his own) blog post [1], Roy stated: "Don't confuse application state (the state of the user's application of computing to a given task) with resource state (the state of the world as exposed by a given service). They are not the same thing." To my naive way of thinking, this suggests that use cases (which represent an individual user's journey through the system) should not be represented as resources. In contrast, entities clearly do represent "the state of the world", and so would seem perfectly fine to be considered as resources. (OK, they might need wrapping up in DTOs or projections to allow client/server to evolve differently, but that's a different point.) Anyway, my question is: can someone unpack Roy's statement for me? What is the difference between what Roy calls "application state" (which he says isn't a resource) and use case state (which many here seem to consider perfectly legit as a resource)? thx Dan [1] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven#comment-744
On Thu, Oct 6, 2011 at 4:00 PM, Dan <dan@...> wrote: > ** > > > In (a follow-on comment to his own) blog post [1], Roy stated: > > "Don't confuse application state (the state of the user's application of > computing to a given task) with resource state (the state of the world as > exposed by a given service). They are not the same thing." > > Anyway, my question is: can someone unpack Roy's statement for me? What is > the difference between what Roy calls "application state" (which he says > isn't a resource) vs use case state (which many here seem to consider > perfectly legit as a resource). > Resource state is "I have FM Radios, they're blue, and I have 10 of them". Application state is "I have an FM Radio, toothbrush, box of crayons, and quart of Penzoil in my shopping cart". The system (in this case) doesn't care or know about "shopping carts". It only cares about items and orders. Your application may be kind enough to accumulate stuff into a cart, but when you place the order, the system takes the entire order (all of the items) all at once (and deals with any issues that the system may have with your request, such as out of stock or back orders, or whatever). But all the picking, browsing, searching, last item seen, etc. - those are parts of the application, part of the user interface. Regards, Will Hartung (willh@...)
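Will's shopping-cart distinction can be sketched as follows (class and field names are hypothetical): the cart accumulates on the client side as application state, and only the submitted order becomes resource state that the service knows about:

```python
class ShoppingCart:
    """Application state: lives with the user agent; the server never
    sees it while the user is still picking and browsing."""
    def __init__(self):
        self.items = []

    def add(self, sku, qty=1):
        self.items.append((sku, qty))

    def to_order(self):
        # Only at checkout does the whole order hit the server as one
        # request, becoming resource state the service tracks.
        return {"lines": [{"sku": s, "qty": q} for s, q in self.items]}

cart = ShoppingCart()
cart.add("fm-radio")
cart.add("toothbrush", 2)
cart.to_order()
```

If stock problems arise, the server deals with them when the whole order arrives, not while the cart is being filled.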
Dan: i've worked on this idea (app state vs. resource state) a number of times and will offer the following as a way to start your own way of thinking about it. First (again) Fielding's comment:

"Don’t confuse application state (the state of the user’s application of computing to a given task) with resource state (the state of the world as exposed by a given service). They are not the same thing." - Fielding, blog post 2008

First-level reduction:

"Don't confuse ... the state of the user's application ... with ... the state of ... a given service."

A refinement via assumption:

user's application === browser
given service === server

Yields:

Don't confuse the state of the browser with the state of the server

This is especially true if you keep in mind that the server is likely exposing a unique "state of the world" for each user interacting with that service. Just a thought... mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Thu, Oct 6, 2011 at 19:00, Dan <dan@haywood-associates.co.uk> wrote: > In (a follow-on comment to his own) blog post [1], Roy stated: > > "Don't confuse application state (the state of the user's application of computing to a given task) with resource state (the state of the world as exposed by a given service). They are not the same thing." > > To my naive way of thinking, this suggests that use cases (which represent an individual user's journey through the system) should not be represented as resources. > > In contrast, entities clearly do represent "the state of the world", and so would seem perfectly fine to be considered as resources. (OK, they might need wrapping up in DTOs or projections to allow client/server to evolve differently, but that's a different point). > > Anyway, my question is: can someone unpack Roy's statement for me? What is the difference between what Roy calls "application state" (which he says isn't a resource) vs use case state (which many here seem to consider perfectly legit as a resource). > > thx > Dan > > [1] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven#comment-744 > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
On Thu, Oct 6, 2011 at 7:00 PM, Dan <dan@...> wrote: > Anyway, my question is: can someone unpack Roy's statement for me? What is the difference between what Roy calls "application state" (which he says isn't a resource) vs use case state (which I here seem to consider perfectly legit as a resource). I don't consider Roy's description opaque, but perhaps that's just me. Consider a banking application; the balance of an account is resource state, while the fact that the browser is currently showing the balance (versus, say, showing the bill payment form) is application state. Mark.
We're hoping to support versioning in our API through media types[1] and I just want to clarify how this would work in practice. Whereas URIs are generated by the server, media types must be supplied by the client and so presumably have to come from the media type documentation. The documentation would have to define the media types available for the API entrypoint (vnd.corp.app.Api, vnd.corp.app.Api-v2) and also the media types that can be requested when following hyperlinks from those representations. Effectively the client has to know that when following hyperlink X from media type Y, it has to request media type Z. When clients are general enough to work on any media types provided by the API they can presumably request more general application/XML or application/JSON media types and be supplied with the most recent version of the media type served by each URI. Does this sound right? Cheers, Jim [1] http://barelyenough.org/blog/2008/05/versioning-rest-web-services/
On Oct 7, 2011, at 12:09 PM, Jim Purbrick wrote: > We're hoping to support versioning in our API through media types[1] > and I just want to clarify how this would work in practice. > > Whereas URIs are generated by the server, media types must be supplied > by the client and so presumably have to come from the media type > documentation. > > The documentation would have to define the media types available for > the API entrypoint (vnd.corp.app.Api, vnd.corp.app.Api-v2) and also > the media types that can be requested when following hyperlinks from > those representations. In a sense, yes. Though this is more circumstantial knowledge about your domain. E.g. there is *nothing* that specifies that images linked to from HTML <img> tags come as media types image/*; nevertheless, (most) browsers say Accept: image/* when they request the target of the src attribute of an <img> tag. Likewise with stylesheets. It makes perfect sense to not document these things as part of the API (of course not!) nor as part of link semantics (do *not* say that foo links point to resources that come as application/bar). Let that be common knowledge communicated by human means. There is a lot of common sense involved also (who would implement an Accept: audio/* for <img> tags when image/* is known to exist). Yes, this is arguably very loose, but we are looking for loose coupling, aren't we? :-) [The issue behind all this is that in networked systems you cannot control what the server will do tomorrow anyhow, so do not try and think you can.] > > Effectively the client has to know that when following hyperlink X > from media type Y, it has to request media type Z. See above. The better way to think is: "Given my current intent, what are the media types I know that allow me to fulfill that intent?" That is what governs the selection of what to put in the Accept header. Also, be prepared for the 406 and make the most out of the body that comes with the 406. E.g. 
use it to inform whoever will be tasked with fixing things. > > When clients are general enough to work on any media types provided by > the API they can presumably request more general application/XML or > application/JSON media types and be supplied with the most recent > version of the media type served by each URI. XML and JSON are insufficient media types. They are not specific enough because all they tell the recipient is that it can parse the message as XML or JSON. Mint your own types - and register them with IANA when you expose them to the public. > > Does this sound right? In part - anyhow, this is a fascinating and rewarding aspect of REST to explore. To me it was very rewarding to ask: "How does the server developer know what to send?" (and what changes are allowed when evolving the server) "How does the client developer know what to expect?" Always keeping in mind that REST is the solution to the problem that client and server developer cannot communicate[1]. Jan [1] Amazon cannot contact all customers for an API change. > > Cheers, > > Jim > > [1] http://barelyenough.org/blog/2008/05/versioning-rest-web-services/ >
On Oct 7, 2011, at 1:07 PM, Jan Algermissen wrote: > Though this is more circumstantial knowledge about your domain s/circumstantial/general/ [ Looked up the word which sounded right to a non-native but turned out to be totally wrong. Sorry. ] Jan
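Jan's advice here (mint specific media types, negotiate on them, be prepared for the 406) can be sketched from the server side. This is a minimal, illustrative sketch, not from the thread: the vnd.corp.app type names echo Jim's hypothetical examples, and real content negotiation would also honour q-values, which this deliberately skips.

```python
# A toy server-side selection routine: pick a representation from the
# Accept header, or signal 406. Type names are hypothetical.

def select_media_type(accept_header, available):
    """Return the first acceptable media type, or None (-> respond 406)."""
    accepted = [part.split(";")[0].strip().lower()
                for part in accept_header.split(",")]
    for media_type in accepted:
        if media_type in available:
            return media_type
        if media_type in ("*/*", "application/*"):
            # Wildcard: fall back to the server's preferred (newest) type.
            return available[0]
    return None

AVAILABLE = [
    "application/vnd.corp.app.api-v2+xml",  # preferred, newest version
    "application/vnd.corp.app.api+xml",
]

print(select_media_type("application/vnd.corp.app.api+xml", AVAILABLE))
print(select_media_type("*/*", AVAILABLE))
print(select_media_type("text/html", AVAILABLE))  # None: respond 406
```

A generic client that only knows application/xml would get the newest version via the wildcard branch, which matches Jim's original question about general clients.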
Application state is defined by a representation that is handed to a user agent. It represents a snapshot of the client's narrow view of the world as it navigates around. With each request, a small subset of the overall server state is viewed and certain transitions to other application states are offered. For example, it might be a page with a list of 25 mailing list posts. It may have links labelled with relations like "next" and "previous" or a way to POST a new submission to be added to the collection. The current representation being processed reflects the server's communication about its resource state at a moment in time when the server answered a request for a resource in whatever media type. While you are staring at a list of items on the screen of your browser, someone else may have posted a new mail to the list. Or the site's owner may have redacted an email because it was spam. Or changed a configuration setting such as the default number of items shown on a page or a banner message or ad image. These are changes to resource state. They affect how the server might produce a representation for a future request. --- In rest-discuss@yahoogroups.com, "Dan" <dan@...> wrote: > > In (a follow-on comment to his own) blog post [1], Roy stated: > > "Don't confuse application state (the state of the user's application of computing to a given task) with resource state (the state of the world as exposed by a given service). They are not the same thing."
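The mailing-list example above can be made concrete with a toy model (my own illustration, not from the post): resource state lives on the server and keeps changing, while the representation the client holds is a frozen snapshot offering transitions.

```python
# Resource state vs. application state, as a toy paging example.

posts = ["post-1", "post-2", "post-3"]   # resource state, on the server

def representation(page, per_page=2):
    """Build the snapshot a client will hold as its application state."""
    items = posts[page * per_page:(page + 1) * per_page]
    links = {}
    if (page + 1) * per_page < len(posts):
        links["next"] = page + 1         # transition offered to the client
    if page > 0:
        links["previous"] = page - 1
    return {"items": items, "links": links}

snapshot = representation(1)   # the client's application state: a snapshot
posts.append("post-4")         # resource state changes on the server...
print(snapshot)                # ...but the snapshot is unaffected
print(representation(1))       # a fresh request reflects the new state
```

The client's snapshot stays the same even though the server's "state of the world" moved on, which is exactly the distinction Roy's comment draws.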
Just use "self" within the context of a child item:
<things>
<thing id="1" name="thing1">
<atom:link rel="self" href="http://example.com/things/1"/>
</thing>
<thing id="2" name="thing2">
<atom:link rel="self" href="http://example.com/things/2"/>
</thing>
</things>
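For what it's worth, a client consuming the representation above might extract the per-item "self" URIs like this (a sketch assuming the atom prefix is bound to the usual Atom namespace on the document element):

```python
import xml.etree.ElementTree as ET

# Pull each item's "self" link out of the collection representation.
ATOM = "http://www.w3.org/2005/Atom"

DOC = """\
<things xmlns:atom="http://www.w3.org/2005/Atom">
  <thing id="1" name="thing1">
    <atom:link rel="self" href="http://example.com/things/1"/>
  </thing>
  <thing id="2" name="thing2">
    <atom:link rel="self" href="http://example.com/things/2"/>
  </thing>
</things>
"""

def item_self_links(xml_text):
    """Map each item's id to its 'self' URI."""
    root = ET.fromstring(xml_text)
    links = {}
    for thing in root.findall("thing"):
        link = thing.find(f"{{{ATOM}}}link[@rel='self']")
        if link is not None:
            links[thing.get("id")] = link.get("href")
    return links

print(item_self_links(DOC))
```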
--- In rest-discuss@yahoogroups.com, "Jakob Strauch" <jakob.strauch@...> wrote:
>
> > I just wondered why there is no "item" relation (or something similar) registered at [1] with the semantics of "this is an item of this collection resource".
Here's an explanation by Roy: http://lists.w3.org/Archives/Public/www-tag/2010Oct/0100.html Ivan On Fri, Oct 7, 2011 at 01:00, Dan <dan@haywood-associates.co.uk> wrote: > ** > > > In (a follow-on comment to his own) blog post [1], Roy stated: > > "Don't confuse application state (the state of the user's application of > computing to a given task) with resource state (the state of the world as > exposed by a given service). They are not the same thing." > > To my naive way of thinking, this suggests that use cases (which represent > an individual user's journey through the system) should not be represented as > resources. > > In contrast, entities clearly do represent "the state of the world", and > so would seem perfectly fine to be considered as resources. (OK, they might > need wrapping up in DTOs or projections to allow client/server to evolve > differently, but that's a different point). > > Anyway, my question is: can someone unpack Roy's statement for me? What is > the difference between what Roy calls "application state" (which he says > isn't a resource) vs use case state (which I here seem to consider > perfectly legit as a resource). > > thx > Dan > > [1] > http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven#comment-744 > > >
As far as i understand link relations, they refer to the context of
the resource URI. From [1]:
A link can be viewed as a statement of the form "{context IRI} has a
{relation type} resource at {target IRI}, which has {target
attributes}."
In your example, does "self" not refer to the resource's context (the collection)?
[1] http://tools.ietf.org/html/draft-nottingham-http-link-header-10#page-4
--- In rest-discuss@yahoogroups.com, "bryan_w_taylor" <bryan_w_taylor@...> wrote:
>
>
>
> Just use "self" within the context of a child item:
>
> <things>
> <thing id="1" name="thing1">
> <atom:link rel="self" href="http://example.com/things/1"/>
> </thing>
> <thing id="2" name="thing2">
> <atom:link rel="self" href="http://example.com/things/2"/>
> </thing>
> </things>
>
> --- In rest-discuss@yahoogroups.com, "Jakob Strauch" <jakob.strauch@> wrote:
> >
> > I just wondered why there is no "item" relation (or something similar) registered at [1] with the semantics of "this is an item of this collection resource".
>
If I remember correctly, the Context IRI is defined by the Media Type.
Atom (RFC 4287) defines the containing element as context (" The value
"self" signifies that the IRI in the value of the href attribute
identifies a resource equivalent to the containing element."). However,
Web Linking (RFC 5988) defines the IRI of the requested resource as
default Context IRI but allows overriding it with an anchor parameter.
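As a rough illustration of the RFC 5988 behaviour described here (not a complete Link header parser; it ignores multiple links and most quoting edge cases), the anchor parameter, when present, replaces the requested URI as the context:

```python
import re

# Sketch: derive a link's context IRI from an HTTP Link header.

def parse_link(header, request_uri):
    target, param_text = re.match(r'<([^>]*)>\s*;\s*(.*)', header).groups()
    params = dict(re.findall(r'(\w+)="?([^";]+)"?', param_text))
    return {
        "target": target,
        "rel": params.get("rel"),
        # Default context is the IRI of the requested resource;
        # an anchor parameter overrides it.
        "context": params.get("anchor", request_uri),
    }

print(parse_link('<http://example.com/things/1>; rel="self"',
                 "http://example.com/things"))
print(parse_link('<http://example.com/ch2>; rel="next"; anchor="http://example.com/ch1"',
                 "http://example.com/book"))
```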
Am 08.10.2011 15:32, schrieb Jakob Strauch:
>
> As far as i understand link relations, they refer to the context of
> the resource URI. From [1]:
>
> A link can be viewed as a statement of the form "{context IRI} has a
> {relation type} resource at {target IRI}, which has {target
> attributes}."
>
> In your example, does "self" not refer to the resource's context (the
> collection)?
>
> [1] http://tools.ietf.org/html/draft-nottingham-http-link-header-10#page-4
>
> --- In rest-discuss@yahoogroups.com
> <mailto:rest-discuss%40yahoogroups.com>, "bryan_w_taylor"
> <bryan_w_taylor@...> wrote:
> >
> >
> >
> > Just use "self" within the context of a child item:
> >
> > <things>
> > <thing id="1" name="thing1">
> > <atom:link rel="self" href="http://example.com/things/1"/>
> > </thing>
> > <thing id="2" name="thing2">
> > <atom:link rel="self" href="http://example.com/things/2"/>
> > </thing>
> > </things>
> >
> > --- In rest-discuss@yahoogroups.com
> <mailto:rest-discuss%40yahoogroups.com>, "Jakob Strauch"
> <jakob.strauch@> wrote:
> > >
> > > I just wondered why there is no "item" relation (or something
> similar) registered at [1] with the semantics of "this is an item of
> this collection resource".
> >
>
>
Right, this is also a primary purpose of <resource> elements in hal
representations
On Mon, Oct 10, 2011 at 12:36 PM, Daniel "Oscar" Schulte <
mail@danieloscarschulte.de> wrote:
>
>
> If I remember correctly, the Context IRI is defined by the Media Type.
> Atom (RFC 4287) defines the containing element as context (" The value
> "self" signifies that the IRI in the value of the href attribute identifies
> a resource equivalent to the containing element."). However, Web Linking
> (RFC 5988) defines the IRI of the requested resource as default Context IRI
> but allows to override it by an anchor parameter.
>
> Am 08.10.2011 15:32, schrieb Jakob Strauch:
>
>
>
> As far as i understand link relations, they refer to the context of
> the resource URI. From [1]:
>
> A link can be viewed as a statement of the form "{context IRI} has a
> {relation type} resource at {target IRI}, which has {target
> attributes}."
>
> In your example, does "self" not refer to the resource's context (the
> collection)?
>
> [1] http://tools.ietf.org/html/draft-nottingham-http-link-header-10#page-4
>
> --- In rest-discuss@yahoogroups.com, "bryan_w_taylor" <bryan_w_taylor@...> wrote:
> >
> >
> >
> > Just use "self" within the context of a child item:
> >
> > <things>
> > <thing id="1" name="thing1">
> > <atom:link rel="self" href="http://example.com/things/1"/>
> > </thing>
> > <thing id="2" name="thing2">
> > <atom:link rel="self" href="http://example.com/things/2"/>
> > </thing>
> > </things>
> >
> > --- In rest-discuss@yahoogroups.com, "Jakob Strauch" <jakob.strauch@>
> wrote:
> > >
> > > I just wondered why there is no "item" relation (or something similar)
> registered at [1] with the semantics of "this is an item of this collection
> resource".
> >
>
>
>
>
>
Hi,
how do I decide whether a URI is bookmarkable or not?
('Bookmarkable' meaning: 'Being an entry point into an application that is worth remembering')
Some things to consider:
There is a difference between the stability of a URI (whether a client can assume a URI will
be dereferencable in the future) and the suitability of a URI to act as an application entry point.
For example, I'd assume HTML style sheet URIs to be pretty stable but they are not useful
application entry points.
Should a user agent remember as many URIs as possible, thereby increasing the number of
known application entry points and possibly avoiding re-doing certain steps through the
application in the future (something we do all the time when bookmarking e.g. page 4 of
a search result).
o All URIs I find in responses from a server in a link context are bookmarkable
('Link context' meaning Atom <link> elements, HTML <a> elements, Link headers,
HTML GET-forms, etc.)
o Not all URIs I find in responses from a server are bookmarkable. For example,
- a URI I find in an HTML <form> element with action 'POST' is not
- a URI I find in an Opensearch <Url> element is not
- a URI I find in an HTML <style> element is not
o What about
- a URI I find in an HTML <img> element
- a URI I find in an AtomPub <collection> element
- a URI I find in HTTP headers such as Location, Content-Location, Alternates
- AtomPub's edit-media links?
- Atom <content src=""> references
Does the cacheability of a response affect these issues?
In general, I am trying to answer the question:
What are the indicators in media type (and link relation) specifications that tell
the user agent implementor what URIs in responses of the media type in question
can be considered bookmarkable?
Jan
Hi Jan,
I must say I've thought about the same thing many times.
I feel that there are two aspects of this question: 1) is it bookmarkable
and 2) is it a suitable entry point.
My current opinion is that any identifier which is available when the
user-agent is in a stable state may be bookmarked by the user-agent. This
may be an identifier of an embedded <IMG> image or an <a> link in a HTML
document, for example.
Whether a bookmark is a suitable entry point is a more subtle thing and IMO
depends on the application (i.e. depends on the agent using the bookmark).
In most cases, you will want to bookmark links that return a hypermedia
document which can then lead you onwards using links. However, I can imagine
an application which only has the goal of fetching a bookmarked image and
therefore does not need to go onwards from there i.e. doesn't need any links
in the returned representation. Therefore, in one case, an identifier of an
HTML document is needed (or similar), while in the other case, an identifier
of a PNG image is needed (however, a parent HTML document identifier
containing a link to the PNG would also be ok, but I wouldn't say it is
mandatory). So as a general guideline, I'd bookmark identifiers of resources
returning hypermedia documents because they can be used to navigate to other
resources. However, some applications may be satisfied with
bookmarking identifiers
of resources returning non-hypermedia documents, which is also allowed. A
consequence of this is that I would probably not bookmark a resource
identifier if I have not successfully fetched its content (either by
navigation or by embedding) and examined it to see if it fits the above
requirements (e.g. I'm not bookmarking an <a> link and expecting it will
return a hypermedia document because it may return anything).
If any of the bookmarked identifiers becomes invalid at some point in time,
the server should return a response which can guide clients onwards, e.g.
include a link to a "home" resource or a resource related to the previously
requested resource.
Best,
Ivan
On Tue, Oct 11, 2011 at 09:40, Jan Algermissen
<jan.algermissen@...>wrote:
> **
>
>
> Hi,
>
> how do I decide whether a URI is bookmarkable or not?
>
> ('Bookmarkable' meaning: 'Being an entry point into an application that is
> worth remembering')
>
> Some things to consider:
>
> There is a difference between the stability of a URI (whether a client can
> assume a URI will
> be dereferencable in the future) and the suitability of a URI to act as an
> application entry point.
> For example, I'd assume HTML style sheet URIs to be pretty stable but they
> are not useful
> application entry points.
>
> Should a user agent remember as many URIs as possible, thereby increasing
> the amount of
> known application entry points and possibly avoiding re-doing certain steps
> through the
> application in the future (something we do all the time when bookmarking e.g.
> page 4 of
> a search result).
>
> o All URIs I find in responses from a server in a link context are
> bookmarkable
> ('Link context' meaning Atom <link> elements, HTML <a> elements, Link
> headers,
> HTML GET-forms, etc.)
>
> o Not all URIs I find in responses from a server are bookmarkable. For
> example,
>
> - a URI I find in an HTML <form> element with action 'POST' is not
> - a URI I find in an Opensearch <Url> element is not
> - a URI I find in an HTML <style> element is not
>
> o What about
>
> - a URI I find in an HTML <img> element
> - a URI I find in an AtomPub <collection> element
> - a URI I find in HTTP headers such as Location, Content-Location,
> Alternates
> - AtomPub's edit-media links?
> - Atom <content src=""> references
>
> Does the cacheability of a response affect these issues?
>
> In general, I am trying to answer the question:
>
> What are the indicators in media type (and link relation) specifications
> that tell
> the user agent implementor what URIs in responses of the media type in
> question
> can be considered bookmarkable?
>
> Jan
>
>
>
On Tue, Oct 11, 2011 at 8:40 AM, Jan Algermissen
<jan.algermissen@...> wrote:
> Hi,
>
>
> how do I decide whether a URI is bookmarkable or not?
>
> ('Bookmarkable' meaning: 'Being an entry point into an application that is worth remembering')
A given application should specify its entry points explicitly
>
> Some things to consider:
>
> There is a difference between the stability of a URI (whether a client can assume a URI will
> be dereferencable in the future) and the suitability of a URI to act as an application entry point.
> For example, I'd assume HTML style sheet URIs to be pretty stable but they are not useful
> application entry points.
>
> Should a user agent remember as many URIs as possible, thereby increasing the amount of
> known application entry points and possibly avoiding re-doing certain steps through the
> application in the future (something we do all the time when bookmarking e.g. page 4 of
> a search result).
No, this is not necessary - caching already addresses this challenge.
I'm not sure bookmarking a search page is a good example. I wouldn't
do it personally but if I did, I imagine the potential for
discrepancies on revisiting the bookmark to be quite high. As a human
I can adjust and handle that OK, but not so much if I'm a piece of
software. The rules can't be as liberal when the end user is not
human, so there are significant limitations in trying to draw lessons
from HTML for machine-oriented apps.
>
> Does the cacheability of a response affect these issues?
>
No, they are two separate concerns
>
> In general, I am trying to answer the question:
>
> What are the indicators in media type (and link relation) specifications that tell
> the user agent implementor what URIs in responses of the media type in question
> can be considered bookmarkable?
>
The bits of a given spec that are explicit in specifying what
constitutes a valid entry point.
The question seems to assume that all media types (and their user
agents) exist for a specific application, which is not always the case
(e.g. HTML). In cases where the media type is generic, it isn't
possible to establish entry points up front - which is why browsers
allow users to bookmark whatever URL they want.
Cheers,
Mike
On Oct 11, 2011, at 11:23 AM, Mike Kelly wrote: > A given application should specify its entry points explicitly This cannot work because the application only comes into existence based on the choices the client makes. The server does not know what applications it will be a part of. Jan
On Tue, Oct 11, 2011 at 10:42 AM, Jan Algermissen <jan.algermissen@...> wrote: > > On Oct 11, 2011, at 11:23 AM, Mike Kelly wrote: > >> A given application should specify its entry points explicitly > > This cannot work because the application only comes into existence based on the choices the client makes. Do you mean application state? Either way, not sure I understand your point; AtomPub is an application, and it exists. > > The server does not know what applications it will be a part of. > Agreed, but why does that matter here? This is an issue for clients not servers Cheers, Mike
On Oct 11, 2011, at 12:07 PM, Mike Kelly wrote: > On Tue, Oct 11, 2011 at 10:42 AM, Jan Algermissen > <jan.algermissen@...> wrote: >> >> On Oct 11, 2011, at 11:23 AM, Mike Kelly wrote: >> >>> A given application should specify its entry points explicitly >> >> This cannot work because the application only comes into existence based on the choices the client makes. > > Do you mean application state? > > Either way, not sure I understand your point; AtomPub is an > application, and it exists. No, AtomPub is a media type that enables a certain set of applications. The AtomPub spec suggests a set of canonical[1] applications. But there are certainly many, many others. For example, crawling a dozen of feeds and building an index of posts is one that is not in the AtomPub spec. The application is defined by what the user agent(s) do. Not by what a media type spec says. Jan [1] 'canonical application' has IIRC been coined by Jim Webber at restunconf in January 2011. > >> >> The server does not know what applications it will be a part of. >> > > Agreed, but why does that matter here? This is an issue for clients not servers > > Cheers, > Mike
On Tue, Oct 11, 2011 at 11:18 AM, Jan Algermissen <jan.algermissen@nordsc.com> wrote: > > On Oct 11, 2011, at 12:07 PM, Mike Kelly wrote: > >> On Tue, Oct 11, 2011 at 10:42 AM, Jan Algermissen >> <jan.algermissen@...> wrote: >>> >>> On Oct 11, 2011, at 11:23 AM, Mike Kelly wrote: >>> >>>> A given application should specify its entry points explicitly >>> >>> This cannot work because the application only comes into existence based on the choices the client makes. >> >> Do you mean application state? >> >> Either way, not sure I understand your point; AtomPub is an >> application, and it exists. > > No, AtomPub is a media type that enables a certain set of applications. > Right, this is actually one of my issues with Atom because, in my opinion, there should be complete separation between the media type and the 'application' i.e. hal (media type) and a set of link relations (the application). > >The AtomPub spec suggests a set of canonical[1] applications. But there are certainly many, many others. For example, crawling a dozen of feeds and building an index of posts is one that is not in the AtomPub spec. > That's just a client performing multiple "applications" (I believe the Jian™ terminology for this is Domain Application Protocol™) in a particular sequence. Yes, there's some unique application state stuff involved on the client side but that's not related to the AtomPub application/DAP I was referring to originally and, more to the point, isn't actually visible to the network. If you create a user agent that generates and manages some application state of its own that's great but I don't see what that has to do with bookmark-ability of the resources or the Domain Application Protocols™ it's engaging in. The DAPs are what is informing your user-agent of the significance of given resources, and that is where (if at all) any potential entry points should be specified. Cheers, Mike
On Tue, Oct 11, 2011 at 9:40 AM, Jan Algermissen <jan.algermissen@... > wrote: > ** > > [...] In general, I am trying to answer the question: > > What are the indicators in media type (and link relation) specifications > that tell > the user agent implementor what URIs in responses of the media type in > question > can be considered bookmarkable? > > Isn't it the prose in the media type specification itself? A media type indicates that activating a particular hypermedia control advances the user agent to a new application state. If that ends up in a "safe" state (i.e. it ends up with a GET'able resource) then that means that it's a "bookmarkable state" and one that's worthy of a future entry point. Example: if a form directs you to POST something to a URI and it responds with a redirect to another resource, and the agent automatically follows that cue, then that new resource would IMHO be a bookmarkable state. HTML states this clearly for <A>: "By activating these links [...], users may visit these resources." [1]. For <IMG> images are to be "embedded [...] in the current document" [2]. [1]: http://www.w3.org/TR/html401/struct/links.html#h-12.1.1 [2]: http://www.w3.org/TR/html401/struct/objects.html#h-13.2 -- -mogsie-
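mogsie's POST-then-redirect example can be sketched with a hypothetical in-memory "server": the POST itself is not a bookmarkable state, but the GET-able URI the redirect leads to is a safe application state worth remembering. All names and URIs here are illustrative.

```python
# Toy model: POST creates a resource, the server "redirects", and the
# URI the agent lands on via GET is the bookmarkable state.

resources = {}
next_id = [1]  # mutable counter for generated URIs

def post(collection_uri, body):
    """Handle a POST: create the resource, return the redirect Location."""
    uri = f"{collection_uri}/{next_id[0]}"
    next_id[0] += 1
    resources[uri] = body
    return uri  # what a 303 See Other's Location header would carry

def get(uri):
    """A safe, repeatable retrieval: the mark of a bookmarkable state."""
    return resources[uri]

location = post("http://example.com/posts", "hello list")
bookmark = location       # the agent followed the redirect with GET
print(get(bookmark))
```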
Hi all,
I'm at the URI design stage of my API and wanted to check whether this is RESTful or not.
For example, if we have:
/api/v1/customers - List of customers
/api/v1/customers/{customer}
/api/v1/billing - List of all bills
What I was planning on doing is if a customer authenticates they only see a list with one item and link to their own account. If a reseller authenticates they see all their customers, if we authenticate we see all customers. But a bookmark would not be the same for each.
I guess this is somewhat covered in http://tech.groups.yahoo.com/group/rest-discuss/message/17714 but wanted to check if this is bad.
Lastly, what do you advise here:
/api/v1/customers/{customer}/billing/invoices
or
/api/v1/billing/{customer}/invoices - List of all bills
or both?
We have partners/resellers too which I can't decide on:
/api/v1/resellers/customers/{customer}/billing/invoices
or does /customers change state depending on who is auth'd like I've asked above? What have you done for customers/billing or layouts with different authorization roles?
I was looking at http://wiki.alfresco.com/wiki/Repository_RESTful_API_Reference
Thanks,
Gavin.
For the second part of your question, I suggest using GUIDs for each of your identifiers. In other words, from a ReSTful perspective it doesn't matter in the slightest to the ReST part of that equation; identifiers are opaque and there is no relationship between the various components of a URI.
________________________________________
From: rest-discuss@yahoogroups.com [rest-discuss@yahoogroups.com] on behalf of gchenry22 [gavin.henry@...]
Sent: 18 October 2011 10:12
To: rest-discuss@yahoogroups.com
Subject: [rest-discuss] Resource state that varies depending on who is authorized and URI design
Hi all,
I'm at the URI design stage of my API and wanted to check whether this is RESTful or not.
For example, if we have:
/api/v1/customers - List of customers
/api/v1/customers/{customer}
/api/v1/billing - List of all bills
What I was planning on doing is if a customer authenticates they only see a list with one item and link to their own account. If a reseller authenticates they see all their customers, if we authenticate we see all customers. But a bookmark would not be the same for each.
I guess this is somewhat covered in http://tech.groups.yahoo.com/group/rest-discuss/message/17714 but wanted to check if this is bad.
Lastly, what do you advise here:
/api/v1/customers/{customer}/billing/invoices
or
/api/v1/billing/{customer}/invoices - List of all bills
or both?
We have partners/resellers too which I can't decide on:
/api/v1/resellers/customers/{customer}/billing/invoices
or does /customers change state depending on who is auth'd like I've asked above? What have you done for customers/billing or layouts with different authorization roles?
I was looking at http://wiki.alfresco.com/wiki/Repository_RESTful_API_Reference
Thanks,
Gavin.
Based on conversations over at InfoQ, ... are there standards for link representations? It's common to use link-rel in an XML/HTML/ATOM vocabulary, but is that a standard? Does expressing it differently invalidate the RESTfulness of a service? To put it differently, can one have a media type e.g. application/vnd.mycompany.po+xml that represents links by convention, just like in the Amazon S3 API? Or for that matter, how is it represented in a JSON response? Regards, Dilip Krishnan dilip.krishnan@...
On 18 October 2011 14:11, Dilip Krishnan <dilip.krishnan@...> wrote:
> **
>
>
> Based on conversations over at InfoQ<http://www.infoq.com/news/2011/10/nosql-rest#view_76929>,
> ... are there standards for link representations? Its common to use link-rel
> in a XML/HTML/ATOM vocabulary, but is that a standard? Does expressing it
> differently invalidate the RESTfulness of an service?
>
Interested to know about this too. For my purposes <http://restfulobjects.org>
I've defined a JSON representation of a link as:
{
"rel": "xxx",
"href": "http://~/objects/ORD-123",
"type":
"application/json;profile=\"urn:org.restfulobjects/domainobject\"",
"method": "GET",
"title": "xxx",
"arguments": { ... },
"value": { ... }
}
where:
* rel - is a URN indicating the nature of the relationship of the related
resource to the resource that generated this representation.
* href - is the hyperlink to the related resource. Any characters that are
invalid in URLs must be URL encoded.
* type - is the media type of the representation obtained by following
the link.
* method - is the HTTP method to use to traverse the link (GET, POST, PUT or
DELETE)
* title (optional) - is a string that the consuming application may use to
render the link without necessarily traversing the link in advance
* arguments (optional) - is data to use to follow the link (e.g. the body)
* value (optional) - is the value that results from the link having already
been loaded (supports eager loading of links)
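A generic client could dispatch on a link object of this shape roughly as follows (a sketch against Dan's proposed fields; `fake_http` is a stand-in transport, not a real API):

```python
# Sketch: traverse a link object carrying rel/href/method/arguments/value.

def follow(link, http):
    """Traverse a link: return the eagerly loaded value if present,
    otherwise issue the indicated HTTP request."""
    if "value" in link:           # eager-loaded; no request needed
        return link["value"]
    method = link.get("method", "GET")
    body = link.get("arguments")  # request body, if any
    return http(method, link["href"], body)

def fake_http(method, href, body):
    return {"requested": (method, href, body)}

link = {
    "rel": "urn:org.restfulobjects/domainobject",
    "href": "http://example.com/objects/ORD-123",
    "method": "GET",
}
print(follow(link, fake_http))
```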
>
> To put it differently can one have a media type e.g.
> application/vnd.mycompany.po+xml that represents links by convention just
> like in the Amazon S3<http://docs.amazonwebservices.com/AmazonS3/latest/API/> Api.
>
>
you could, but it implies the client has quite a lot of (out-of-band)
knowledge.
Dan
>
> Regards,
> Dilip Krishnan
> dilip.krishnan@...
>
>
>
>
>
> For my purposes I've defined a JSON representation of a link as:
>
> {
> "rel": "xxx",
> "href": "http://~/objects/ORD-123",
> "type": "application/json;profile=\"urn:org.restfulobjects/domainobject\"",
> "method": "GET",
> "title": "xxx",
> "arguments": { ... },
> "value": { ... }
> }
>
>
I would argue that representing links like this in JSON is also out-of-band knowledge, albeit based on the HTML semantics.
Dilip: While there are no published standards (i.e. RFCs, etc.) on "representing links" that span across all data formats, there are a number of solid examples to use as guides. - Subbu Allamaraju has basic link representation recipes for XML and JSON in his "RESTful Web Services Cookbook" [1] - I have examples of both simple links and parameterized "forms" for JSON in my Collection+JSON registered media type design [2] - Of course, HTML, Atom, and VoiceXML are all sample hypermedia-oriented media types that have examples of how to represent links and forms. More to the point, if you are designing a message format that will contain hypermedia information in order to support Fielding's "hypermedia (as the engine of application state)" constraint, you'll need to determine which hypermedia elements you need in your messages: - LO (links for outbound actions, navigational links, such as HTML.A or HTML.LINK) - LE (links for embedded or transclusion actions such as HTML.IMG or HTML.IFRAME) - LT (link templates such as HTML.FORM@method="get") - LN (links that support non-idempotent actions such as HTML.FORM@method="post") - LI (links that support idempotent actions such as ATOM.LINK@rel="edit") I've collected examples of these affordances in my H-Factors page[3] Finally, once you decide on the set of hypermedia controls your design will support, you need to document them as a collection including how they are represented in an outbound document and how client applications should recognize, parse, and render/activate them when they appear in a response representation. That is what a "media type definition" is about. Armed with the media type definition both client and server have sufficient "out-of-band" shared knowledge to exchange hypermedia messages and use the format to represent domain-specific information in order to support an "application" experience. Hope this helps. 
[1] http://my.safaribooksonline.com/book/web-development/web-services/9780596809140/web-linking/chapter-web-linking
[2] http://amundsen.com/media-types/collection/
[3] http://amundsen.com/hypermedia/hfactor/

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me

On Tue, Oct 18, 2011 at 09:11, Dilip Krishnan <dilip.krishnan@...> wrote:
> Based on conversations over at InfoQ <http://www.infoq.com/news/2011/10/nosql-rest#view_76929>, are there standards for link representations? It's common to use link-rel in an XML/HTML/Atom vocabulary, but is that a standard? Does expressing it differently invalidate the RESTfulness of a service?
>
> To put it differently, can one have a media type, e.g. application/vnd.mycompany.po+xml, that represents links by convention, just like the Amazon S3 <http://docs.amazonwebservices.com/AmazonS3/latest/API/> API? Or, for that matter, how is it represented in a JSON response?
>
> Regards,
> Dilip Krishnan
> dilip.krishnan@...
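As a concrete illustration of the kind of link representation being discussed, here is a minimal sketch (my own illustration, not any registered media type) of a simple list-of-{rel, href} convention in JSON, with a helper that resolves a link by its relation name:

```python
# Minimal sketch of a {rel, href} link convention in JSON.
# The payload shape and the "links" key are illustrative assumptions,
# not a registered media type.
import json

payload = json.loads("""
{
  "order": {"id": "123", "status": "pending"},
  "links": [
    {"rel": "self",    "href": "/orders/123"},
    {"rel": "payment", "href": "/orders/123/payment"}
  ]
}
""")

def find_link(doc, rel):
    """Return the href for the first link whose rel matches, or None."""
    for link in doc.get("links", []):
        if link["rel"] == rel:
            return link["href"]
    return None

print(find_link(payload, "payment"))  # -> /orders/123/payment
```

The point is exactly the one made above: a client written against this convention only works because the "links"/"rel"/"href" structure is shared knowledge documented somewhere, i.e. in a media type definition.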
Hi Gavin,
On Oct 18, 2011, at 11:12 AM, gchenry22 wrote:
> Hi all,
>
> I'm at the URI design stage of my API and wanted to check whether this is RESTful or not.
>
> For example, if we have:
>
> /api/v1/customers - List of customers
> /api/v1/customers/{customer}
Remove the version id from the path - REST deals with versioning issues through content negotiation. E.g. see <http://stackoverflow.com/questions/7619645/how-to-implement-backend-of-api-with-multiple-versions/7620193#7620193>
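The suggestion above - negotiating the version via the media type rather than the URI - can be sketched roughly as follows. The vendor media type names and renderer functions here are hypothetical, purely for illustration:

```python
# Sketch of version selection via content negotiation rather than URI
# paths. The vendor media type names below are hypothetical examples.
def render_v1(customer):
    return f"<customer id='{customer['id']}'/>"

def render_v2(customer):
    return f"<customer id='{customer['id']}' status='{customer['status']}'/>"

def select_representation(accept_header):
    """Map an Accept header value to a renderer for that version."""
    renderers = {
        "application/vnd.example.customer+xml;version=1": render_v1,
        "application/vnd.example.customer+xml;version=2": render_v2,
    }
    for offered in accept_header.split(","):
        renderer = renderers.get(offered.strip())
        if renderer:
            return renderer
    return render_v2  # default to the current version

customer = {"id": "42", "status": "active"}
print(select_representation(
    "application/vnd.example.customer+xml;version=1")(customer))
```

The URI (`/customers/42`) stays stable across versions; only the representation negotiated per request changes.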
>
> /api/v1/billing - List of all bills
>
> What I was planning on doing is if a customer authenticates they only see a list with one item and link to their own account. If a reseller authenticates they see all their customers, if we authenticate we see all customers. But a bookmark would not be the same for each.
Such variations are ok, but I'd probably give the various concepts their own resources and URIs. You can still use a negotiated resource as a common entry point.
Jan
>
> I guess this is somewhat covered in http://tech.groups.yahoo.com/group/rest-discuss/message/17714 but wanted to check if this is bad.
>
> Lastly, what do you advise here:
>
> /api/v1/customers/{customer}/billing/invoices
>
> or
>
> /api/v1/billing/{customer}/invoices - List of all bills
>
> or both?
>
> We have partners/resellers too which I can't decide on:
>
> /api/v1/resellers/customers/{customer}/billing/invoices
>
> or does /customers change state depending on who is auth'd like I've asked above? What have you done for customers/billing or layouts with different authorization roles?
>
> I was looking at http://wiki.alfresco.com/wiki/Repository_RESTful_API_Reference
>
> Thanks,
>
> Gavin.
>
>
Hi, If you are and are interested, can you please email me? Thanks.
Many of us have. What ought we be interested in? :) ________________________________________ From: rest-discuss@yahoogroups.com [rest-discuss@yahoogroups.com] on behalf of gchenry22 [gavin.henry@...] Sent: 18 October 2011 16:07 To: rest-discuss@yahoogroups.com Subject: [rest-discuss] Consultants that have exposed an existing data set before RESTfully? Hi, If you are and are interested, can you please email me? Thanks. ------------------------------------ Yahoo! Groups Links
hello.
On 2011-10-18 07:17 , mike amundsen wrote:
> While there are no published standards (i.e. RFCs, etc.) on
> "representing links" that span across all data formats, there are a
> number of solid examples to use as guides.
> - Subbu Allamaraju has basic link representation recipes for XML and
> JSON in his "RESTful Web Services Cookbook" [1]
> - I have examples of both simple links and parameterized "forms" for JSON
> in my Collection+JSON registered media type design [2]
> - Of course, HTML, Atom, and VoiceXML are all sample hypermedia-oriented
> media types that have examples of how to represent links and forms.
i'd like to add that apart from the link semantics baked into some of
those media types (HTML, Atom, VoiceXML, ...), there also are standards
such as XInclude and XLink which try to serve as generic building blocks
for representing link semantics. however, XInclude is very limited in
its scope (just doing inclusion), and XLink is not a huge success
because it has been built with visual clients in mind and not so much
based on the idea of general REST hypermedia. i am still hoping that at
some point in time, there will be some "better XLink" that hopefully
would be abstract enough to serve as a starting point for representing
links in XML as well as in JSON, but that's just me hoping and if that
ever happens, it is not going to be anytime soon. there is certainly
room for improvement in this area.
cheers,
dret.
--
erik wilde | mailto:dret@... - tel:+1-510-2061079 |
| UC Berkeley - School of Information (ISchool) |
| http://dret.net/netdret http://twitter.com/dret |
Meant to send it to the group as well...

> Mike, thx! As always, a very detailed response :)
>
> I hadn't seen your Collection+JSON example along with the "H-Factors". I really like the classification! I use the recipes for link representations from Subbu's book, so my question is NOT how one can/should represent links in a standard XML media type.
>
> Specifically, my question is: if a "RESTful" service uses an unconventional link representation in its hypermedia, like Amazon S3 (i.e. it doesn't use link rel; rather it uses a convention based on a different shared understanding between S3 clients and the service; it's a fairly straightforward convention), does that invalidate the RESTfulness of the service? I wouldn't think so...
>
> Regards,
> Dilip Krishnan
> dilip.krishnan@...
>
> On Oct 18, 2011, at 9:17 AM, mike amundsen wrote:
>> Dilip:
>> <snip>
<snip> Specifically, my question is: if a "RESTful" service uses an unconventional link representation in its hypermedia, like Amazon S3 <http://docs.amazonwebservices.com/AmazonS3/latest/API/> (i.e. it doesn't use link rel; rather it uses a convention based on a *different* shared understanding between S3 clients and the service; it's a fairly straightforward convention), does that invalidate the RESTfulness of the service? I wouldn't think so... </snip>

Well, to start:
1) I am not aware of a media type definition for S3.
2) If there _is_ one (or if S3 actually uses some other media type for representing responses [i.e. Atom, etc.]), I'd like to see that documentation, specifically the part which identifies the hypermedia controls that can appear within response representations and the definition details of each of these hypermedia controls (mapping of protocol details, mapping of domain-specific information, etc.)

Then I would be able to identify the "shared understanding" based on hypermedia.

Of course, "hypermedia" is not the only way to generate "shared understanding" between client and server (RPC, OO, URI-construction, etc.).

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me

---------- Forwarded message ----------
From: Dilip Krishnan <dilip.krishnan@...>
Date: Tue, Oct 18, 2011 at 10:46
Subject: Re: [rest-discuss] Link Representation In Hypermedia Systems...
To: mike amundsen <mamund@...>

Mike, thx! As always, a very detailed response :)
<snip>
I see a lot of advantages to using XHTML as a media type of choice for my REST services. Speaking in terms of Mike Amundsen's H-Factors, it covers quite a few. I can even live without support for DELETE and PUT. However, I find that I want the link-relation stuff for things that aren't <a> or <link> elements. For example, when the state of the application will support creating a new resource, rather than a <link> element pointing to where I would POST the resource, I would rather use a <form> element to make it explicit what may be posted there. The @action attribute takes care of where to post it, but what would I use for the equivalent of @rel here to show the relationship from the current application state? Are there any conventions for this? Or is what I'm doing unusual or even unwise?
You could use an "id" attribute on the <form> element (that's what I do).

Philippe

On 18 Oct 2011, at 21:22, jason_h_erickson wrote:
> <snip>
On Oct 18, 2011, at 9:22 PM, jason_h_erickson wrote:
> <snip> The @action attribute takes care of where to post it, but what would I use for the equivalent of @rel here to show the relationship from the current application state? </snip>

You could use an additional Link: header to say what the resource to post the form to 'is':

200 OK
Link: </orders>; rel=order-processor

<html>
...
<form action="/orders"> ... </form>

Jan
here's how i've been doing it:

when using [X]HTML i use the following attributes to identify domain-specific information: @id, @name, @rel, & @class

@id - a single unique value in the representation (HTML.* - a global attribute)

@name - a single non-unique value for elements used to compose a request body (HTML.INPUT, etc.)

@rel - a space-separated collection of values for elements used to initiate a parameter-less state transition (HTML.A) and/or for identifying related content (HTML.LINK).

@class - a space-separated collection of values for decorating any element with domain-specific data (HTML.* - a global attribute).

i *used* to just slap @rel and @name on any element i needed (browsers don't complain much, really), but i am now leaning much more on @class since it can be applied to all elements.

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me

On Tue, Oct 18, 2011 at 15:22, jason_h_erickson <jason@jasonerickson.com> wrote:
> <snip>
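A machine client consuming an XHTML representation decorated this way would look elements up by @class token. A minimal sketch, where the class names ("customer", "invoice-total") and the markup are hypothetical:

```python
# Sketch: a client reading domain-specific @class markers out of an
# XHTML representation, per the convention described above.
# The class names and markup are hypothetical.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<div class="customer">
  <span class="customer-name">ACME Corp</span>
  <span class="invoice-total">120.00</span>
</div>
""")

def find_by_class(root, marker):
    """Return elements whose class attribute contains the marker token."""
    return [el for el in root.iter()
            if marker in el.get("class", "").split()]

print(find_by_class(doc, "invoice-total")[0].text)  # -> 120.00
```

Because @class is space-separated, an element can carry both presentation classes and domain markers without conflict, which is part of why it works on all elements.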
I borrowed leaning heavily on @class from Microformats, and I've been using @name (redundantly with @class) in my form elements (which lets the form actually work if you submit it). @id doesn't seem like a good fit, since you could conceivably have more than one resource with the same relationship (for example, a collection of forms, differing only by the @action, that you can submit to update a given resource). @class could work technically, but isn't it usual/best practice to make your rels URIs, and isn't it odd to see a URI in a @class? I know it's allowed.

On Oct 18, 2011, at 1:18 PM, mike amundsen wrote:
> <snip>
Thanks, I like this idea. Do you actually do this or know of any systems that do this, or is this just a clever idea you had?

On Oct 18, 2011, at 1:14 PM, Jan Algermissen wrote:
> You could use an additional Link: header to say what the resource to post the form to 'is'. <snip>
On Oct 18, 2011, at 10:32 PM, Jason Erickson wrote:
> Thanks, I like this idea. Do you actually do this or know of any systems that do this

No.

> or is this just a clever idea you had?

Just an idea. But I would use it if I had the requirement to use XHTML.

Alternatively, BTW, what about using something like this:

<foo:orders action="" method="POST"> </foo:orders>

and use JS for normal browsers to turn <foo:orders> into <form>? Your machine client could just look for <foo:orders> because it won't execute the JS.

Jan
I think a good media type for representing links in XML and JSON is HAL [1]. It is not (yet) standardized afaik, but it's a lean, well-defined media type for describing hypermedia resources. There is also an active discussion group on that [2].

[1] http://stateless.co/hal_specification.html
[2] http://groups.google.com/group/hal-discuss

--- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote:
> Well, to start:
> 1) I am not aware of a media type definition for S3.
> <snip>
I've done this before in HTML (and it follows the pattern of linking to a form and separating those two hypermedia controls):

<a href="#formName" rel="http://serialseb.com/spec/order-processor">Order processor</a>

And then simply have <form id="formName" />. This has several advantages: you don't hide away relationships in microformats that then need to be documented, you leverage the HTML elements that already exist, and you give yourself the possibility at some point in the future to separate out that form as an independent resource (something I most generally do these days).

In XHTML you could probably just import a role attribute from XLink, or maybe simply a link element from Atom, and add that to the form's children.

________________________________________
From: rest-discuss@yahoogroups.com [rest-discuss@yahoogroups.com] on behalf of jason_h_erickson [jason@...]
Sent: 18 October 2011 20:22
To: rest-discuss@yahoogroups.com
Subject: [rest-discuss] @rel equivalent for non-link elements in XHTML

<snip>
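A client following the link-to-form pattern above would locate the <a> carrying the rel, then dereference its fragment to the <form> with the matching id. A sketch of that lookup (the rel URI and ids are taken from the example; the markup is otherwise hypothetical):

```python
# Sketch: resolving the link-to-form pattern. The client finds the <a>
# carrying the rel, then follows its fragment href to the <form> with
# the matching id. Markup beyond the example's rel/id is hypothetical.
import xml.etree.ElementTree as ET

page = ET.fromstring("""
<body>
  <a href="#formName"
     rel="http://serialseb.com/spec/order-processor">Order processor</a>
  <form id="formName" action="/orders" method="post"/>
</body>
""")

def form_for_rel(root, rel):
    """Follow an <a rel=...> whose href is a fragment to its <form>."""
    for a in root.iter("a"):
        if rel in a.get("rel", "").split() and a.get("href", "").startswith("#"):
            target = a.get("href")[1:]
            for form in root.iter("form"):
                if form.get("id") == target:
                    return form
    return None

form = form_for_rel(page, "http://serialseb.com/spec/order-processor")
print(form.get("action"))  # -> /orders
```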
Paul:

I've registered my Collection+JSON design[1] as a JSON-based hypermedia type with support for all the Link H-Factors. There is at least one parser available for it, too[2]. Feel free to check it out and post your comments/suggestions/doubts on that design; I'd appreciate any and all feedback.

I also use XHTML quite a bit for data-oriented representations; it's really just an XML format that already has several H-Factors *and* renders well in common browsers (great for "debugging" representations!). The design document is different (documenting @id, @name, @rel, & @class along w/ expected structures in the representation). I posted an example of using XHTML as a design source for hypermedia APIs, too[3].

[1] http://amundsen.com/media-types/collection/format/
[2] https://github.com/hamnis/json-collection
[3] http://amundsen.com/hypermedia/profiles/

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me

On Wed, Oct 19, 2011 at 04:40, Paul Cohen <paco@...> wrote:
> Hi,
>
> On Tue, Oct 18, 2011 at 4:17 PM, mike amundsen <mamund@...> wrote:
>> I've collected examples of these affordances in my H-Factors page[3]
>> <snip>
>
> I liked the H-Factors classification. Thanks!
Hi, On Tue, Oct 18, 2011 at 4:17 PM, mike amundsen <mamund@...> wrote: > I've collected examples of these affordances in my H-Factors page[3] > Finally, once you decide on the set of hypermedia controls your design will support, you need to document them as a collection including how they are represented in an outbound document and how client applications should recognize, parse, and render/activate them when they appear in a response representation. That is what a "media type definition" is about. > Armed with the media type definition both client and server have sufficient "out-of-band" shared knowledge to exchange hypermedia messages and use the format to represent domain-specific information in order to support an "application" experience. > hope this helps. I liked the H-Factors classification. Thanks! So to summarize, the core issue is that we really need media types that (in their specification) have full syntactical support for all core (HTTP-based) hypermedia features (H-Factors). Currently it seems XHTML is closest to achieving this. The problem with XHTML is that it's meant to implement web pages and be consumed by web browsers. It's not meant for hypermedia data representations. It seems a lot of people are inventing their own conventions for embedding hypermedia information in JSON. Yes, me too! :-) It sure would be nice to have a documented *general* JSON-based media type that has support for all core hypermedia features (H-Factors). I would like a convention that either: a) uses leading/trailing (single or double) underscore characters for a set of JSON object/dictionary keys that handle hypermedia link information. That would facilitate differentiating between actual JSON application data and hypermedia link information, and it would not interfere with the namespace of normal JSON object/dictionary key names (without underscore characters). b) has one single reserved key, e.g. 
"__link__" which always will refer to a JSON object/dictionary containing hypermedia link information, along the lines that Dilip Krishnan proposed earlier in this thread. Another approach would to be invent a completely new hypermedia enabled data format but would take a lot of time and energy. Also JSON is very nice since it is simple, clean, ubiquitous and so easily consumed by JavaScript-based web applications. /Paul -- Paul Cohen www.seibostudios.se mobile: +46 730 787 035 e-mail: paul.cohen@...
Ganesh Prasad posted a blog titled "Does Redis Undermine a Key REST Tenet?" <http://wisdomofganesh.blogspot.com/2011/09/does-redis-undermine-key-rest-tenet.html>, arguing that the scalability of Redis can be used to relax the stateless constraint of REST. The blog was mentioned on InfoQ under the category of NoSQL <http://www.infoq.com/news/2011/10/nosql-rest#view_76929>, where discussions started. Mike Amundsen joined that discussion, and raised five questions on Ganesh's blog. Some thoughts I have after reading all these: 1. Redis does help with the scalability of session state; its assumption is that the SQL part is the bottleneck for session state, which is true for most web applications. However, Redis still needs to keep the state in memory, and more nodes are required when deploying Redis in a system. I have not used Redis or tested it, but I still suspect it has a physical limit on total memory size, or extra cost for allocating more memory. 2. The stateless constraint of REST takes effort to realize in a design, but it still benefits a system's scalability in a way that is different from what Redis does. That is, with statelessness, a system can be scaled out via the layered-system constraint, through load balancing. Please comment.
On Oct 19, 2011, at 6:18 PM, edonliu wrote: > The stateless constraint of REST takes effort to realize in a design, Which efforts? IOW, what is the benefit of placing application state on servers in client-server based architectures? > but it still benefits a system's scalability in a way that is different from what Redis does. That is, with statelessness, a system can be scaled out via the layered-system constraint, through load balancing. Putting application state on the client does more than improve scalability. Having a single, well-defined location for application state also greatly simplifies the system. Distributing application state across objects is, for example, one of the things that makes understanding the behavior of OO-based systems extremely complex. Jan
Thanks, Jan. On Wed, Oct 19, 2011 at 10:36 AM, Jan Algermissen <jan.algermissen@...> wrote: > > On Oct 19, 2011, at 6:18 PM, edonliu wrote: > >> The stateless constraint of REST takes effort to realize in a design, > > Which efforts? IOW, what is the benefit of placing application state on servers in client-server based architectures? > I think it is just convenient for developers to use server session state for all the application state, no matter whether it is client state or shared state. The efforts I mean include distinguishing client state from shared state in the design phase, and designing the representations to drive the application state. > >> but it still benefits a system's scalability in a way that is different from what Redis does. That is, with statelessness, a system can be scaled out via the layered-system constraint, through load balancing. > > Putting application state on the client does more than improve scalability. Having a single, well-defined location for application state also greatly simplifies the system. Distributing application state across objects is, for example, one of the things that makes understanding the behavior of OO-based systems extremely complex. I agree. > > Jan > > Cheers, Dong
Great to see you fleshing this out. In terms of prior art, Darrel has done a bunch of work around hypermedia clients with his Rest Agent stuff. A few other thoughts 1. A hypermedia client is not coupled to the URI space of the application / does not contain hard-coded logic for URIs on the server. 2. A hypermedia client ignores hypermedia controls that it cannot handle / it does not break in the presence of new hypermedia controls. 3. A hypermedia client advances application state by choosing from available hypermedia controls in server responses. On Fri, Jun 10, 2011 at 1:44 PM, mike amundsen <mamund@...> wrote: > ** > > > ** cross-posted ** > > I'm contemplating a working definition for a "Hypermedia Client." Here are > my first attempts: > > 1 - "A Hypermedia Client supports advancing its own application state > based on application control information supplied in server responses." > > 2 - "A Hypermedia Client supports advancing application state by sending > requests to servers based on application control information supplied in > server responses." > > 3 - "A Hypermedia Client supports advancing application state by sending > requests to servers based on application control information embedded > within, or as a layer above, the presentation of information supplied in > server responses." > > The germ of this definition is loosely based on Fielding's description of > "Distributed Hypermedia"[1] > > The point of this exercise is: > 1) Is there a generally agreed definition? > 2) Can a definition be useful in evaluating/analyzing existing > implementations? (e.g. "Is 'this' a hypermedia client?") > 3) Can a definition be useful in creating new implementations that "meet" > the definition? (e.g. "Here is what you need to build a hypermedia > client....") > > Any/all feedback is welcome. Possibly there is "prior art" here of which I > am unaware; please point me to any reference material you may think useful. 
> Maybe you've gone through a similar process and would like to send along > your experiences. > > Thanks in advance. > > [1] > http://www.ics.uci.edu/~fielding/pubs/dissertation/web_arch_domain.htm#sec_4_1_3 > > mca > http://amundsen.com/blog/ > http://twitter.com@mamund > http://mamund.com/foaf.rdf#me > > #RESTFest 2011 - Aug 18-20 > http://restfest.org > > >
I would add one exception to 1, which is the application root, as there needs to be some known URI which is the entry point of the application; otherwise you'll never access the application in the first place.... On Sat, Oct 22, 2011 at 8:08 PM, Glenn Block <glenn.block@...> wrote: > Great to see you fleshing this out. In terms of prior art, Darrel has done > a bunch of work around hypermedia clients with his Rest Agent stuff. > > A few other thoughts > > 1. A hypermedia client is not coupled to the URI space of the application / > does not contain hard-coded logic for URIs on the server. > > 2. A hypermedia client ignores hypermedia controls that it cannot handle / > it does not break in the presence of new hypermedia controls. > > 3. A hypermedia client advances application state by choosing from > available hypermedia controls in server responses.
On Oct 23, 2011, at 5:12 AM, Glenn Block wrote:
> I would add one exception to 1, which is the application root as there needs to be some known uri which is the entry point of the application otherwise you'll never access the application in the first place....
Not necessarily, because the client could obtain that URI from an entry-point-by-service-type lookup (e.g. via DNS[1])
Nevertheless, I'd make clear (sort of as Glenn suggests) that clients typically know entry URIs (maybe even many) of applications. Furthermore, I'd make explicit that it is perfectly fine for clients to keep those entry URIs for as long as they wish[2] (aka bookmarking) and that this puts a responsibility on servers to maintain all entry URIs over time ("Cool URIs don't change..." [3]).
Jan
[1] http://www.infoq.com/articles/rest-discovery-dns
[2] Until they see a 410 Gone for it
[3] http://www.w3.org/Provider/Style/URI.html
Glenn: are you indicating that you would prefer to use definition #1 as your "definition of a hypermedia client" (along w/ something regarding starting URIs)? mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Sat, Oct 22, 2011 at 23:12, Glenn Block <glenn.block@...> wrote: > I would add one exception to 1, which is the application root as there > needs to be some known URI which is the entry point of the application > otherwise you'll never access the application in the first place....
You mean in your definitions. #1 is my lean, with a clarification that the server is offering up available transitions and the client selects the appropriate one based on application-specific logic. Meaning a hypermedia client is not dumb, but its range of choices is provided by the server. The current definition implies the client is force-fed by the server. Sent from my Windows Phone ------------------------------ From: mike amundsen Sent: 10/23/2011 7:02 AM To: Glenn Block Cc: rest-discuss; hypermedia-web@googlegroups.com Subject: Re: [rest-discuss] Definition of a Hypermedia Client Glenn: are you indicating that you would prefer to use definition #1 as your "definition of a hypermedia client" (along w/ something regarding starting URIs)? mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me
I see. Then I would say the notion of one or more application roots
that the client is aware of is not an anti-pattern.
Sent from my Windows Phone
From: Jan Algermissen
Sent: 10/23/2011 1:50 AM
To: Glenn Block
Cc: mike amundsen; rest-discuss; hypermedia-web@...
Subject: Re: [rest-discuss] Definition of a Hypermedia Client
On Oct 23, 2011, at 5:12 AM, Glenn Block wrote:
> I would add one exception to 1, which is the application root as there needs to be some known uri which is the entry point of the application otherwise you'll never access the application in the first place....
Not necessarily, because the client could obtain that URI from an
entry-point-by-service-type lookup (e.g. via DNS[1])
Nevertheless, I'd make clear (sort of as Glenn suggests) that clients
typically know entry URIs (maybe even many) of applications.
Furthermore, I'd make explicit that it is perfectly fine for clients
to keep those entry URIs for as long as they wish[2] (aka bookmarking)
and that this puts a responsibility on servers to maintain all entry
URIs over time ("Cool URIs don't change..." [3]).
Jan
[1] http://www.infoq.com/articles/rest-discovery-dns
[2] Until they see a 410 Gone for it
[3] http://www.w3.org/Provider/Style/URI.html
thanks for the comments. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Sun, Oct 23, 2011 at 13:59, Glenn Block <glenn.block@...> wrote: > You mean in your definitions. #1 is my lean, with a clarification that the > server is offering up available transitions and the client selects the > appropriate one based on application-specific logic. > > meaning a hypermedia client is not dumb, but its range of choices is > provided by the server. > > the current definition implies the client is force-fed by the server. > > Sent from my Windows Phone
Regarding evolvability, can we really say whether new hypermedia controls should or shouldn't "break" a hypermedia client? This seems to get a bit fuzzy when a new resource interaction may be required for a given application domain.
Hi, I often use the (common) pattern of exposing a "factory" resource that we POST to for creating other resources of a certain type. In some systems I also support PUT for updating the state of resources created using the factory, but do not want to allow creating such resources with PUT. In such contexts, which HTTP status code do you advise to return when an attempt is made to PUT to a request URI that does not identify an already existing resource? In other words, how do you signal that creation at that URI with PUT isn't possible because of this particular application design? I have often used 404, but I'm also seeing 405 or even 403 being used. I'm curious about the collective wisdom of this group on this subject. Thanks, Philippe Mougin
I think returning 405 is the most appropriate status code in this case: 405 Method Not Allowed The method specified in the Request-Line is not allowed for the resource identified by the Request-URI. The response MUST include an Allow header containing a list of valid methods for the requested resource. -- Markus Lanthaler @markuslanthaler From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Philippe Mougin Sent: Tuesday, October 25, 2011 4:29 PM To: rest-discuss@yahoogroups.com Subject: [rest-discuss] Status code when PUT mustn't be used to create a resource
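The 405-with-Allow approach can be sketched as a small handler. This is only an illustration of the design being discussed; the resource store, URIs, and allowed-method list are invented, and a real service would wire this into its framework's routing:

```python
# Sketch: a PUT handler for the factory pattern under discussion.
# Resources are created only via POST to the factory; PUT may update an
# existing resource but never create one. URIs here are hypothetical.
EXISTING = {"/widgets/1", "/widgets/2"}   # resources already created via POST

def handle_put(uri):
    """Return (status, headers) for a PUT request under this design."""
    if uri in EXISTING:
        return 200, {}   # update of an existing resource is allowed
    # PUT-to-create is refused; per the spec quoted above, a 405 response
    # MUST carry an Allow header listing the valid methods.
    return 405, {"Allow": "GET, POST"}

status, headers = handle_put("/widgets/99")
```

Philippe's follow-up concern still applies to this sketch: answering 405 for a URI that has never existed arguably implies the resource is there, which is why some designs prefer 404 here instead.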
On Sun, Oct 23, 2011 at 12:39 PM, bruce.krakower <bruce.krakower@...> wrote: > > Regarding evolvability, can we really say if new hypermedia controls should or > shouldn't "break" a hypermedia client? This seems to get a bit fuzzy when a new > resource interaction may be required for a given application domain. fwiw, yes, i think it's fair to say a new state transition offered by the server shouldn't break a hypermedia client. but then, i also agree with mnot's principle[1] and I think the two are related. "there is an underlying principle to almost any kind of versioning on the Web; not breaking existing clients." Thanks, --tim [1] - http://www.mnot.net/blog/2011/10/25/web_api_versioning_smackdown
Thanks Markus. My little concern with 405 is that it kind of implicitly implies that the resource identified by the Request URI exists... That might not be what the spec really intends to communicate (or it might be that my reading is a bit off), though. Philippe --- In rest-discuss@...m, "Markus Lanthaler" <markus.lanthaler@...> wrote: > > I think returning 405 is the most approriate status code in this case: > > 405 Method Not Allowed > > The method specified in the Request-Line is not allowed for the resource > identified by the Request-URI. The response MUST include an Allow header > containing a list of valid methods for the requested resource. > > -- > > Markus Lanthaler > > @markuslanthaler > > > > From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On > Behalf Of Philippe Mougin > Sent: Tuesday, October 25, 2011 4:29 PM > To: rest-discuss@yahoogroups.com > Subject: [rest-discuss] Status code when PUT musn't be used to create a > resource > > > > Hi, > I often use the (common) pattern of exposing a "factory" resource that we > POST to for creating other resources of a certain type. In some systems I > also support PUT for updating the state of resources created using the > factory, but do not want to allow creating such resources with PUT. > In such contexts, which HTTP status code do you advise to return when an > attempt is made to PUT to a request URI that does not identify an already > existing resource? In other words, how do you signal that creation at that > URI with PUT isn't possible because of this particular application design? > I have often used 404, but I'm also seeing 405 or even 403 being used. > I'm curious about the collective wisdom of this group on this subject. > Thanks, > Philippe Mougin >
The common idiom for "method didn't work because the resource isn't in a state that allows it" is 409 Conflict. You should return a message stating that the user should POST to the factory first, and then the PUT would work. Robert Brewer fumanchu@... > -----Original Message----- > From: rest-discuss@yahoogroups.com [mailto:rest- > discuss@yahoogroups.com] On Behalf Of Philippe Mougin > Sent: Tuesday, October 25, 2011 5:21 AM > To: rest-discuss@yahoogroups.com > Subject: [rest-discuss] Re: Status code when PUT musn't be used to > create a resource > > Thanks Markus. My little concern with 405 is that it kind of implicitly > implies that the resource identified by the Request URI exists... That > might not be what the spec really intends to communicate (or it might > be that my reading is a bit off), though. > > Philippe > > --- In rest-discuss@yahoogroups.com, "Markus Lanthaler" > <markus.lanthaler@...> wrote: > > > > I think returning 405 is the most approriate status code in this > case: > > > > 405 Method Not Allowed > > > > The method specified in the Request-Line is not allowed for the > resource > > identified by the Request-URI. The response MUST include an Allow > header > > containing a list of valid methods for the requested resource. > > > > -- > > > > Markus Lanthaler > > > > @markuslanthaler > > > > > > > > From: rest-discuss@yahoogroups.com [mailto:rest- > discuss@yahoogroups.com] On > > Behalf Of Philippe Mougin > > Sent: Tuesday, October 25, 2011 4:29 PM > > To: rest-discuss@yahoogroups.com > > Subject: [rest-discuss] Status code when PUT musn't be used to create > a > > resource > > > > > > > Hi, > > I often use the (common) pattern of exposing a "factory" resource > that we > > POST to for creating other resources of a certain type. In some > systems I > > also support PUT for updating the state of resources created using > the > > factory, but do not want to allow creating such resources with PUT. 
> > In such contexts, which HTTP status code do you advise to return when > an > > attempt is made to PUT to a request URI that does not identify an > already > > existing resource? In other words, how do you signal that creation at > that > > URI with PUT isn't possible because of this particular application > design? > > I have often used 404, but I'm also seeing 405 or even 403 being > used. > > I'm curious about the collective wisdom of this group on this > subject. > > Thanks, > > Philippe Mougin >
If you look at how PUT is defined [1] it doesn't imply that the resource exists. Thus a 405 in my opinion is the right response code. [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.6 -- Markus Lanthaler @markuslanthaler --- Philippe Mougin wrote: Thanks Markus. My little concern with 405 is that it kind of implicitly implies that the resource identified by the Request URI exists... That might not be what the spec really intends to communicate (or it might be that my reading is a bit off), though. Philippe --- In rest-discuss@yahoogroups.com, "Markus Lanthaler" <markus.lanthaler@...> wrote: > > I think returning 405 is the most approriate status code in this case: > > 405 Method Not Allowed > > The method specified in the Request-Line is not allowed for the resource > identified by the Request-URI. The response MUST include an Allow header > containing a list of valid methods for the requested resource. > > -- > > Markus Lanthaler > > @markuslanthaler > > > > From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On > Behalf Of Philippe Mougin > Sent: Tuesday, October 25, 2011 4:29 PM > To: rest-discuss@yahoogroups.com > Subject: [rest-discuss] Status code when PUT musn't be used to create a > resource > > > > Hi, > I often use the (common) pattern of exposing a "factory" resource that we > POST to for creating other resources of a certain type. In some systems I > also support PUT for updating the state of resources created using the > factory, but do not want to allow creating such resources with PUT. > In such contexts, which HTTP status code do you advise to return when an > attempt is made to PUT to a request URI that does not identify an already > existing resource? In other words, how do you signal that creation at that > URI with PUT isn't possible because of this particular application design? > I have often used 404, but I'm also seeing 405 or even 403 being used. 
> I'm curious about the collective wisdom of this group on this subject. > Thanks, > Philippe Mougin >
On Oct 25, 2011, at 4:49 PM, Markus Lanthaler wrote: > If you look at how PUT is defined [1] it doesn't imply that the resource > exists. Thus a 405 in my opinion is the right response code. It depends on the server: - if the resource exists, use 405 (doing a GET instead of PUT makes sense) - if it does not exist, use 404 (expectation would be that a GET would also show 404) Jan > > [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.6 > > -- > Markus Lanthaler > @markuslanthaler > > --- Philippe Mougin wrote: > > Thanks Markus. My little concern with 405 is that it kind of implicitly > implies that the resource identified by the Request URI exists... That might > not be what the spec really intends to communicate (or it might be that my > reading is a bit off), though. > > Philippe > > --- In rest-discuss@yahoogroups.com, "Markus Lanthaler" > <markus.lanthaler@...> wrote: > > > > I think returning 405 is the most approriate status code in this case: > > > > 405 Method Not Allowed > > > > The method specified in the Request-Line is not allowed for the resource > > identified by the Request-URI. The response MUST include an Allow header > > containing a list of valid methods for the requested resource. > > > > -- > > > > Markus Lanthaler > > > > @markuslanthaler > > > > > > > > From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] > On > > Behalf Of Philippe Mougin > > Sent: Tuesday, October 25, 2011 4:29 PM > > To: rest-discuss@yahoogroups.com > > Subject: [rest-discuss] Status code when PUT musn't be used to create a > > resource > > > > > > > Hi, > > I often use the (common) pattern of exposing a "factory" resource that we > > POST to for creating other resources of a certain type. In some systems I > > also support PUT for updating the state of resources created using the > > factory, but do not want to allow creating such resources with PUT. 
> > In such contexts, which HTTP status code do you advise to return when an > > attempt is made to PUT to a request URI that does not identify an already > > existing resource? In other words, how do you signal that creation at that > > URI with PUT isn't possible because of this particular application design? > > > I have often used 404, but I'm also seeing 405 or even 403 being used. > > I'm curious about the collective wisdom of this group on this subject. > > Thanks, > > Philippe Mougin > > > >
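Jan's rule above reduces to a one-line decision, worth stating explicitly because it keeps PUT and GET consistent from the client's point of view. This is a sketch of his proposal, not a spec requirement:

```python
def status_for_put(resource_exists):
    # Jan's rule: if the resource exists but PUT is not permitted,
    # answer 405 (a GET on the same URI would succeed); if it does
    # not exist, answer 404 (a GET would also return 404). Either
    # way, GET and PUT tell the client a consistent story.
    return 405 if resource_exists else 404
```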
On 2011-10-25 04:15 , Tim Williams wrote: > On Sun, Oct 23, 2011 at 12:39 PM, bruce.krakower <bruce.krakower@...> wrote: >> Regarding evolvability, can we really say if new hypermedia controls should or >> shouldn't "break" a hypermedia client? This seems to get a bit fuzzy when a new >> resource interaction may be required for a given application domain. > fwiw, yes, i think it's fair to say a new state transition offered by > the server shouldn't break a hypermedia client. but then, i also > agree with mnot's principle[1] and I think the two are related. > "there is an underlying principle to almost any kind of of versioning > on the Web; not breaking existing clients." maybe include some recommendation to think about this when defining media types and link types, and to also document whether there are "mustUnderstand" or "mustIgnore" semantics at work for extensions? cheers, dret.
On Sat, Oct 22, 2011 at 11:12 PM, Glenn Block <glenn.block@...> wrote: > > > I owuld add one exception to 1, which is the application root as there > needs to be some known uri which is the entry point of the application > otherwise you'll never access the application in the first place.... > I would add another exception to (1): A hypermedia client IS coupled to the link relation uri space of the media type used (and possibly extended) by the application. See my comment on mnot's blog post for more details: http://www.mnot.net/blog/2011/10/25/web_api_versioning_smackdown You can shift tight coupling around, but you can't eliminate it... -- Nick
If the link relations are dereferenceable URIs, you can use HTTP to hint to clients about the freshness of the semantics (i.e. with Cache-Control, ETags, etc.) Cheers, Mike On Tue, Oct 25, 2011 at 5:08 PM, Nick Gall <nick.gall@...> wrote: > > > On Sat, Oct 22, 2011 at 11:12 PM, Glenn Block <glenn.block@...>wrote: > >> >> >> I owuld add one exception to 1, which is the application root as there >> needs to be some known uri which is the entry point of the application >> otherwise you'll never access the application in the first place.... >> > > I would add another exception to (1): A hypermedia client IS coupled to > the link relation uri space of the media type used (and possibly extended) > by the application. See my comment on mnot's blog post for more details: > http://www.mnot.net/blog/2011/10/25/web_api_versioning_smackdown > > You can shift tight coupling around, but you can't eliminate it... > > -- Nick > > > >
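Mike's point about dereferenceable link relations can be sketched as a tiny handler for the documentation resource behind a rel URI; the rel URI, document body, and max-age value are all illustrative assumptions, not from the thread. Only the idea is Mike's: ordinary HTTP freshness metadata lets clients revalidate their understanding of a rel's semantics cheaply.

```python
# Hypothetical documentation resources behind dereferenceable rel URIs.
# Serving them with Cache-Control and an ETag lets a client revalidate
# the semantics it has cached instead of refetching the whole document.
import hashlib

REL_DOCS = {
    "https://example.org/rels/archive": "POSTing here archives the item.",
}

def get_rel_doc(uri):
    body = REL_DOCS[uri]
    # Weak content-derived ETag; the digest length is arbitrary.
    etag = '"%s"' % hashlib.sha1(body.encode()).hexdigest()[:12]
    headers = {
        "Cache-Control": "max-age=86400",  # semantics stable for a day
        "ETag": etag,
    }
    return 200, headers, body
```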
On Tue, Oct 25, 2011 at 12:30 PM, Mike Kelly <mike@...> wrote: > If the link relations are dereferenceable URIs, you can use HTTP to hint to > clients about the freshness of the semantics (i.e. with Cache-Control, > ETags, etc.) Indeed. And that's a good practice. But it will only give you "hints" about the coupling and whether it is still "intact". It won't prevent breaking the coupling if you change the semantics. And it won't prevent the need to create a new link relation namespace if you want to enable new semantics while leaving the old namespace with its semantics intact. -- Nick
On Tue, Oct 25, 2011 at 5:43 PM, Nick Gall <nick.gall@...> wrote: > On Tue, Oct 25, 2011 at 12:30 PM, Mike Kelly <mike@...> wrote: >> >> If the link relations are dereferenceable URIs, you can use HTTP to hint >> to clients about the freshness of the semantics (i.e. with Cache-Control, >> ETags, etc.) > > Indeed. And that's a good practice. But it will only give you "hints" about > the coupling and whether it is still "intact". It won't prevent breaking the > coupling if you change the semantics. And it won't prevent the need to > create a new link relation namespace if you want to enable new semantics > while leaving the old namespace with its semantics intact. It does prevent the need provided the changes aren't breaking. Is that not enough? Cheers, Mike
On Tue, Oct 25, 2011 at 12:54 PM, Mike Kelly <mike@...> wrote: > On Tue, Oct 25, 2011 at 5:43 PM, Nick Gall <nick.gall@...> wrote: > > On Tue, Oct 25, 2011 at 12:30 PM, Mike Kelly <mike@...> wrote: > >> > >> If the link relations are dereferenceable URIs, you can use HTTP to hint > >> to clients about the freshness of the semantics (i.e. with > Cache-Control, > >> ETags, etc.) > > > > Indeed. And that's a good practice. But it will only give you "hints" > about > > the coupling and whether it is still "intact". It won't prevent breaking > the > > coupling if you change the semantics. And it won't prevent the need to > > create a new link relation namespace if you want to enable new semantics > > while leaving the old namespace with its semantics intact. > > It does prevent the need provided the changes aren't breaking. > > Is that not enough? > No. Inevitably, if the hypermedia interface is successful and evolves, some change WILL be breaking. That is the whole point of mnot's excellent blog post on the subject. -- Nick
To clarify, I was not referring to the wording of the PUT definition, but to the wording of the 405 status code definition (i.e., what you quoted in your previous message). Philippe --- In rest-discuss@yahoogroups.com, "Markus Lanthaler" <markus.lanthaler@...> wrote: > > If you look at how PUT is defined [1] it doesn't imply that the resource > exists. Thus a 405 in my opinion is the right response code. > > [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.6 > > > -- > Markus Lanthaler > @markuslanthaler > > --- Philippe Mougin wrote: > > Thanks Markus. My little concern with 405 is that it kind of implicitly > implies that the resource identified by the Request URI exists... That might > not be what the spec really intends to communicate (or it might be that my > reading is a bit off), though. > > Philippe > > > --- In rest-discuss@yahoogroups.com, "Markus Lanthaler" > <markus.lanthaler@> wrote: > > > > I think returning 405 is the most approriate status code in this case: > > > > 405 Method Not Allowed > > > > The method specified in the Request-Line is not allowed for the resource > > identified by the Request-URI. The response MUST include an Allow header > > containing a list of valid methods for the requested resource. > > > > -- > > > > Markus Lanthaler > > > > @markuslanthaler > > > > > > > > From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] > On > > Behalf Of Philippe Mougin > > Sent: Tuesday, October 25, 2011 4:29 PM > > To: rest-discuss@yahoogroups.com > > Subject: [rest-discuss] Status code when PUT musn't be used to create a > > resource > > > > > > > Hi, > > I often use the (common) pattern of exposing a "factory" resource that we > > POST to for creating other resources of a certain type. In some systems I > > also support PUT for updating the state of resources created using the > > factory, but do not want to allow creating such resources with PUT. 
> > In such contexts, which HTTP status code do you advise to return when an > > attempt is made to PUT to a request URI that does not identify an already > > existing resource? In other words, how do you signal that creation at that > > URI with PUT isn't possible because of this particular application design? > > > I have often used 404, but I'm also seeing 405 or even 403 being used. > > I'm curious about the collective wisdom of this group on this subject. > > Thanks, > > Philippe Mougin > > >
On Tue, Oct 25, 2011 at 6:08 PM, Nick Gall <nick.gall@...> wrote: > On Tue, Oct 25, 2011 at 12:54 PM, Mike Kelly <mike@...> wrote: >> >> On Tue, Oct 25, 2011 at 5:43 PM, Nick Gall <nick.gall@...> wrote: >> > On Tue, Oct 25, 2011 at 12:30 PM, Mike Kelly <mike@...> wrote: >> >> >> >> If the link relations are dereferenceable URIs, you can use HTTP to >> >> hint >> >> to clients about the freshness of the semantics (i.e. with >> >> Cache-Control, >> >> ETags, etc.) >> > >> > Indeed. And that's a good practice. But it will only give you "hints" >> > about >> > the coupling and whether it is still "intact". It won't prevent breaking >> > the >> > coupling if you change the semantics. And it won't prevent the need to >> > create a new link relation namespace if you want to enable new semantics >> > while leaving the old namespace with its semantics intact. >> >> It does prevent the need provided the changes aren't breaking. >> >> Is that not enough? > > No. Inevitably, if the hypermedia interface is successful and evolves, some > change WILL be breaking. That is the whole point of mnot's excellent blog > post on the subject. Right, the point being it gets you far enough so as there's no need to be concerned about having to create an additional link relation for a breaking change, as having to do so is reduced to the lowest rate realistic/possible for a machine-consumed application. On another note; I'm surprised nobody's preached the mystical evolvability-inducing powers of 'forms' yet - apparently they solve this problem. Or so we're told.. ;) Cheers, Mike
On Tue, Oct 25, 2011 at 10:29 AM, Mike Kelly <mike@...> wrote: > ** > Right, the point being it gets you far enough so as there's no need to > be concerned about having to create an additional link relation for a > breaking change, as having to do so is reduced to the lowest rate > realistic/possible for a machine-consumed application. > > On another note; I'm surprised nobody's preached the mystical > evolvability-inducing powers of 'forms' yet - apparently they solve > this problem. Or so we're told.. ;) > The forms offer insight into what the server is expecting for a particular operation. The interface will evolve in both directions. On the server side, in the number and variety of links that it publishes as the client navigates. Whether a client can actually recognize and support the new links is not really relevant. The clients (especially machine clients) will remain steadfast in their interpretation of the representation until a "breaking" event happens. This breaking event is either some underlying, foundation failure (i.e. the format has changed beyond recognition for the client), or it's imposed externally upon the client by the client's controller (for example, the desire to now support some of this new functionality). Through inspection, a client can look at the resource representation and determine what links are supported by the server, and match that list to the understanding of the client, as in which of those links it knows how to follow based on their rels. From an input perspective, the FORM publishes the information that the server wants to know. The process for the client is the same. The client will have to evolve in order to handle any new form fields just like it would have to evolve to handle any new links that it discovers. But the client will continue to process the "old way" until the breaking event occurs. 
In a space of several servers, using compatible representations, the form can be used to allow, for example, a more sophisticated client to interact with a less sophisticated server. Consider the case of a purchasing agent wanting to forward credit card information. A modern server would likely want not just the normal CC info (number, name, exp date) but also that magic code on the back of the card. An older server may not require that extra code. A client that is capable and aware of that code would see that an older server is simply not asking for this information and will therefore not provide it, rather than just cramming it down the server's throat "because they all do this". In the end, the client needs to understand how to interpret what it sees in the payloads, as well as how to populate the forms that are requested by the servers. The advantage of forms is that they let the server be reasonably explicit as to what it wants without having to rely on the client simply "knowing". With friendly clients and servers, this makes both ends of the interface much more evolutionary, and uses in-band information to manage this to boot. Regards, Will Hartung (willh@...)
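Will's credit-card example can be sketched as a client that fills in only the fields the server's form actually asks for; the field names and values below are illustrative, not from the thread. An older server that never requests the security code simply never receives it.

```python
# Sketch of form-driven input: the server's form enumerates the fields
# it wants, and the client supplies only those it both knows and was
# asked for. Field names here are hypothetical.

CLIENT_KNOWS = {
    "number": "4111111111111111",
    "name": "A. Agent",
    "expiry": "12/14",
    "security_code": "123",   # the "magic code on the back of the card"
}

def fill_form(requested_fields):
    """Populate a form using only the fields the server requested."""
    return {f: CLIENT_KNOWS[f] for f in requested_fields
            if f in CLIENT_KNOWS}

older_server = fill_form(["number", "name", "expiry"])
newer_server = fill_form(["number", "name", "expiry", "security_code"])
# older_server omits security_code; newer_server includes it
```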
On Tue, Oct 25, 2011 at 7:03 PM, Will Hartung <willh@...> wrote: > > > On Tue, Oct 25, 2011 at 10:29 AM, Mike Kelly <mike@...> wrote: >> >> Right, the point being it gets you far enough so as there's no need to >> be concerned about having to create an additional link relation for a >> breaking change, as having to do so is reduced to the lowest rate >> realistic/possible for a machine-consumed application. >> >> On another note; I'm surprised nobody's preached the mystical >> evolvability-inducing powers of 'forms' yet - apparently they solve >> this problem. Or so we're told.. ;) > > The forms offer insight in to what the server is expecting for a particular > operation. > > The interface will evolve in both directions. On the server side, in the > number and variety of links that it publishes as the client navigates. > Whether a client can actually recognize and support the new links is not > really relevant. The clients (especially machine clients) will remain > steadfast in their interpretation of the representation until a "breaking" > event happens. > > This breaking event is either some underlying, foundation failure (i.e. the > format has changed beyond recognition for the client), or its imposed > externally upon the client by the clients controller (for example the want > to now support some of this new functionality). > > Through inspection, a client can look at the resource representation and > determine what links are supported by the server, and match that list to the > understanding of the client, as in which of those links it knows how to > follow based on their rels. > > From an input perspective, the FORM publishes the information that the > server wants to know. The process for the client is the same. The client > will have to evolve in order to handle any new form fields just like it > would have to evolve to handle any new links that it discovers. But the > client will continue to process the "old way" until the breaking event > occurs. i.e. 
in this respect, forms achieve the same thing as link relations but in a more complicated way? > In a space of several servers, using compatible representations, the use of > the form can be used to allow, for example, a more sophisticated client to > interact with a less sophisticated server. > > Consider the case of a purchasing agent wanting to forward credit card > information. A modern server would likely want not just the normal CC info > (number, name, exp date) but also that magic code on the back of the card. > An older server may not require that extra code. > > A client that is capable and aware of that code would see that an older > server is simply not asking for this information and will there for not > provide it rather than just cramming it down the servers throat "because > they all do this". The problem with this is that it can be handled trivially on the server side (by ignoring irrelevant parts of the client request), in contrast to the complexity introduced on the client side by requiring them to incorporate dynamic form interaction to their work flow. Most people want to make life as easy as possible for their clients. If there's a means to an ends resulting in less complexity for clients it's very likely that will get picked - and rightly so. > In the end, the client needs to understand how to interpret what it sees in > the payloads, and well as how to populate the forms that are requested by > the servers. The advantage of the forms is that it lets the servers be > reasonably explicit as to what it wants without having to rely on the client > simply "knowing". With friendly clients and servers, this makes the both > ends of the interface much more evolutionary, and uses in band information > to manage this to boot. I disagree that the result is much more evolutionary. Afaict, omitting and renaming the data produced by clients at run time is the extent of the additional capabilities forms introduce. 
As I said above, I'm unconvinced that the cost of adding significant additional requirements to interact with your application outweighs the benefits - particularly given that the benefits seem to be at best marginal, and at worst non-existent. Cheers, Mike
On the contrary, since it specifies that "The response MUST include an Allow header containing a list of valid methods for the requested resource," it seems to me that you are indicating that it is a valid resource. That does NOT imply that GET is allowed, just that something is allowed. 405 is more semantically correct, but you could return a 404 in any case as a kind of security-through-obscurity. REST in Practice has a note: "Defaulting to a 404 Not Found response is commonplace on the Web in situations where a consumer can’t make any forward progress. We can adopt the same approach for our web services, using 404 to indicate simply that no further action is allowed; in many circumstances, we’d rather do this than give away more specific details (such as would be conveyed by 401, 405, 409, or 413), any of which might give an attacker a useful glimpse into the state of the service." On Oct 25, 2011, at 10:14 AM, Philippe Mougin wrote: > To clarify, I was not referring to the wording of the PUT definition, but to the wording of the 405 status code definition (i.e., what you quoted in your previous message). > > Philippe > > --- In rest-discuss@yahoogroups.com, "Markus Lanthaler" <markus.lanthaler@...> wrote: > > > > If you look at how PUT is defined [1] it doesn't imply that the resource > > exists. Thus a 405 in my opinion is the right response code. > > > > [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.6 > > > > > > -- > > Markus Lanthaler > > @markuslanthaler > > > > --- Philippe Mougin wrote: > > > > Thanks Markus. My little concern with 405 is that it kind of implicitly > > implies that the resource identified by the Request URI exists... That might > > not be what the spec really intends to communicate (or it might be that my > > reading is a bit off), though. 
> > > > Philippe > > > > > > --- In rest-discuss@yahoogroups.com, "Markus Lanthaler" > > <markus.lanthaler@> wrote: > > > > > > I think returning 405 is the most approriate status code in this case: > > > > > > 405 Method Not Allowed > > > > > > The method specified in the Request-Line is not allowed for the resource > > > identified by the Request-URI. The response MUST include an Allow header > > > containing a list of valid methods for the requested resource. > > > > > > -- > > > > > > Markus Lanthaler > > > > > > @markuslanthaler > > > > > > > > > > > > From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] > > On > > > Behalf Of Philippe Mougin > > > Sent: Tuesday, October 25, 2011 4:29 PM > > > To: rest-discuss@yahoogroups.com > > > Subject: [rest-discuss] Status code when PUT musn't be used to create a > > > resource > > > > > > > > > > Hi, > > > I often use the (common) pattern of exposing a "factory" resource that we > > > POST to for creating other resources of a certain type. In some systems I > > > also support PUT for updating the state of resources created using the > > > factory, but do not want to allow creating such resources with PUT. > > > In such contexts, which HTTP status code do you advise to return when an > > > attempt is made to PUT to a request URI that does not identify an already > > > existing resource? In other words, how do you signal that creation at that > > > URI with PUT isn't possible because of this particular application design? > > > > > I have often used 404, but I'm also seeing 405 or even 403 being used. > > > I'm curious about the collective wisdom of this group on this subject. > > > Thanks, > > > Philippe Mougin > > > > > > >
I say yes, it should not break the client. This is more than just hypermedia; it's a general principle applied to HTTP. If I introduce new hypermedia controls, older clients should just ignore them. On Sun, Oct 23, 2011 at 9:39 AM, bruce.krakower <bruce.krakower@...> wrote: > ** > > > Regarding evolvability, can we really say if new hypermedia controls should > or shouldn't "break" a hypermedia client? This seems to get a bit fuzzy when > a new resource interaction may be required for a given application domain. > >
Hello there, I've just joined the REST Discuss group and I have several questions that I've still not found an answer for. One of these is about hierarchical relationships (many belong to one) in a RESTful API. For instance, if there are two kinds of resources, articles and comments, with the following URLs: GET /articles - lists the articles GET /articles/:ID - shows one specific article GET /comment/:ID - gives back one comment what is the right way to have an API list the comments that belong to one of the articles? One option I thought of would be to have something like: GET /articles/:ID/comments But this doesn't feel quite right and it doesn't seem to scale if the nesting is more than one level deep. What are your thoughts on this? Constantin Tovisi
On Wed, Oct 26, 2011 at 11:43 AM, titel <constantin.tovisi@...> wrote: > > > > Hello there, > > I've just joined the REST Discuss group and I have several questions that I've still not found and answer for. > > One of these is about hierarchical relationships (many belong to one) in a RESTfull API. > > For instance if there are two kinds of resources, articles and comments, with the following URLs: > > GET /articles - lists the articles > GET /articles/:ID - shows one specific article > > GET /comment/:ID - gives back one comment > > what is the right way to have an API list the comments that belong to one of the articles. > > One option I thought of would be to have something like: > > GET /articles/:ID/comments > > But this doesn't feel quite right and it doesn't seem to scale if the nesting is more than one level deep. > > What are your thoughts on this? Hi Constantin, the way you structure your URLs does not really matter in REST, but, as many like me point out, it's cool to provide nice URLs to your clients if you correctly implement REST's hypermedia tenet. I still don't have a firm idea about your use case, but some months ago I was looking at HTSQL[1] to do basically this kind of thing. [1] http://htsql.org/doc/introduction.html > > Constantin Tovisi > > -- Nadalin Alessandro www.odino.org www.twitter.com/_odino_
On Wed, Oct 26, 2011 at 11:43 AM, titel <constantin.tovisi@...> wrote: > ** > > > Hello there, > > I've just joined the REST Discuss group and I have several questions that > I've still not found and answer for. > > One of these is about hierarchical relationships (many belong to one) in a > RESTfull API. > > Hierarchies in a RESTful API should be expressed as links between parents and children, not what the URLs look like. Choose a URL structure that you think will not change for a few years. e.g. Amazon's web page for "Childrens' Books" might be structurally "beneath" the page for "Books". A hierarchy if I ever saw one. However the URLs don't reflect this. Instead, the page for Books includes a link to the "child" pages, and the "Childrens' books" page links back up to the "Books" page. Likewise in an API, you should expose links between your hierarchical resources indicating how to go "up" and/or "down" the hierarchy. There's an internet-draft for expressing these relations inside atom links, although IMHO the "up" link relation as it's defined is a bit limiting, since it says that the resource is a "list of parent resources", when the normal case would be (in a hierarchy) that an item only has a single parent. I would have fixed it by having many "up" links when a resource belongs to different hierarchies, or is part of a directed graph. But that's me. Here's the internet draft: http://tools.ietf.org/html/draft-divilly-atom-hierarchy-03 -- -mogsie-
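The links-not-URLs approach mogsie describes can be sketched as representations that advertise the hierarchy through link relations; the rel names "up" and "collection" echo the Atom hierarchy draft he cites, while "replies" and all URIs are illustrative assumptions for Constantin's articles/comments case.

```python
# Sketch: the article/comment hierarchy is expressed as links in the
# representations, so clients navigate by rel rather than by parsing
# URI structure. rel names and URIs are illustrative.

def article_representation(article_id):
    return {
        "id": article_id,
        "title": "...",
        "links": [
            {"rel": "collection", "href": "/articles"},
            {"rel": "replies", "href": "/articles/%s/comments" % article_id},
        ],
    }

def comment_representation(comment_id, article_id):
    return {
        "id": comment_id,
        "links": [
            # "up" points at the single parent article
            {"rel": "up", "href": "/articles/%s" % article_id},
        ],
    }
```

A client that understands the "replies" rel can find an article's comments however the server happens to mint the URIs, and the server is free to restructure them later.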
Mike, you have a very restrictive or biased view of forms, which may have been induced by XForms or some other technology. At heart, a form is only a means for the server to say to a client "I'll let you know where you can put the data in this bit I give you", or sometimes "do as usual but start with this content", along with some additional control data.

I'd argue that introducing forms, and clients that can process them, is a more expensive solution that buys you the ability to change the server independently of the client, and removes some of the domain-specific modelling from the media type. Depending on your scenario that cost may or may not be justified. HTML browsers have full justification for this, and if you disagree that HTML forms were a good thing then I'll just agree to disagree with you.

I'll also disagree that implementing form-based interaction has to be complicated. Be it in a markup world or a JSON world, those things can be built and extended fairly cheaply, and that includes using a small subset of XForms, as IanR has shown many times.

Forms are a tool that has proven itself valuable for those that have implemented them, for their scenarios, and blanket statements are helping no one on this list or elsewhere implement anything that matches their needs any better.

________________________________________
From: rest-discuss@yahoogroups.com [rest-discuss@yahoogroups.com] on behalf of Mike Kelly [mike@...]
Sent: 25 October 2011 18:29
To: Nick Gall
Cc: Glenn Block; mike amundsen; rest-discuss; hypermedia-web@...
Subject: Re: [rest-discuss] Definition of a Hypermedia Client

On Tue, Oct 25, 2011 at 6:08 PM, Nick Gall <nick.gall@...> wrote:
> On Tue, Oct 25, 2011 at 12:54 PM, Mike Kelly <mike@...> wrote:
>> On Tue, Oct 25, 2011 at 5:43 PM, Nick Gall <nick.gall@...> wrote:
>>> On Tue, Oct 25, 2011 at 12:30 PM, Mike Kelly <mike@...> wrote:
>>>> If the link relations are dereferenceable URIs, you can use HTTP to
>>>> hint to clients about the freshness of the semantics (i.e. with
>>>> Cache-Control, ETags, etc.)
>>>
>>> Indeed. And that's a good practice. But it will only give you "hints"
>>> about the coupling and whether it is still "intact". It won't prevent
>>> breaking the coupling if you change the semantics. And it won't prevent
>>> the need to create a new link relation namespace if you want to enable
>>> new semantics while leaving the old namespace with its semantics intact.
>>
>> It does prevent the need, provided the changes aren't breaking.
>>
>> Is that not enough?
>
> No. Inevitably, if the hypermedia interface is successful and evolves, some
> change WILL be breaking. That is the whole point of mnot's excellent blog
> post on the subject.

Right, the point being it gets you far enough that there's no need to be concerned about having to create an additional link relation for a breaking change, as having to do so is reduced to the lowest rate realistically possible for a machine-consumed application.

On another note; I'm surprised nobody's preached the mystical evolvability-inducing powers of 'forms' yet - apparently they solve this problem. Or so we're told.. ;)

Cheers,
Mike

------------------------------------
Yahoo! Groups Links
I keep hearing that the way you structure your URLs doesn't really matter in REST, but doesn't it affect caching? I could definitely be wrong about this; if anyone would set me straight I would appreciate it.

Given two options:

* Hierarchical: /articles/:ID/comments/:ID
* Flat: /articles/:ID and /comments/:ID

Say all are cacheable, and we have cached an article with ID=1 and a comment with ID=2.

So my understanding is, with POST or PUT:

* Hierarchical:
** POST /articles/1/comments will invalidate the cache for /articles/1/comments, but not for /articles/1
** POST /articles/ will invalidate the cache for /articles, /articles/1, /articles/1/comments and /articles/1/comments/2
* Flat:
** POST /comments/ will invalidate the cache for /comments/ and /comments/2 and nothing else.
** POST /articles/ will invalidate the cache for /articles/ and /articles/1 and nothing else (leaving the cache for comments intact).

First, is this supposed to be how things work (i.e. some kind of spec)?

Second, whether there is a spec or not, is it actually how things work in proxies? (I know local browser caches or the iPhone cache don't really work that way.)

On Oct 26, 2011, at 5:28 AM, Alessandro Nadalin wrote:
> On Wed, Oct 26, 2011 at 11:43 AM, titel <constantin.tovisi@...> wrote:
>> Hello there,
>>
>> I've just joined the REST Discuss group and I have several questions that I've still not found an answer for.
>>
>> One of these is about hierarchical relationships (many belong to one) in a RESTful API.
>>
>> For instance, if there are two kinds of resources, articles and comments, with the following URLs:
>>
>> GET /articles - lists the articles
>> GET /articles/:ID - shows one specific article
>>
>> GET /comment/:ID - gives back one comment
>>
>> what is the right way to have an API list the comments that belong to one of the articles?
>>
>> One option I thought of would be to have something like:
>>
>> GET /articles/:ID/comments
>>
>> But this doesn't feel quite right, and it doesn't seem to scale if the nesting is more than one level deep.
>>
>> What are your thoughts on this?
>
> Hi Constantin,
> the way you structure your URLs does not really matter in REST,
> but, as many like me point out, it's nice to provide clean URLs to your
> clients if you correctly implement REST's hypermedia tenet.
> I still don't have a firm idea about your use case, but some months
> ago I was looking at HTSQL [1] to do basically this kind of thing.
>
> [1] http://htsql.org/doc/introduction.html
>
>> Constantin Tovisi
>
> --
> Nadalin Alessandro
> www.odino.org
> www.twitter.com/_odino_
On Wed, Oct 26, 2011 at 8:10 PM, Jason Erickson <jason@...>wrote:
> I keep hearing that the way you structure your URLs doesn't really matter
> in REST, but doesn't it affect caching? I could definitely be wrong about
> this, if anyone would set me straight I would appreciate it.
>
[...]
> ** POST /articles/ will invalidate the cache for /articles, /articles/1,
> /articles/1/comments and /articles/1/comments/2
>
[...]
> First, is this supposed to be how things work (i.e. some kind of spec)?
>
POST invalidating a resource [1] does not imply invalidating a "sub"
resource (in the hierarchical sense of the URI Generic Syntax [2]). A POST
invalidates the URI itself, and perhaps a Location or Content-Location, but
only identifiable resources. Imagine POSTing to the root resource ("/") of
any server automatically invalidating all caches. Crazy :-)
URI Generic Syntax specifies that URIs use a hierarchical syntax, and that
URIs have a hierarchical portion (path component) and a non-hierarchical
part (query component), but the primary role of a URI is to identify a
resource.
If you have a resource that is a logical hierarchy, it's tempting to re-use
the hierarchy in the URI path component. I would ask you if you're pretty
sure that, three years from now, or ten years from now, your resources are
still in the same hierarchy.
> Second, whether there is a spec or not, is it actually how things work in
> proxies? (I know local browser caches or the iPhone cache don't really
> work that way.)
>
I'm pretty sure proxies don't do this either.
[1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.10
[2] http://tools.ietf.org/html/rfc3986#section-1.2.3
--
-mogsie-
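[Editor's note: the invalidation scope described above can be made concrete with a toy sketch. This is not a real HTTP cache; the URIs and the helper function are invented for illustration of the RFC 2616 section 13.10 rule that an unsafe method invalidates only the URIs the exchange actually names, never "sub" URIs that merely share a path prefix.]

```python
# Toy cache: per RFC 2616 sec. 13.10, an unsafe method (POST/PUT/DELETE)
# invalidates the effective request URI, plus any Location /
# Content-Location URI in the response, and nothing else.
cache = {
    "/articles": "<list of articles>",
    "/articles/1": "<article 1>",
    "/articles/1/comments": "<comments for article 1>",
}

def on_unsafe_request(cache, request_uri, location=None):
    """Invalidate only the URIs the spec names; no prefix matching."""
    for uri in (request_uri, location):
        if uri is not None:
            cache.pop(uri, None)

on_unsafe_request(cache, "/articles")
assert "/articles" not in cache
# The hierarchically "nested" entries survive untouched:
assert "/articles/1" in cache
assert "/articles/1/comments" in cache
```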
Thanks, that clears things up quite a bit for me and also shows that URL structure indeed does not matter. So if I wanted the behavior of invalidating sub-resources, there's no guaranteed way to do it, and the only way to do it at all would be to ask the client (in documentation) to explicitly revalidate sub-resources.
Is there any way to tell a client in a response to invalidate any resources? (For example, I PUT to /articles/1 and I'd like to say in the response that the cached version of /articles/ and /articles/1/comments are not fresh.)
On Oct 26, 2011, at 12:13 PM, Erik Mogensen wrote:
> On Wed, Oct 26, 2011 at 8:10 PM, Jason Erickson <jason@jasonerickson.com> wrote:
>
> I keep hearing that the way you structure your URLs doesn't really matter in REST, but doesn't it affect caching? I could definitely be wrong about this; if anyone would set me straight I would appreciate it.
> [...]
> ** POST /articles/ will invalidate the cache for /articles, /articles/1, /articles/1/comments and /articles/1/comments/2
> [...]
> First, is this supposed to be how things work (i.e. some kind of spec)?
>
> POST invalidating a resource [1] does not imply invalidating a "sub" resource (in the hierarchical sense of the URI Generic Syntax [2]). a POST invalidates the URI itself, perhaps a Location or Content-Location, but only identifiable resources. Imagine POSTing to the root resource ("/") of any server, automatically invalidating all caches. Crazy :-)
>
> URI Generic Syntax specifies that URIs use a hierarchical syntax, and that URIs have a hierarchical portion (path component) and a non-hierarchical part (query component), but the primary role of a URI is to identify a resource.
>
> If you have a resource that is a logical hierarchy, it's tempting to re-use the hierarchy in the URI path component. I would ask you if you're pretty sure that, three years from now, or ten years from now, your resources are still in the same hierarchy.
>
> Second, whether there is a spec or not, is it actually how things work in proxies? (I know local browser caches or the iPhone cache don't really work that way.)
>
> I'm pretty sure proxies don't do this either.
>
> [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.10
> [2] http://tools.ietf.org/html/rfc3986#section-1.2.3
> --
> -mogsie-
>
>
hello.
On 2011-10-26 15:41 , Jason Erickson wrote:
> Is there any way to tell a client in a response to invalidate any
> resources? (For example, I PUT to /articles/1 and I'd like to say in the
> response that the cached version of /articles/ and /articles/1/comments
> are not fresh.)
nope. a cache is not something you can manipulate at will from the
server. however, by serving the right metadata (modification dates
and/or etags) you can make sure that should the client decide to interact
with /articles/ again, any cached copy will be stale. the important
thing here is that servers are unaware of the existence of caches, which
are simply optimizing intermediaries. the only thing that counts is that
client/server communications are designed in a way such that those
intermediaries can do their work as effectively as possible.
cheers,
dret.
--
erik wilde | mailto:dret@... - tel:+1-510-2061079 |
| UC Berkeley - School of Information (ISchool) |
| http://dret.net/netdret http://twitter.com/dret |
On Thu, Oct 27, 2011 at 1:37 AM, Erik Wilde <dret@...> wrote:
> On 2011-10-26 15:41 , Jason Erickson wrote:
> > Is there any way to tell a client in a response to invalidate any
> > resources? (For example, I PUT to /articles/1 and I'd like to say in the
> > response that the cached version of /articles/ and /articles/1/comments
> > are not fresh.)
>
> [...] the important
> thing here is that servers are unaware of the existence of caches, which
> are simply optimizing intermediaries.

Exactly.

The problem isn't directly restricted to PUT and invalidation; it's a general invalidation issue. It's one of the three big things that CS can't get right. The other is off-by-one bugs.

Jason, imagine a server and two caches (that don't know about each other, because they're on the big old internet, or in two separate company intranets). Two users work with the same origin server, but go through different caches. The caches will have different sets of resources cached, and may also invalidate resources because a POST went through. So even if a resource _only_ ever is invalidated by means of a POST or PUT, caches still have to revalidate their cached items from time to time, since the POST might not go through that particular cache.

Allowing POST to invalidate more resources than the request URI only makes the invalidation issue harder, I believe.
--
-mogsie-
First of all, I want to thank everyone who has contributed so far. Secondly,
I keep hearing that URLs don't matter in a RESTful system, and this is
something that I'm aware of. I am also aware of HATEOAS and that the whole
API should be 'browsable' through link relations.
However, people seem to get caught up in this point and not be able to look
at my issue as a whole. So I'm going to try my luck one more time.
Regardless of the URL itself, let's consider the next example (continued
from the example I started with):
GET {SOME_URL} - lists all the articles
GET {SOME_URL}/:ID - shows one specific article
GET {OTHER_URL} - lists all the comments
My question now is, how do you represent another resource that is related
hierarchically to the previous one? I guess that my question ultimately
comes down to: *how would you get a list of resources belonging to another
one in a RESTful system?*
To go on with my example:
Does it make more sense to have another URL altogether where all comments
belonging to one article reside?
GET {YET_ANOTHER_URL}/:ARTICLE_ID - shows comments belonging to
article ARTICLE_ID
Or have a single place where all comments are, and somehow filter the ones
belonging to an article through something like a query string.
GET {OTHER_URL}?belongs_to=:ARTICLE_ID - lists all the comments that belong
to the article with the ARTICLE_ID id
Constantin TOVISI
0752 860.612
constantin.tovisi@...
On Thu, Oct 27, 2011 at 10:17 AM, Erik Mogensen <erik@mogensoft.net> wrote:
> **
>
>
>
>
> On Thu, Oct 27, 2011 at 1:37 AM, Erik Wilde <dret@...> wrote:
>
>> **
>> On 2011-10-26 15:41 , Jason Erickson wrote:
>> > Is there any way to tell a client in a response to invalidate any
>> > resources? (For example, I PUT to /articles/1 and I'd like to say in the
>> > response that the cached version of /articles/ and /articles/1/comments
>> > are not fresh.)
>>
>> [...] the important
>> thing here is that servers are unaware of the existence of caches, which
>> are simply optimizing intermediaries.
>>
>
> Exactly.
>
> The problem isn't directly restricted to PUT and invalidation, but a
> general invalidation issue. It's one of the three big things that CS can't
> get right. The other is off-by-one bugs.
>
> Jason, imagine a server and two caches (that don't know about each other,
> because they're on the big old internet, or in two separate company
> intranets). Two users work with the same origin server, but go through
> different caches. The caches will have different sets of resources cached,
> and may also invalidate resources because a POST went through. So even if a
> resource _only_ ever is invalidated by means of a POST or PUT, caches still
> have to revalidate their cached items from time to time, since the POST
> might not go through that particular cache.
>
> Allowing POST to invalidate more resources than the request URI only makes
> the invalidation issue harder, I believe.
> --
> -mogsie-
>
>
>
In HTTP itself there really is no way of expressing this relationship.
It's up to the hypermedia type to express such linking.
You could, however, express the relationship using link relations [1].
Consider Atom, and how it uses links to express an alternate version of an
entry.
<feed>
....
<entry>
...
<link rel="alternate" href="some-href"/>
</entry>
</feed>
We could easily extend this to allow for comments in some way, assuming the
comments are a feed as well.
<feed>
....
<link rel="up" href="http://example.com/article/1"/>
<entry>
...
<link rel="related" href="some-article-href"/>
</entry>
</feed>
[1]: http://www.iana.org/assignments/link-relations/link-relations.xml
--
Erlend
On Thu, Oct 27, 2011 at 10:14 AM, Constantin Tovisi <
constantin.tovisi@...> wrote:
> **
>
>
> First of all, I want to thank all the ones who contributed so
> far. Secondly, I keep hearing that URLs don't matter in a RESTful system,
> and this is something that I'm aware of. I am as well aware of HATEOAS and
> that all the API should be 'browsable' through link relations.
>
> However, people seem to get caught in this thing and not be able to look
> into my issue as a whole. So I'm going to try my luck one more time.
>
> Regardless of the URL itself, let's consider the next example (continued
> from the example I started with):
>
> GET {SOME_URL} - lists all the articles
> GET {SOME_URL}/:ID - shows one specific article
>
> GET {OTHER_URL} - lists all the comments
>
>
> My question now is, how do you represent another resource that is related
> hierarchically to the previous one? I guess that my question ultimately
> comes down to: *how would you get a list of resources belonging to another
> one in a RESTful system?*
>
> To go on with my example:
>
> Does it make more sense to have another URL altogether where all
> comments belonging to one article reside?
>
> GET {YET_ANOTHER_URL}/:ARTICLE_ID - shows comments belonging to
> article ARTICLE_ID
>
>
> Or have a single place where all comments are, and somehow filter the ones
> belonging to an article through something like a query string.
>
> GET {OTHER_URL}?belongs_to=:ARTICLE_ID - lists all the comments that
> belong to the article with the ARTICLE_ID id
>
> Constantin TOVISI
>
> 0752 860.612
> constantin.tovisi@gmail.com
>
>
>
>
> On Thu, Oct 27, 2011 at 10:17 AM, Erik Mogensen <erik@mogensoft.net>wrote:
>
>> **
>>
>>
>>
>>
>> On Thu, Oct 27, 2011 at 1:37 AM, Erik Wilde <dret@...> wrote:
>>
>>> **
>>> On 2011-10-26 15:41 , Jason Erickson wrote:
>>> > Is there any way to tell a client in a response to invalidate any
>>> > resources? (For example, I PUT to /articles/1 and I'd like to say in
>>> the
>>> > response that the cached version of /articles/ and /articles/1/comments
>>> > are not fresh.)
>>>
>>> [...] the important
>>> thing here is that servers are unaware of the existence of caches, which
>>> are simply optimizing intermediaries.
>>>
>>
>> Exactly.
>>
>> The problem isn't directly restricted to PUT and invalidation, but a
>> general invalidation issue. It's one of the three big things that CS can't
>> get right. The other is off-by-one bugs.
>>
>> Jason, imagine a server and two caches (that don't know about each other,
>> because they're on the big old internet, or in two separate company
>> intranets). Two users work with the same origin server, but go through
>> different caches. The caches will have different sets of resources cached,
>> and may also invalidate resources because a POST went through. So even if
>> a resource _only_ ever is invalidated by means of a POST or PUT, caches
>> still have to revalidate their cached items from time to time, since the
>> POST might not go through that particular cache.
>>
>> Allowing POST to invalidate more resources than the request URI only
>> makes the invalidation issue harder, I believe.
>> --
>> -mogsie-
>>
>>
>
>
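[Editor's note: the Atom sketch in the answer above can be consumed without any knowledge of URL structure. A minimal sketch using Python's stdlib XML parser; the feed below is the answer's second example, made well-formed with the Atom namespace, and the URLs are illustrative.]

```python
import xml.etree.ElementTree as ET

# The rel="up" example from the answer, fleshed out with the Atom namespace.
doc = """
<feed xmlns="http://www.w3.org/2005/Atom">
  <link rel="up" href="http://example.com/article/1"/>
  <entry>
    <link rel="related" href="http://example.com/some-article"/>
  </entry>
</feed>
"""

ATOM = "{http://www.w3.org/2005/Atom}"
feed = ET.fromstring(doc)

# The client navigates by link relation, not by constructing URLs:
up = next(link.get("href")
          for link in feed.findall(ATOM + "link")
          if link.get("rel") == "up")
assert up == "http://example.com/article/1"
```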
On Thu, Oct 27, 2011 at 10:14 AM, Constantin Tovisi <
constantin.tovisi@...> wrote:
> **
> Does it make more sense to have another URL altogether where all comments
> belonging to one article reside?
>
> GET {YET_ANOTHER_URL}/:ARTICLE_ID - shows comments belonging to
> article ARTICLE_ID
>
>
> Or have a single place where all comments are, and somehow filter the ones
> belonging to an article through something like a query string.
>
> GET {OTHER_URL}?belongs_to=:ARTICLE_ID - lists all the comments that belong
> to the article with the ARTICLE_ID id
>
>
As a client-side developer, you shouldn't know these things; you should
discover them. In the HTML case a browser gets
<a href="/yet-another-url/4534">Comments</a>
and follows the link. This is the REST ideal, that clients don't know how
URIs are structured "a priori" but discover URIs (or their structure) at run
time. A client doesn't care if the URL happened to be
<a href="/other-url?belongs_to=4534">Comments</a>
As a server-side developer, you must of course care about these things, and
choosing one over the other has more to do with style and your own sense of
longevity, e.g. what URI structure fits your scenario and current technology
stack.
If you have full control over the client and the server, you can of course
do what you choose, but then I would call it an HTTP API or even an RPC API,
since that would be a more accurate description.
--
-mogsie-
Hi everyone,

I wonder which, if any, best practices there are regarding the use of trailing slashes in resource URLs in REST services? I realize, and basically agree, that URLs should be opaque. But the question of how trailing slashes can/should be handled (or not) could make a REST service easier to use for a developer. It also has consequences for how a client can follow (relative) links.

One could interpret:

GET /books

as getting a resource representing all books; basically a summary of the "books" resource, and:

GET /books/

as getting a list of all resources "contained" in /books, i.e. sub-resources, in the form of a list of URLs to all individual book resources.

But it feels like introducing out-of-band knowledge/conventions. I guess an orthodox restafarian would make no assumptions at all and simply treat "/books" and "/books/" as two completely different resources which just happen to have identifiers that differ in a single character.

Comments?

/Paul

--
Paul Cohen
www.seibostudios.se
mobile: +46 730 787 035
e-mail: paul.cohen@...
Yes you can do this.
Mark and I published a draft of a mechanism (LCI) which solves this exact
problem:
http://tools.ietf.org/html/draft-nottingham-linked-cache-inv-00
Here's a blog post outlining how it works:
http://restafari.blogspot.com/2010/04/link-header-based-invalidation-of.html
Cheers,
Mike
On Wed, Oct 26, 2011 at 11:41 PM, Jason Erickson <jason@...>wrote:
>
>
> Thanks, that clears things up quite a bit for me and also shows that URL
> structure indeed does not matter. So if I *wanted* the behavior of
> invalidating sub-resources, there's no way to do it in a guaranteed way and
> the only way to do it at all would be to ask the client (in documentation)
> to explicitly revalidate sub-resources.
>
> Is there any way to tell a client in a response to invalidate any
> resources? (For example, I PUT to /articles/1 and I'd like to say in the
> response that the cached version of /articles/ and /articles/1/comments are
> not fresh.)
>
> On Oct 26, 2011, at 12:13 PM, Erik Mogensen wrote:
>
>
>
> On Wed, Oct 26, 2011 at 8:10 PM, Jason Erickson <jason@...>wrote:
>
>> I keep hearing that the way you structure your URLs doesn't really matter
>> in REST, but doesn't it affect caching? I could definitely be wrong about
>> this, if anyone would set me straight I would appreciate it.
>>
> [...]
>
>> ** POST /articles/ will invalidate the cache for /articles, /articles/1,
>> /articles/1/comments and /articles/1/comments/2
>>
> [...]
>
>> First, is this supposed to be how things work (i.e. some kind of spec)?
>>
>
> POST invalidating a resource [1] does not imply invalidating a "sub"
> resource (in the hierarchical sense of the URI Generic Syntax [2]). a POST
> invalidates the URI itself, perhaps a Location or Content-Location, but only
> identifiable resources. Imagine POSTing to the root resource ("/") of any
> server, automatically invalidating all caches. Crazy :-)
>
> URI Generic Syntax specifies that URIs use a hierarchical syntax, and that
> URIs have a hierarchical portion (path component) and a non-hierarchical
> part (query component), but the primary role of a URI is to identify a
> resource.
>
> If you have a resource that is a logical hierarchy, it's tempting to re-use
> the hierarchy in the URI path component. I would ask you if you're pretty
> sure that, three years from now, or ten years from now, your resources are
> still in the same hierarchy.
>
>> Second, whether there is a spec or not, is it actually how things work in
>> proxies? (I know local browser caches or the iPhone cache don't really
>> work that way.)
>>
>
> I'm pretty sure proxies don't do this either.
>
> [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.10
> [2] http://tools.ietf.org/html/rfc3986#section-1.2.3
> --
> -mogsie-
>
>
>
>
>
>
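[Editor's note: a hedged sketch of what an LCI-style response to the earlier "PUT /articles/1" example might carry. The header values follow one reading of the -00 draft linked above, which defines an "invalidates" link relation; treat the exact syntax as an assumption, and the parsing below as deliberately naive.]

```python
# Hypothetical headers for a response to "PUT /articles/1": a gateway
# cache that understands LCI would mark the linked URIs stale as well.
response_headers = {
    "Link": ('</articles/>; rel="invalidates", '
             '</articles/1/comments>; rel="invalidates"'),
    "Cache-Control": "max-age=3600",
}

# Naive parse of the Link header into (uri, rel) pairs; a real client
# would use a proper Link-header parser.
links = []
for part in response_headers["Link"].split(", <"):
    uri, _, rel = part.partition('>; rel=')
    links.append((uri.lstrip("<"), rel.strip('"')))

assert ("/articles/", "invalidates") in links
assert ("/articles/1/comments", "invalidates") in links
```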
Invalidation mechanisms are useful for gateway (reverse proxy) caching layers. If servers were completely unaware of intermediaries, what would be the purpose of the s-maxage cache-control directive?

Cheers,
Mike

On Thu, Oct 27, 2011 at 12:37 AM, Erik Wilde <dret@...> wrote:
> hello.
>
> On 2011-10-26 15:41 , Jason Erickson wrote:
>> Is there any way to tell a client in a response to invalidate any
>> resources? (For example, I PUT to /articles/1 and I'd like to say in the
>> response that the cached version of /articles/ and /articles/1/comments
>> are not fresh.)
>
> nope. a cache is not something you can manipulate at will from the
> server. however, by serving the right metadata (modification dates
> and/or etags) you can make sure that should the client decide to interact
> with /articles/ again, any cached copy will be stale. the important
> thing here is that servers are unaware of the existence of caches, which
> are simply optimizing intermediaries. the only thing that counts is that
> client/server communications are designed in a way such that those
> intermediaries can do their work as effectively as possible.
>
> cheers,
>
> dret.
>
> --
> erik wilde | mailto:dret@... - tel:+1-510-2061079 |
> | UC Berkeley - School of Information (ISchool) |
> | http://dret.net/netdret http://twitter.com/dret |
hello.
On 2011-10-27 07:34 , Mike Kelly wrote:
> If servers were completely unaware of intermediaries what would be the
> purpose of the s-maxage cache-control directive?
i hope i did not sound as if servers were not aware of the fact that
there can be caches. of course they are, and that's the reason why
serving things correctly is so important. but apart from the one
scenario you're mentioning (origin server and tightly coupled reverse
proxy), servers have no way to tell if there are any intermediaries in
the chain and where they might be. all they can do is rely on the fact
that if there are any, they have to play by the rules.
http://tools.ietf.org/html/draft-nottingham-linked-cache-inv-00 works
around this by assuming that origin server and cache are tightly
coupled. since it adds to HTTP, you cannot rely on it unless you can
guarantee that all intermediaries understand it. which is close to
impossible outside of closed environments, but a valid assumption in a
controlled setting.
cheers,
dret.
--
erik wilde | mailto:dret@... - tel:+1-510-2061079 |
| UC Berkeley - School of Information (ISchool) |
| http://dret.net/netdret http://twitter.com/dret |
Right, the primary use for LCI is with gateway caches - just wanted to clarify that this is actually possible with certain types of cache, which didn't seem clear in your response.

Fwiw, if the mechanism were adopted by browsers in their private caches, you could also rely on invalidation, to a lesser extent, for invalidating privately cached resources in the browser too. It's not a silver bullet, but could potentially allow you some more breathing room on your expiration lengths.

I also forgot to mention that the httpbis draft has a similar (but limited) invalidation mechanism via the Content-Location and Location headers:

http://tools.ietf.org/html/draft-ietf-httpbis-p6-cache-16#section-2.5

Cheers,
Mike

On Thu, Oct 27, 2011 at 3:42 PM, Erik Wilde <dret@...> wrote:
> hello.
>
> On 2011-10-27 07:34 , Mike Kelly wrote:
>> If servers were completely unaware of intermediaries what would be the
>> purpose of the s-maxage cache-control directive?
>
> i hope i did not sound as if servers were not aware of the fact that there
> can be caches. of course they are, and that's the reason why serving things
> correctly is so important. but apart from the one scenario you're mentioning
> (origin server and tightly coupled reverse proxy), servers have no way to
> tell if there are any intermediaries in the chain and where they might be.
> all they can do is rely on the fact that if there are any, they have to play
> by the rules.
>
> http://tools.ietf.org/html/draft-nottingham-linked-cache-inv-00 works around
> this by assuming that origin server and cache are tightly coupled. since it
> adds to HTTP, you cannot rely on it unless you can guarantee that all
> intermediaries understand it. which is close to impossible outside of closed
> environments, but a valid assumption in a controlled setting.
>
> cheers,
>
> dret.
>
> --
> erik wilde | mailto:dret@... - tel:+1-510-2061079 |
> | UC Berkeley - School of Information (ISchool) |
> | http://dret.net/netdret http://twitter.com/dret |
On Thu, Oct 27, 2011 at 4:23 PM, Paul Cohen <pacoispaco@...> wrote:
> **
>
> guess an orthodox restafarian would make no assumptions at all and
> simply treat "/books" and "/books/" as two completely different
> resources which just happen to have identifiers that differ in a
> single character.
>
>
Yes ;-) due to the fact that an origin server might actually do what you
suggest, namely to return two different resources ("book summary" and "list
of books")...
Remember that relative links resolve differently from the two URIs...
--
-mogsie-
This is important enough that I added a section about it (3.3.2) to the Shoji protocol spec [1]. In my experience, trailing slashes usually lead to a better outcome because of relative referencing; it just looks better to write href='foo/' than href='base/foo' everywhere.

Robert Brewer
fumanchu@...

[1] http://www.aminus.org/rbre/shoji/shoji-draft-02.txt

> -----Original Message-----
> From: rest-discuss@yahoogroups.com [mailto:rest-discuss@yahoogroups.com] On Behalf Of Paul Cohen
> Sent: Thursday, October 27, 2011 7:24 AM
> To: rest-discuss@yahoogroups.com
> Subject: [rest-discuss] URL:s and trailing slashes
>
> Hi everyone,
>
> I wonder which, if any, best practices there are regarding the use of
> trailing slashes in resource URLs in REST services? I realize, and
> basically agree, that URLs should be opaque. But the question of how
> trailing slashes can/should be handled (or not) could make a REST
> service easier to use for a developer. It also has consequences for
> how a client can follow (relative) links.
>
> One could interpret:
>
> GET /books
>
> as getting a resource representing all books; basically a summary of
> the "books" resource, and:
>
> GET /books/
>
> as getting a list of all resources "contained" in /books, i.e.
> sub-resources, in the form of a list of URLs to all individual book
> resources.
>
> But it feels like introducing out-of-band knowledge/conventions. I
> guess an orthodox restafarian would make no assumptions at all and
> simply treat "/books" and "/books/" as two completely different
> resources which just happen to have identifiers that differ in a
> single character.
>
> Comments?
>
> /Paul
>
> --
> Paul Cohen
> www.seibostudios.se
> mobile: +46 730 787 035
> e-mail: paul.cohen@...
On 2011-10-27 08:57 , Robert Brewer wrote:
> This is important enough that I added a section about it (3.3.2) to the
> Shoji protocol spec [1]. In my experience, trailing slashes usually lead
> to a better outcome because of relative referencing; it just looks
> better to write href='foo/' than href='base/foo' everywhere.
if you remember to always use "./foo", then you don't have to depend on
the difference between "base" and "base/" URIs, and it nicely represents
the fact that you're doing something that's context-dependent. serving
different content at those two URIs probably deserves to be an
anti-pattern. i usually recommend that people set up servers to accept
both (be liberal in what you accept), but to redirect to whatever they
prefer as the canonical variant (be conservative in what you do).
cheers,
dret.
--
erik wilde | mailto:dret@... - tel:+1-510-2061079 |
| UC Berkeley - School of Information (ISchool) |
| http://dret.net/netdret http://twitter.com/dret |
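The accept-both-then-redirect setup Erik recommends can be sketched in a few lines of WSGI middleware. This is only a sketch; the paths, the `canonical_slash_middleware` helper, and the toy `books_app` are hypothetical, not from any message in this thread:

```python
# Sketch: accept both "/books" and "/books/", but 301-redirect the
# non-canonical spelling to the canonical one (paths are hypothetical).

def canonical_slash_middleware(app, canonical_paths):
    """WSGI middleware: redirect /foo to /foo/ when /foo/ is canonical."""
    def wrapped(environ, start_response):
        path = environ.get("PATH_INFO", "")
        if path + "/" in canonical_paths:
            start_response("301 Moved Permanently",
                           [("Location", path + "/"),
                            ("Content-Length", "0")])
            return [b""]
        return app(environ, start_response)
    return wrapped

def books_app(environ, start_response):
    # The canonical resource, free to use slash-relative links like './1'
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<ul><li><a href='./1'>book 1</a></li></ul>"]

app = canonical_slash_middleware(books_app, {"/books/"})
```

A request for /books gets a 301 to /books/, while /books/ is served directly, so clients and caches converge on one URI.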
> Erik Wilde wrote:
> On 2011-10-27 08:57 , Robert Brewer wrote:
> > This is important enough that I added a section about it (3.3.2) to
> the
> > Shoji protocol spec [1]. In my experience, trailing slashes usually
> lead
> > to a better outcome because of relative referencing; it just looks
> > better to write href='foo/' than href='base/foo' everywhere.
>
> if you remember to always use "./foo", then you don't have to depend on
> the difference between "base" and "base/" URIs, and it nicely
> represents
> the fact that you're doing something that's context-dependent. serving
> different content at those two URIs probably deserves to be called an
> anti-pattern. i usually recommend that people set up servers to accept
> both (be liberal in what you accept), but to redirect to whatever they
> prefer as the canonical variant (be conservative in what you do).
I'm not following you:
'/base' + './foo' = '/foo'
'/base/' + './foo' = '/base/foo'
Why don't I have to worry about that difference?
Agreed on the canonical redirect approach.
Robert Brewer
fumanchu@aminus.org
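Robert's resolution arithmetic is easy to check against RFC 3986 as implemented by Python's standard library (example.com is just a stand-in authority):

```python
# RFC 3986 relative resolution: "./foo" resolves differently against
# "base" vs "base/", and identically to a bare "foo" in both cases.
from urllib.parse import urljoin

print(urljoin("http://example.com/base", "./foo"))   # http://example.com/foo
print(urljoin("http://example.com/base/", "./foo"))  # http://example.com/base/foo
print(urljoin("http://example.com/base", "foo"))     # http://example.com/foo
print(urljoin("http://example.com/base/", "foo"))    # http://example.com/base/foo
```

So writing "./foo" instead of "foo" doesn't by itself remove the dependence on the trailing slash, which is the point of Robert's question.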
This would be very useful in some scenarios for private caches. The painful use case in my experience has less to do with federated caches and more to do with a particular client (usu. a human) doing something and then being confused when other things don't reflect that change immediately.
In my example (I PUT to /articles/1 and I'd like to say in the response that the cached versions of /articles and /articles/1/comments are not fresh) there are a few ways of handling it:
1) Just don't allow caching /articles and /articles/1/comments
2) In out-of-band documentation, indicate that PUT, POST or DELETE to /articles/{id} will make stale (semantically) /articles and /articles/{id}/comments and the client should force revalidation on these manually to get the correct version.
3) Use the Link Cache Invalidation mechanism to indicate to the private cache that /articles and /articles/1/comments should be invalidated.
Since the private cache is implemented by a browser or a library (ostensibly) known to the client, the client can know whether its private cache understands the Link Cache Invalidation mechanism. If, in the future, the Link Cache Invalidation mechanism is accepted and adopted, you could start to supplement option 2 (client responsibility) with option 3 (cache manager responsibility), although you couldn't replace 2 with 3 entirely. The client could tell whether it needed to take responsibility for it or not. Even if the client had to take responsibility, the documentation could refer to the Link Cache Invalidation documentation and the client could use that to implement option 2.
Right?
On Oct 27, 2011, at 8:24 AM, Mike Kelly wrote:
> Fwiw, if the mechanism was adopted by browsers in their private
> caches; you could also rely on invalidation, to a lesser extent, for
> invalidating privately cached resources on the browser too. It's not a
> silver bullet, but could potentially allow you some more breathing
> room on your expiration lengths.
>
>
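If I'm reading the Linked Cache Invalidation draft correctly, option 3 would put something like this on the wire: the response to the PUT carries Link headers with the "invalidates" relation that the draft defines (URIs taken from the example above):

```
HTTP/1.1 200 OK
Content-Length: 0
Link: </articles>; rel="invalidates"
Link: </articles/1/comments>; rel="invalidates"
```

A cache that understands the relation would then mark its stored copies of those two URIs stale; caches that don't simply ignore the headers.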
On Oct 11, 2011, at 9:40 AM, Jan Algermissen wrote:
> Hi,
>
> how do I decide whether a URI is bookmarkable or not?
HA! It has been in the archives all along (thanks, Roy :-)
"If the application is
designed correctly, then the only times that the user agent
will pause long enough to make a bookmark will be at one of
the application steady states, which should correspond to one
of the cool URIs. In other words, a RESTful architecture will
expose the cool URIs (and only the cool URIs) to the user."
http://tech.groups.yahoo.com/group/rest-discuss/message/13606
Jan
>
> ('Bookmarkable' meaning: 'Being an entry point into an application that is worth remembering')
>
> Some things to consider:
>
> There is a difference between the stability of a URI (whether a client can assume a URI will
> be dereferencable in the future) and the suitability of a URI to act as an application entry point.
> For example, I'd assume HTML style sheet URIs to be pretty stable but they are not useful
> application entry points.
>
> Should a user agent remember as many URIs as possible, thereby increasing the amount of
> known application entry points and possibly avoiding re-doing certain steps through the
> application in the future (something we do all the time when bookmarking e.g. page 4 of
> a search result)?
>
> o All URIs I find in responses from a server in a link context are bookmarkable
> ('Link context' meaning Atom <link> elements, HTML <a> elements, Link headers,
> HTML GET-forms, etc.)
>
> o Not all URIs I find in responses from a server are bookmarkable. For example,
>
> - a URI I find in an HTML <form> element with action 'POST' is not
> - a URI I find in an Opensearch <Url> element is not
> - a URI I find in an HTML <style> element is not
>
> o What about
>
> - a URI I find in an HTML <img> element
> - a URI I find in an AtomPub <collection> element
> - a URI I find in HTTP headers such as Location, Content-Location, Alternates
> - AtomPub's edit-media links?
> - Atom <content src=""> references
>
> Does the cachability of a response affect these issues?
>
> In general, I am trying to answer the question:
>
> What are the indicators in media type (and link relation) specifications that tell
> the user agent implementor what URIs in responses of the media type in question
> can be considered bookmarkable?
>
> JAn
>
>
Hi everybody,

I am one of the people who developed XWiki's REST API; XWiki (http://www.xwiki.org) is an open-source enterprise wiki. We tried to engineer it as much as possible by following the principles and constraints of the REST architectural style.

I would be glad to hear from the experts what you think about it and, above all, what in your opinion could be improved.

You can find an overview at the following address: http://platform.xwiki.org/xwiki/bin/view/Features/XWikiRESTfulAPI

Thank you,
Fabio
On Oct 31, 2011, at 11:35 AM, Fabio Mancinelli wrote: > Hi everybody, > > I am one of the persons who developed the XWiki's REST API, an > Opensource enterprise wiki (http://www.xwiki.org). > We tried to engineer it as much as possible by following the principles > and constraints of the REST architectural style. You surely used AtomPub, yes? It was designed for such use cases and is pretty RESTful. Jan > > I would be glad to hear from the experts what do you think about it > and, above all, what in your opinion could be improved. > > You can find an overview at the following address: > http://platform.xwiki.org/xwiki/bin/view/Features/XWikiRESTfulAPI > > Thank you, > Fabio >
On Mon, Oct 31, 2011 at 11:58 AM, Jan Algermissen <jan.algermissen@nordsc.com> wrote: > > On Oct 31, 2011, at 11:35 AM, Fabio Mancinelli wrote: > >> Hi everybody, >> >> I am one of the persons who developed the XWiki's REST API, an >> Opensource enterprise wiki (http://www.xwiki.org). >> We tried to engineer it as much as possible by following the principles >> and constraints of the REST architectural style. > > You surely used AtomPub, yes? It was designed for such use cases and is pretty RESTful. > > Jan Well... actually no :) But indeed it's a good way for handling collections. We'll try to add it in the next evolution of the API. Thanks for the hint. -Fabio > > > >> >> I would be glad to hear from the experts what do you think about it >> and, above all, what in your opinion could be improved. >> >> You can find an overview at the following address: >> http://platform.xwiki.org/xwiki/bin/view/Features/XWikiRESTfulAPI >> >> Thank you, >> Fabio >> > >
Hello,

I have a real-life web service I can't see how to make an effective RESTful API for.

We have a web application for system administrators that helps manage and monitor a possibly very large number of computer machines in an organization (say 100k).

Each computer registered in the system has an identifier, and I can get details about a single computer, like:

GET /computers/1/temperature

would return the CPU temperature of the computer with ID 1.

Now, I need to have a mechanism to get the same information for an arbitrarily large set of computers. So, I believe the most natural solution would be to encode the resource scoping information in the URI like this:

GET /computers/1+2+3+4+5+6/temperature

but of course this doesn't scale beyond a certain limit (the URI gets too large and hits the limits of the most popular HTTP servers).

The alternative would be to first save your subset of computer IDs on the server by creating a new temporary resource with a PUT or a POST and then reference it in a GET. However this looks cumbersome, and leads to 2 requests when 1 would do.

Plus, if the service was meant to be read-only it wouldn't even be possible.

What would be a possible RESTful approach here?

Thanks,

Free Ekanayaka

PS: it seems that other people have already raised this issue; for example, the real-life problem I described is a variation of the sample problem described here:

http://blog.labix.org/2009/07/23/accessing-restful-information-efficiently
The most common way to solve this problem for HTTP is to POST the query details to the server. You can also optimize this pattern (for shared caching and future use) by storing the POSTed query details as a resource and allowing clients to then simply GET this query resource in the future. Subbu Allamaraju's "RESTful Web Services Cookbook" covers these patterns quite well (check out sections 8.3 and 8.4 in his book for details)[1] [1] http://my.safaribooksonline.com/book/web-development/web-services/9780596809140/queries/recipe-how-to-support-queries-with-large-inputs mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Mon, Oct 31, 2011 at 13:57, Free Ekanayaka <free.ekanayaka@...> wrote: > Hello, > > I have a real-life web service I can't see how to make an effective > RESTful API for. > > We have a web application for system administrators that helps managing > and monitoring a possibly very large number of computer machines in an > organization (say 100k). > > Each computer registered in the system has an identifier, and I can get > details about a single computer, like: > > GET /computers/1/temperature > > would return the CPU temperature of the computer with ID 1. > > Now, I need to have a mechanism to get the same information for an > arbitrary large set of computers. So, I believe the most natural > solution would be to encode the resource scoping information in the URI > like this: > > GET /computers/1+2+3+4+5+6/temperature > > but of course this doesn't scale beyond a certain limit (the URI gets to > large and hits the limits of the most popular HTTP servers). > > The alternative would be to first save your subset of computer IDs on > the server by creating a new temporary resource with a PUT or a POST and > then reference it in a GET. However this looks cumbersome, and leads to > 2 requests when 1 would do it. > > Plus, if the service was meant to be read-only it wouldn't even be > possible. 
> > What would be a possible RESTful approach here? > > Thanks, > > Free Ekanayaka > > PS: it seems that other people already have raised this issue, for > example the real-life problem I described is a variation of the sample > problem described here: > > http://blog.labix.org/2009/07/23/accessing-restful-information-efficiently >
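One sketch of making the store-the-query-as-a-resource pattern cache-friendly, as mca describes: name the query resource after a canonical form of its contents, so two clients POSTing the same ID set end up GETting the same (shared-cacheable) URI. The /computers/queries/ path and the helper are hypothetical, not from Subbu's book:

```python
# Derive a stable URI for a stored query resource from its contents,
# so identical ID sets always map to the same query resource.
import hashlib
import json

def query_resource_uri(computer_ids):
    # Canonicalize: dedupe and sort, so ordering doesn't change the name
    canonical = json.dumps(sorted(set(computer_ids)))
    digest = hashlib.sha256(canonical.encode("ascii")).hexdigest()[:16]
    return "/computers/queries/" + digest

# The same set names the same resource, however it is ordered:
print(query_resource_uri([3, 1, 2]) == query_resource_uri([2, 3, 1, 1]))  # True
```

The server would answer the POST with 201 and a Location of this URI; subsequent GETs of /computers/queries/{digest}/temperature can then be cached like any other safe request.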
On Oct 31, 2011, at 6:57 PM, Free Ekanayaka wrote: > Hello, > > I have a real-life web service I can't see how to make an effective > RESTful API for. > > We have a web application for system administrators that helps managing > and monitoring a possibly very large number of computer machines in an > organization (say 100k). > > Each computer registered in the system has an identifier, and I can get > details about a single computer, like: > > GET /computers/1/temperature > > would return the CPU temperature of the computer with ID 1. > > Now, I need to have a mechanism to get the same information for an > arbitrary large set of computers. So, I believe the most natural > solution would be to encode the resource scoping information in the URI > like this: > > GET /computers/1+2+3+4+5+6/temperature > > but of course this doesn't scale beyond a certain limit (the URI gets to > large and hits the limits of the most popular HTTP servers). > > The alternative would be to first save your subset of computer IDs on > the server by creating a new temporary resource with a PUT or a POST and > then reference it in a GET. However this looks cumbersome, and leads to > 2 requests when 1 would do it. > > Plus, if the service was meant to be read-only it wouldn't even be > possible. > > What would be a possible RESTful approach here? FWIW, I try to leverage domain concepts to define resources that correspond to sets of things (and combine those with query params for refinement). In your case, you might want to look for machines that form sets, e.g. all-load-balancers, or all-nodes-on-floor-14, or whatever makes sense. It is pretty unlikely that you have a requirement to fetch information about a group of things without that group being somehow a domain concept, too. 
Jan > > Thanks, > > Free Ekanayaka > > PS: it seems that other people already have raised this issue, for > example the real-life problem I described is a variation of the sample > problem described here: > > http://blog.labix.org/2009/07/23/accessing-restful-information-efficiently >
Hi,

Context:
********
I'm using XHTML for the representation of some resources, in a RESTful machine-to-machine context. In particular, I'm using it for representing the "entry point" of my system, from where clients can dynamically learn how to construct URLs to other resources and start navigating the resource space. To that end I've been using XHTML <a> and <form> elements as hypermedia controls.

URI templates:
**************
I'm now in possession of a shiny new URI-template library (implementing http://tools.ietf.org/html/draft-gregorio-uritemplate-07) that I plan to embed in some of the clients of my system. These client programs will then be able to easily interpret (i.e., expand) URI templates.

Question:
*********
I'd like to use a URI template as a hypermedia control in XHTML, as a (richer) alternative to my existing forms. What do you think is the best way to do so? Specifically, how would you represent a URI template in XHTML (given it must be used as a hypermedia control for programmatic clients)?

Thanks,
Philippe Mougin
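For readers without such a library handy, the simplest expansion case (bare {var} expressions, what the draft calls simple string expansion) can be sketched in a few lines. This is only a sketch under stated assumptions (variable names restricted to word characters, all variables present); a real implementation covers the draft's operators and edge cases too, and the example paths are hypothetical:

```python
# Minimal simple-string expansion for URI templates: replace each {var}
# with the pct-encoded value, leaving only unreserved characters raw.
import re
from urllib.parse import quote

def expand(template, variables):
    def repl(match):
        value = variables[match.group(1)]
        return quote(str(value), safe="")  # encode everything but unreserved
    return re.sub(r"\{(\w+)\}", repl, template)

print(expand("/item/{kind}/{id}", {"kind": "book", "id": 42}))  # /item/book/42
print(expand("/search/{term}", {"term": "two words"}))          # /search/two%20words
```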
Hi guys,
I'm currently developing an HTTP interface (can't dare to call it
REST) for a few clients and I have a question for y'all.
I have a resource, /video/{id}, which contains metadata of a video (in
Atom) with a few outgoing links to different formats of the original
source file (let's say that I take, in input, an AVI, and produce
mpeg, ogv and so on).
I tried to represent the link to video files with:
https://gist.github.com/1352232
but I really don't know if the "alternate" link semantic is the right
one. Anyone has suggestions?
--
Nadalin Alessandro
www.odino.org
www.twitter.com/_odino_
On Nov 9, 2011, at 6:46 PM, Alessandro Nadalin wrote:
> Hi guys,
>
> I'm currently developing an HTTP interface (can't dare to call it
> REST) for a few clients and I have a question for y'all.
>
> I have a resource, /video/{id}, which contains metadata of a video (in
> Atom) with a few outgoing links to different formats of the original
> source file (let's say that I take, in input, an AVI, and produce
> mpeg, ogv and so on).
>
> I tried to represent the link to video files with:
> https://gist.github.com/1352232
>
> but I really don't know if the "alternate" link semantic is the right
> one. Anyone has suggestions?
I think that makes sense. Personally, I am not a friend of the type parameter (I prefer to let content negotiation handle the selection), but for your use case it seems like a good option.
You might want to look at NewsML 2 one day for inspiration. IIRC it has quite sophisticated metadata for media.
(But it is a huge, sort of complicated spec and format).
http://www.iptc.org/site/News_Exchange_Formats/NewsML-G2/
Jan
>
> --
> Nadalin Alessandro
> www.odino.org
> www.twitter.com/_odino_
>
"Philippe Mougin" wrote: > > Context: > ******** > I'm using XHTML for the representation of some resources, in a > Restful machine-to-machine context. In particular, I'm using it for > representing the "entry point" of my system, from where clients can > dynamically learn how to construct URLs to other resources and start > navigate the resource space. To that end I've been using XHTML <a> > and <form> elements as hypermedia controls. > XForms is also very useful for constructing URLs on the client. > > Question: > ********* > I'd like to use URI template as hypermedia control in XHTML, as a > (richer) alternative to my existing forms. What do you think is the > best way to do so ? Specifically, how would you represent an URI > template in XHTML (given it must be used as a hypermedia control for > programatic clients)? > Consensus will need to be reached, eventually, but I'd suggest that URI templates be integrated into (X)HTML with new attributes -- any existing attribute which takes a URI (@href, @src etc.) may take a suffix of 't' to indicate the presence of an expansion model which yields a URI (@hreft, @srct etc.). The real problem is how to indicate the allowable values for expansions, for the purpose of generating a list of possible state transitions (just like a form). I can imagine doing this in XForms, and having a reusable library. I don't know the nature of the library you mention, but code-on-demand is just as valid for generating a list of links (even as a GET <form>). -Eric
Alessandro Nadalin wrote: > > but I really don't know if the "alternate" link semantic is the right > one. Anyone has suggestions? > Why not? Looks like long-standing best-practice to me. -Eric
On Wed, Nov 9, 2011 at 6:56 PM, Jan Algermissen <jan.algermissen@...> wrote: >> >> but I really don't know if the "alternate" link semantic is the right >> one. Anyone has suggestions? > > I think that makes sense. Personally, I am not a friend of the type parameter (I prefer to let content negotiation handle the selection), but for your use case it seems like a good option. > Uh jan, good point, I forgot to think about negotiation, since I'm used to it with "canonical" types (json/atom/xhtml...) @erik: just wondering :) > > Jan > > > >> >> -- >> Nadalin Alessandro >> www.odino.org >> www.twitter.com/_odino_ >> > > -- Nadalin Alessandro www.odino.org www.twitter.com/_odino_
Jan Algermissen wrote: > > I think that makes sense. Personally, I am not a friend of the type > parameter (I prefer to let content negotiation handle the selection), > but for your use case it seems like a good option. > Upon receiving this representation, the user-agent knows the shared URI it can use for server-driven negotiation; and is also informed of the number, nature and location of variants in order to perform client- driven negotiation (to recover from failure, etc.). I don't see how @type is ever a bad thing (unless it's wrong) -- if a user-agent only knows one of the available types, why incur the added latency of conneg on each request? While conneg is a great general-purpose tool for general-purpose clients, it gets in the way of purpose-built clients getting their work done, so it would be a shame if no @type existed to inform purpose-built clients how to bypass conneg. -Eric
On Wed, Nov 9, 2011 at 6:06 PM, Eric J. Bowman <eric@...>wrote:
> "Philippe Mougin" wrote:
> >
> > Question:
> > *********
> > I'd like to use URI template as hypermedia control in XHTML, as a
> > (richer) alternative to my existing forms. What do you think is the
> > best way to do so ? Specifically, how would you represent an URI
> > template in XHTML (given it must be used as a hypermedia control for
> > programatic clients)?
> >
>
> Consensus will need to be reached, eventually, but I'd suggest that URI
> templates be integrated into (X)HTML with new attributes -- any existing
> attribute which takes a URI (@href, @src etc.) may take a suffix of 't'
> to indicate the presence of an expansion model which yields a URI
> (@hreft, @srct etc.).
>
> The real problem is how to indicate the allowable values for expansions,
> for the purpose of generating a list of possible state transitions (just
> like a form). I can imagine doing this in XForms, and having a re-
> usable library. I don't know the nature of the library you mention, but
> code-on-demand is just as valid for generating a list of links (even as
> a GET <form>).
>
>
There's already an adequate solution to that problem - you can define the
inputs up-front against the link relation in question.
e.g. the link relation 'circle' has an href containing a URI template which
accepts the variable 'radius'
which tells you what you need to know to build an automated client which
follows this link:
<a rel="circle" href="/circle/radius;{radius}" />
Cheers,
Mike
Mike Kelly wrote:
>
> There's already an adequate solution to that problem - you can define
> the inputs up-front against the link relation in question.
>
I don't see invalid markup as an adequate solution -- @href takes a
URI, not a template; section 1.4 of the URI-template draft states that
URI templates aren't URIs. Instead of having to parse @href before
determining how to parse @href (is it a URI or a template), it makes
sense to follow the logic in the URI-template draft and provide new
attributes to match a new parsing model (or elements, in the case of
XForms elements which take URIs as content).
>
> e.g. the link relation 'circle' has an href containing a URI template
> which accepts the variable 'radius'
>
> which tells you what you need to know to build an automated client
> which follows this link:
>
> <a rel="circle" href="/circle/radius;{radius}" />
>
How? Does @href take on a different syntax due to the definition of
rel='circle'? Where is *that* in the media type definition for XHTML?
-Eric
On Wed, Nov 9, 2011 at 7:40 PM, Eric J. Bowman <eric@...> wrote:
>
> Mike Kelly wrote:
> >
> > There's already an adequate solution to that problem - you can define
> > the inputs up-front against the link relation in question.
> >
>
> I don't see invalid markup as an adequate solution -- @href takes a
> URI, not a template; section 1.4 of the URI-template draft states that
> URI templates aren't URIs. Instead of having to parse @href before
> determining how to parse @href (is it a URI or a template), it makes
> sense to follow the logic in the URI-template draft and provide new
> attributes to match a new parsing model (or elements, in the case of
> XForms elements which take URIs as content).
>
> >
> > e.g. the link relation 'circle' has an href containing a URI template
> > which accepts the variable 'radius'
> >
> > which tells you what you need to know to build an automated client
> > which follows this link:
> >
> > <a rel="circle" href="/circle/radius;{radius}" />
> >
>
> How? Does @href take on a different syntax due to the definition of
> rel='circle'? Where is *that* in the media type definition for XHTML?
>
You're focusing on a side issue. Let's change it to this for now:
<a rel="circle" hreft="/circle/radius;{radius}" />
Is that (and the description of the rel circle) not an adequate
solution to the "real problem" you mentioned before?
Cheers,
Mike
hello. On 2011-11-09 10:16 , Alessandro Nadalin wrote: > On Wed, Nov 9, 2011 at 6:56 PM, Jan Algermissen > <jan.algermissen@...> wrote: >>> but I really don't know if the "alternate" link semantic is the right >>> one. Anyone has suggestions? >> I think that makes sense. Personally, I am not a friend of the type parameter (I prefer to let content negotiation handle the selection), but for your use case it seems like a good option. > Uh jan, good point, I forgot to think about negotiation, since I'm > used to it with "canonical" types (json/atom/xhtml...) > @erik: just wondering :) agreeing with eric here: it's nice to have server-driven content negotiation and have "gateway resources" supporting it, but it's also nice to have stable and exposed URIs for specific renditions, and as long as everything is correctly labeled and interlinked, it's nicely RESTful and allows clients some choice. cheers, dret.
Mike Kelly wrote:
>
> You're focusing on a side issue. Let's change it to this for now:
>
> <a rel="circle" hreft="/circle/radius;{radius}" />
>
> Is that (and the description of the rel circle) not an adequate
> solution to the "real problem" you mentioned before?
>
No, it is not adequate. The definition of rel='circle' still doesn't
tell me what the range of allowable values is for the application
domain, or what units are used. An XForms slider control can delimit
the allowable values and declare them to be pixels, resulting in a
self-documenting API instead of an endpoint dependent upon out-of-band
knowledge.
-Eric
On Thu, Nov 10, 2011 at 12:19 AM, Eric J. Bowman <eric@...> wrote:
> Mike Kelly wrote:
>>
>> You're focusing on a side issue. Let's change it to this for now:
>>
>> <a rel="circle" hreft="/circle/radius;{radius}" />
>>
>> Is that (and the description of the rel circle) not an adequate
>> solution to the "real problem" you mentioned before?
>>
>
> No, it is not adequate. The definition of rel='circle' still doesn't
> tell me what the range of allowable values is for the application
> domain, or what units are used.
Yes, it's an incomplete example. The definition could be extended to
include that information, right?
> An XForms slider control can delimit
> the allowable values and declare them to be pixels, resulting in a
> self-documenting API instead of an endpoint dependent upon out-of-band
> knowledge.
This would be a self-documenting API to you, but not to the automated
(m2m) client you build against it. An automated client is dependent on
the knowledge you instill it with up front which, from its point of
view when interacting with the application, is out-of-band.
I'm not clear on what you mean by 'endpoint' here, either.
Cheers,
Mike
Mike Kelly wrote:
>
> >
> > No, it is not adequate. The definition of rel='circle' still
> > doesn't tell me what the range of allowable values is for the
> > application domain, or what units are used.
>
> Yes, it's an incomplete example. The definition could be extended to
> include that information, right?
>
No, not if we're talking about REST's uniform interface. A generic
description of rel='circle' could conceivably require a positive
integer, but individual application domains must still have a mechanism
to delimit the value, i.e. declare a range of 4 - 50. Declaring that
range in the definition of the link relation limits the re-use of that
link relation in other application domains, since it's application-
specific. Such application specifics belong in hypertext, not link-
relation definitions (which should strive to be generic).
>
> > An XForms slider control can delimit
> > the allowable values and declare them to be pixels, resulting in a
> > self-documenting API instead of an endpoint dependent upon
> > out-of-band knowledge.
>
> This would be a self-documenting API to you, but not to the automated
> (m2m) client you build against it.
>
It would be a self-documenting API for any user-agent which understands
the slider form control, I fail to see what bearing the nature of the
user (machine or human) has on it.
>
> An automated client is dependent on the knowledge you instill it with
> up front which, from its point of view when interacting with the
> application, is out-of-band.
>
You lost me there. REST is the antithesis of hard-coding such
information into the client code. If the range of 4 - 50 is baked into
the client code, the client can't adapt if the service modifies the
allowable range. If, however, this range is presented in a hypertext
control, updating the service range from 4 - 50 to 4 - 25 won't break
existing clients because this knowledge is presented in-band.
>
> I'm not clear on what you mean by 'endpoint' here, either.
>
Instead of the hypertext providing the user-agent with instructions as
to the allowable state transitions (links), your example provides an
endpoint where undefined values derived through out-of-band knowledge
may be used to expand {radius}. That's quite different from using
hypertext to drive application state, so I'm calling it an endpoint.
-Eric
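For concreteness, the in-band constraint Eric describes might look something like this XForms fragment (a sketch only; the xf prefix binding and the surrounding model and submission are assumed):

```xml
<!-- A range control that delimits the allowable radius values (4-50)
     in the representation itself, instead of a bare {radius} template -->
<xf:range ref="radius" start="4" end="50" step="1">
  <xf:label>Radius (pixels)</xf:label>
</xf:range>
```

A client that understands XForms learns the legal values from the control itself, so if the service later narrows the range to 4-25, existing clients pick that up from the next representation rather than breaking.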
Personally I'm a fan of the MediaRSS extension for image and video content.
Basically it's a "beefed-up" link.
http://www.rssboard.org/media-rss
<media:content medium="video" url="..." width="" height="" />
Of course, your rel="alternate" also works.
Maybe even rel="enclosure" would be a suitable rel.
--
Erlend
On Wed, Nov 9, 2011 at 6:46 PM, Alessandro Nadalin <
alessandro.nadalin@...> wrote:
>
>
>
> Hi guys,
>
> I'm currently developing an HTTP interface (can't dare to call it
> REST) for a few clients and I have a question for y'all.
>
> I have a resource, /video/{id}, which contains metadata of a video (in
> Atom) with a few outgoing links to different formats of the original
> source file (let's say that I take, in input, an AVI, and produce
> mpeg, ogv and so on).
>
> I tried to represent the link to video files with:
> https://gist.github.com/1352232
>
> but I really don't know if the "alternate" link semantic is the right
> one. Anyone has suggestions?
>
> --
> Nadalin Alessandro
> www.odino.org
> www.twitter.com/_odino_
>
>
On 09-11-11 17:39, Philippe Mougin wrote:
>
> URI templates:
> **************
> I'm now in possession of a shinny new URI-template library (implementing
> http://tools.ietf.org/html/draft-gregorio-uritemplate-07) that I plan to
care to share which one?
I've been working on a js-jquery implementation at
https://github.com/marc-portier/uri-templates/ myself
> embed in some of the clients of my system. These client programs will
> then be able to easily interpret (i.e., expand) URI templates.
>
> Question:
> *********
> I'd like to use URI template as hypermedia control in XHTML, as a
> (richer) alternative to my existing forms. What do you think is the best
> way to do so ? Specifically, how would you represent an URI template in
> XHTML (given it must be used as a hypermedia control for programatic
> clients)?
>
good question, and an interesting field for further 'standardisation'
IMHO... some first thoughts:
URI templates are macros producing actual URIs for individual 'contexts'
Below I assume you are thinking about in-browser (js) code that will be
expanding the templates, right? If not: server-side expansion just
yields URIs that can be injected into the HTML, so nothing special is needed.
Anyway, in terms of HTML I think the lists of contexts could be
provided by any repeating structure like ul/li, table/tr, p
in a json-wrapped way:
<li data-context='{"id": "3367-29283-2484", "type": "item", "field":
"whatever"}'> label </li>
or a more distributed way:
<tr data-id="3367-29283-2484"><td>label</td><td
data-context-name="type">item</td><td
data-context-name="field">whatever</td></tr>
(or even think microformats, and use class attributes rather than data-x)
What remains is the need to know for each of these contexts in which
template they should be injected (to achieve what result)
So on the surrounding <ul> or <table> you could envision some indication
of those:
<ul data-uri-t="item-ref, item-form" > stating that these templates are
useful on the items nested in the ul
All that then makes sense inside some browser state (HTML-provided, HTTP
header-provided, ajax-loaded or whatever) where item-ref and item-form
are known relations for 'links', something like the equivalent of
<link rel="item-ref" href="item/{type}/{id}" /><!-- GET PUT DELETE -->
<link rel="item-form" href="forms/{type}{?id}" /><!-- GET POST -->
with a self-describing pure data structure like this, one could think
about jquery code that enhances the structure with onclick events or
even adds clickable icons to follow the URIs associated with the
various relations
as said, just some first thoughts
-marc=
> Thanks,
> Philippe Mougin
>
>
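Marc's sketch above (harvest a context from data-* attributes, inject it into a template named by a known rel) can be illustrated with a minimal, hedged Python sketch. The `expand` helper and the example context are illustrative assumptions; real URI templates (the gregorio draft) define many more operators than plain `{var}`:

```python
import re

def expand(template, context):
    """Naive expansion of plain {var} expressions only; the
    URI-template draft also defines operators like {?var} and {+var}."""
    return re.sub(r"\{(\w+)\}",
                  lambda m: str(context[m.group(1)]), template)

# A context as it might be scraped from a data-context attribute:
context = {"id": "3367-29283-2484", "type": "item"}
print(expand("item/{type}/{id}", context))  # item/item/3367-29283-2484
```

Client code following the item-ref relation would run this once per repeating element, using that element's own context.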
On Thu, Nov 10, 2011 at 3:02 AM, Eric J. Bowman <eric@...> wrote:
> Mike Kelly wrote:
>>
>> > No, it is not adequate. The definition of rel='circle' still
>> > doesn't tell me what the range of allowable values is for the
>> > application domain, or what units are used.
>>
>> Yes, it's an incomplete example. The definition could be extended to
>> include that information, right?
>>
> No, not if we're talking about REST's uniform interface. A generic
> description of rel='circle' could conceivably require a positive
> integer, but individual application domains must still have a mechanism
> to delimit the value, i.e. declare a range of 4 - 50.

Those application domains can extend and/or over-ride the description
to suit their specific requirements.

> Declaring that range in the definition of the link relation limits
> the re-use of that link relation in other application domains, since
> it's application-specific.

Right, a system-wide description of a rel will be more generic than an
application-specific one; extension is re-use.

> Such application specifics belong in hypertext, not link-relation
> definitions (which should strive to be generic).

I think this might be based on unexamined suppositions.

>> > An XForms slider control can delimit the allowable values and
>> > declare them to be pixels, resulting in a self-documenting API
>> > instead of an endpoint dependent upon out-of-band knowledge.
>>
>> This would be a self-documenting API to you, but not to the automated
>> (m2m) client you build against it.
>>
> It would be a self-documenting API for any user-agent which understands
> the slider form control, I fail to see what bearing the nature of the
> user (machine or human) has on it.

To me, self-documenting means clients interact with the application by
intuitively comprehending the semantics and controls they are presented
with as they are proceeding through the application.
So the question is: are machine clients capable of a level of intuitive
comprehension that makes these more complex form controls worth
pursuing? I'm not sure they are. The currently available writing on
machine-based form consumption seems to focus on the theoretical
benefits without actually demonstrating the extent of change they can
enable, in practice, over the out-of-band approach. The question is not
whether machines can or can't interact with form-like controls (they
can), it's whether doing so is worth the effort or is prohibitively
complex.

I guess keeping consumption of your application down by making it both
theoretically 'pure' and prohibitively complex is one way of dealing
with a scalability challenge.

>> An automated client is dependent on the knowledge you instill it with
>> up front which, from its point of view when interacting with the
>> application, is out-of-band.
>>
> You lost me there. REST is the antithesis of hard-coding such
> information into the client code.

Which part of the dissertation are you taking this from?

> If the range of 4 - 50 is baked into the client code, the client
> can't adapt if the service modifies the allowable range. If, however,
> this range is presented in a hypertext control, updating the service
> range from 4 - 50 to 4 - 25 won't break existing clients because this
> knowledge is presented in-band.

This is the wrong way of thinking about this problem. In either case,
the automated client will be fed an objective which includes a
now-out-of-range value. The only change to the client behaviour you are
making is that the client can use the control to pre-empt a 4xx
response and avoid making the request in the first place. In both cases
the automated client will still have to take some pre-established
action hard-coded into it. That is the nature of automated things.

Cheers,
Mike
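The in-band-range argument in the exchange above can be sketched in a few lines. Assuming a client that reads min/max constraints from a slider-style control advertised in the representation (the `control` dict below is hypothetical, not any real XForms serialization):

```python
def in_range(value, control):
    """Validate a value against constraints advertised in-band by a
    hypothetical slider-style form control."""
    return control["min"] <= value <= control["max"]

# If the service narrows the range from 4-50 to 4-25, a client that
# reads the control adapts; one with the range hard-coded does not.
control = {"name": "radius", "min": 4, "max": 25}
print(in_range(30, control))  # False
```

As Mike notes, this only lets the client pre-empt a 4xx response; what it does with an out-of-range objective still has to be decided up front.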
We're building what will hopefully be a fairly RESTful API with
hyperlinked resources and collection resources representing one-to-many
relationships.

The collection resources always contain canonical URIs for each
element, but what else? Often authors of client apps would also like
names, so the members of the collection can be presented in a list to
the user and selected from. Then portraits of the members are
requested, then arbitrary data from the full representation is
requested in the collection so that N hundred HTTP requests don't have
to be made for a particular feature.

Is there any good research trading off the chattiness, cachability,
latency and redundancy of denormalized data in RESTful collection
resources? Are there good rules of thumb to apply here? It's tempting
to say do the N hundred HTTP requests and come back when you can show
it's a problem, but that doesn't go down well...

Thanks,

Jim
Hi Jim,

On Nov 10, 2011, at 2:22 PM, Jim Purbrick wrote:
> We're building what will hopefully be a fairly RESTful API with
> hyperlinked resources and collection resources representing one to
> many relationships.
>
> The collection resources always contain canonical URIs for each
> element, but what else?
>
> Often authors of client apps would also like names, so the members of
> the collection can be presented in a list to the user and selected
> from. Then portraits of the members are requested, then arbitrary data
> from the full representation is requested in the collection so that N
> hundred HTTP requests don't have to be made for a particular feature.
>
> Is there any good research trading off the chattiness, cachability,
> latency and redundancy of denormalized data in RESTful collection
> resources? Are there good rules of thumb to apply here. It's tempting
> to say do the N hundred HTTP requests and come back when you can show
> it's a problem, but that doesn't go down well...
>

I think your use case description is a little, umm, dense. Can you
illustrate what you are doing with example interactions?

Jan

> Thanks,
>
> Jim
>
A media type like HAL[1] is designed for linking to and embedding
resources via hypertext. It doesn't force you to model everything as a
collection but you can definitely use it for that purpose.

Cacheability will always take a hit when you introduce composite
resources because their volatility is likely to be higher. You can
mitigate these effects (at least for reverse proxy caches on the server
side) via mechanisms like Linked Cache Invalidation[2] and Edge Side
Includes[3].

Cheers,
Mike

[1] http://stateless.co/hal_specification.html
[2] http://tools.ietf.org/html/draft-nottingham-linked-cache-inv-00
[3] http://en.wikipedia.org/wiki/Edge_Side_Includes

On Thu, Nov 10, 2011 at 1:22 PM, Jim Purbrick <jimpurbrick@...> wrote:
> We're building what will hopefully be a fairly RESTful API with
> hyperlinked resources and collection resources representing one to
> many relationships.
>
> The collection resources always contain canonical URIs for each
> element, but what else?
>
> Often authors of client apps would also like names, so the members of
> the collection can be presented in a list to the user and selected
> from. Then portraits of the members are requested, then arbitrary data
> from the full representation is requested in the collection so that N
> hundred HTTP requests don't have to be made for a particular feature.
>
> Is there any good research trading off the chattiness, cachability,
> latency and redundancy of denormalized data in RESTful collection
> resources? Are there good rules of thumb to apply here. It's tempting
> to say do the N hundred HTTP requests and come back when you can show
> it's a problem, but that doesn't go down well...
>
> Thanks,
>
> Jim
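A hedged sketch of what Jim's collection might look like in HAL: canonical self links for each member, plus a few denormalized fields (names here) embedded so clients don't need N hundred follow-up requests. The field names and URIs below are invented for illustration, not taken from any real API:

```python
import json

# Sketch of a HAL-style collection: each embedded member carries its
# canonical self link, a portrait link, and a denormalized "name".
collection = {
    "_links": {"self": {"href": "/people"}},
    "_embedded": {
        "person": [
            {"_links": {"self": {"href": "/people/1"},
                        "portrait": {"href": "/people/1/portrait"}},
             "name": "Alice"},
            {"_links": {"self": {"href": "/people/2"},
                        "portrait": {"href": "/people/2/portrait"}},
             "name": "Bob"},
        ]
    },
}
print(json.dumps(collection, indent=2))
```

Clients that only need the list render it from `_embedded`; clients that need more follow each member's self link.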
Jim:

I don't have any research results (interesting area....), but will pass
along my own experience and personal preferences in case they give you
some helpful ideas.

In cases where some type of "composite" view is needed by clients, I
prefer to do this work on the server and present a single "resource" to
clients (long lists would support paging, filtering, etc.). By doing
the "mashup" on the server, there are more opportunities to optimize
the experience in the future (the server can change storage models,
object models, re-arrange code, move operations to other servers, etc.
all w/o adversely affecting the client).

Also, by setting up an expectation that clients will "get what they
need" in a single call, you can lead server implementations down the
path of publicizing a resource model that reflects the actual
domain-specific needs of the client-server interaction instead of
publicizing a resource model based on the server-side data storage or
coding object models. This does a better job of separating concerns,
too.

Finally, since the HTTP protocol has a rich set of caching controls,
much of the "cost" of chunky messages (and the effort to compose them
on the server) can be mitigated w/ cache directives sent along with the
response. Even composite resources that experience heavy editing will
do well in this cache/chunky environment by adding etags to the cache
controls.

On a related note, Jon Moore's 2010 presentation at Oredev[1] shows an
approach that allows servers to implement resource messages that can be
either chunky or chatty and allows clients to sort details out on the
fly. A rather interesting approach since it allows implementations to
safely "experiment" w/ optimizing the interactions "in real time."

Hope this helps.
MCA

[1] http://oredev.org/2010/sessions/hypermedia-apis

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me

On Thu, Nov 10, 2011 at 08:22, Jim Purbrick <jimpurbrick@...> wrote:
> We're building what will hopefully be a fairly RESTful API with
> hyperlinked resources and collection resources representing one to
> many relationships.
>
> The collection resources always contain canonical URIs for each
> element, but what else?
>
> Often authors of client apps would also like names, so the members of
> the collection can be presented in a list to the user and selected
> from. Then portraits of the members are requested, then arbitrary data
> from the full representation is requested in the collection so that N
> hundred HTTP requests don't have to be made for a particular feature.
>
> Is there any good research trading off the chattiness, cachability,
> latency and redundancy of denormalized data in RESTful collection
> resources? Are there good rules of thumb to apply here. It's tempting
> to say do the N hundred HTTP requests and come back when you can show
> it's a problem, but that doesn't go down well...
>
> Thanks,
>
> Jim
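Mike's caching point can be sketched as conditional-GET handling on the server: pay the composition cost for the composite resource once, then answer revalidations with 304 Not Modified while it hasn't changed. A rough framework-free sketch, with an invented `serve` helper:

```python
import hashlib

def serve(body, if_none_match=None):
    """Return (status, etag, payload) for a composed representation,
    honouring If-None-Match so unchanged composites revalidate cheaply."""
    etag = '"%s"' % hashlib.sha256(body.encode()).hexdigest()[:16]
    if if_none_match == etag:
        return 304, etag, b""          # nothing to resend
    return 200, etag, body.encode()    # full representation + etag

status, etag, _ = serve('{"collection": "..."}')
status2, _, _ = serve('{"collection": "..."}', if_none_match=etag)
print(status, status2)  # 200 304
```

Even a heavily edited composite benefits: only requests arriving after a change pay for the full response body again.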
Marc, thanks for answering.

> > I'm now in possession of a shiny new URI-template library (implementing
> > http://tools.ietf.org/html/draft-gregorio-uritemplate-07) that I plan to
>
> care to share which one?

It's a Java library developed by a colleague. It isn't publicly
available (yet).

> Below I assume you're thinking about in-browser (js) code that will be
> expanding the templates, right?

In my context, this isn't necessarily in-browser code, or js code (more
likely Java, C# or Perl code). But yes, I'm talking about client-side
expansion of URI templates communicated by the server.

> Anyway, in HTML terms I think the lists of contexts could be
> provided by any repeating element like ul/li, table/tr, p

Interesting. This resonates with the problem Eric wrote about:
dynamically passing a declarative description of constraints on allowed
values for expansion. In your example, you communicate a list of
possible contexts for expansion. A context is a set of values for
performing a given expansion. Do I understand correctly? If so, I'm not
sure why you don't directly pass to the client a list of URIs generated
server-side by expanding the "templates" using the various contexts?
I'm certainly missing some pieces here!

Philippe
Eric, Mike,
Thanks for the insightful remarks.
Dynamically communicating constraints on allowable values for expansions in a declarative way is surely an interesting problem.
For m2m interactions, one challenge is that the more stuff we try to communicate dynamically, the more difficult client programs become to implement. So we probably all have to find a good tradeoff depending on our particular contexts. For example, in my current project I considered using XForms for my hypermedia controls, but it would have been too complex for the teams in charge of implementing clients. I settled for HTML forms, which are less powerful but easier to use.
To get back to URI templates, now that I can provide tooling to some of the client developers, they could become a worthwhile addition.
Putting aside the problem of dynamically communicating allowable values for expansion (let's progress one step at a time), I like your idea of <a rel="circle" hreft="/circle/radius;{radius}" />. As the hreft attribute isn't standard XHTML we would have to get it from our own namespace. This would give: <a rel="circle" my:hreft="/circle/radius;{radius}" />
However, it looks like it breaks the HTML5 specification, which states: "The target, rel, media, hreflang, and type attributes must be omitted if the href attribute is not present."
We could mint a new rel attribute in our own namespace, which would give: <a my:rel="circle" my:hreft="/circle/radius;{radius}" />
At that point however, I wonder if using <a> still makes much sense. What do you think?
An alternative is to create a new element. If we name it "link", an XHTML representation containing this element would look like:
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:my="http://www.example.com/my">
<body>
<my:link rel="circle" hreft="/circle/radius;{radius}" />
</body>
</html>
What is your take on this? Can we do better?
Philippe
--- In rest-discuss@yahoogroups.com, Mike Kelly <mike@...> wrote:
>
> On Wed, Nov 9, 2011 at 7:40 PM, Eric J. Bowman <eric@...> wrote:
> >
> > Mike Kelly wrote:
> > >
> > > There's already an adequate solution to that problem - you can define
> > > the inputs up-front against the link relation in question.
> > >
> >
> > I don't see invalid markup as an adequate solution -- @href takes a
> > URI, not a template; section 1.4 of the URI-template draft states that
> > URI templates aren't URIs. Instead of having to parse @href before
> > determining how to parse @href (is it a URI or a template), it makes
> > sense to follow the logic in the URI-template draft and provide new
> > attributes to match a new parsing model (or elements, in the case of
> > XForms elements which take URIs as content).
> >
> > >
> > > e.g. the link relation 'circle' has an href containing a URI template
> > > which accepts the variable 'radius'
> > >
> > > which tells you what you need to know to build an automated client
> > > which follows this link:
> > >
> > > <a rel="circle" href="/circle/radius;{radius}" />
> > >
> >
> > How? Does @href take on a different syntax due to the definition of
> > rel='circle'? Where is *that* in the media type definition for XHTML?
> >
>
> You're focusing on a side issue. Let's change it to this for now:
>
> <a rel="circle" hreft="/circle/radius;{radius}" />
>
> Is that (and the description of the rel circle) not an adequate
> solution to the "real problem" you mentioned before?
>
> Cheers,
> Mike
>
hello philippe.
> Dynamically communicating constraints on allowable values for expansions in a declarative way is surely an interesting problem.
indeed it is. have you considered dependencies in your scenario? one use
case we have all of the time is the "GET initial form; GET list of
countries, wait for country to be filled out; GET list of states; wait
for state to be filled out; GET list of cities; pick city; PUT/POST
form" pattern, in all sorts and shapes. i am wondering whether there is
some consensus about how to deal with this as declaratively as possible,
or whether anybody is just happily scripting all of this.
cheers,
dret.
--
erik wilde | mailto:dret@... - tel:+1-510-2061079 |
| UC Berkeley - School of Information (ISchool) |
| http://dret.net/netdret http://twitter.com/dret |
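One way to make dret's country/state/city chain declarative: describe each field's option list as a URI template over previously filled-in fields, and let a generic loop drive the GETs. The `deps` structure and `next_option_list` helper below are assumptions for illustration, not an existing standard:

```python
# Hypothetical declarative description of dependent pick lists: each
# field names the template that yields its option list, parameterized
# by fields filled in earlier.
deps = [
    {"field": "country", "options": "/countries"},
    {"field": "state",   "options": "/countries/{country}/states"},
    {"field": "city",    "options": "/states/{state}/cities"},
]

def next_option_list(deps, filled):
    """URI of the option list for the first still-unfilled field,
    or None when the form is ready to PUT/POST."""
    for dep in deps:
        if dep["field"] not in filled:
            return dep["options"].format(**filled)
    return None

print(next_option_list(deps, {"country": "us"}))  # /countries/us/states
```

A generic client just loops: GET the next option list, wait for a pick, repeat until `None`, then submit the form.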
Erik:

in one recent case, a client decided to handle these UI list
dependencies by sending all the data in the form and letting local
scripts do the filtering w/o the need to call back to the server. in
this case the lists were quite static (product-related filters) and
relatively small (tens of items in each list, not hundreds). caching
makes composing and shipping this representation to the clients
relatively inexpensive (after the first delivery to a client).

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me

On Thu, Nov 10, 2011 at 12:35, Erik Wilde <dret@...> wrote:
> hello philippe.
>
>> Dynamically communicating constraints on allowable values for
>> expansions in a declarative way is surely an interesting problem.
>
> indeed it is. have you considered dependencies in your scenario? one use
> case we have all of the time is the "GET initial form; GET list of
> countries, wait for country to be filled out; GET list of states; wait
> for state to be filled out; GET list of cities; pick city; PUT/POST
> form" pattern, in all sorts and shapes. i am wondering whether there is
> some consensus about how to deal with this as declaratively as possible,
> or whether anybody is just happily scripting all of this.
>
> cheers,
>
> dret.
>
> --
> erik wilde | mailto:dret@... - tel:+1-510-2061079 |
> | UC Berkeley - School of Information (ISchool) |
> | http://dret.net/netdret http://twitter.com/dret |
hello
On 2011-11-10 6:09 , mike amundsen wrote:
> In cases where some type of "composite" view is needed by clients, I
> prefer to do this work on the server and present a single "resource"
> to clients (long lists would support paging, filtering, etc.). By
> doing the "mashup" on the server, there are more opportunities to
> optimize the experience in the future (the server can change storage
> models, object models, re-arrange code, move operations to other
> servers, etc. all w/o adversely affecting the client).
you're talking about feeds here, or their logical equivalent, including
the options for embedding or referencing entries, right? you can feed
those lists, you know the work we've been doing on feeds as query result
serializations. or am i missing something that you are doing that
doesn't fit this pattern?
> Also, by setting up an expectation that clients will "get what they
> need" in a single call, you can lead server implementations down the
> path of publicizing a resource model that reflects the actual
> domain-specific needs of the client-server interaction instead of
> publicizing a resource model based on the server-side data storage or
> coding object models. This does a better job of separating concerns,
> too.
so, another question i've been pondering is the following: it's clear to
me that we must have collection resources that provide aggregate views
of potentially included resources. but how much control should we give
the client over what we return? that impacts cacheability, but for us it
is pretty much the only way we can make certain scenarios work. could we
add query parameters so that clients can control what to include in the
collection resource, or can we allow clients to configure "view"
resources which define these aspects and are then referred to in
requests? these view resources could control things like the following
aspects:
- collection paging, let's say 20 per page.
- inlining or linking entry resources
- included attributes of the entries (we have many, many attributes per
resource and most clients only need very few of them)
so what i am wondering about is whether our "feed queries" work could be
augmented with some "feed views" work. i think i would lean towards the
model where the feed view configuration would be a self-describing
resource itself, but generally speaking, i am wondering whether this is
the model you have in mind or have already implemented.
cheers,
dret.
--
erik wilde | mailto:dret@... - tel:+1-510-2061079 |
| UC Berkeley - School of Information (ISchool) |
| http://dret.net/netdret http://twitter.com/dret |
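dret's "feed view" knobs could surface first as plain query parameters before graduating to self-describing view resources. A small sketch of the query-parameter variant; the parameter names below are invented:

```python
from urllib.parse import urlencode

# Hypothetical view parameters: paging, entry inlining, attribute
# selection. A named "view" resource could capture the same dict and
# be referenced by URI instead.
view = {"page-size": 20, "inline": "entries", "fields": "id,name"}
uri = "/collection?" + urlencode(view)
print(uri)
```

The trade-off dret raises is visible here: every distinct `fields` combination is a distinct cache key, so unconstrained selection hurts cacheability, while a handful of named view resources keeps the key space small.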
hello mike.
> in one recent case, a client decided to handle these UI list
> dependencies by sending all the data in the form and letting local
> scripts do the filtering w/o the need to call back to the server. in
> this case the lists were quite static (product-related filters) and
> relatively small (tens of items in each list, not hundreds).
yes, for small pick lists, including the allowable values is an option
and is a good option to avoid additional requests.
> caching
> makes composing and shipping this representation to the clients
> relatively inexpensive (after the first delivery to a client).
the problem is that dependent lists can get huge; imagine the scenario i
used which, when fully expanded, lists all cities in all countries
world-wide. clearly, that's not something you want to ship to the
client, so there needs to be some iterative process. you could probably
say that this is very similar to faceted navigation, only that the
process in this case is driven by form fields, and not by search-based
interactions. we have tons of scenarios where we must have value-assist
as a service, because these things also can be computed by server-side
logic, so they don't even exist without input parameters provided by the
client.
cheers,
dret.
--
erik wilde | mailto:dret@... - tel:+1-510-2061079 |
| UC Berkeley - School of Information (ISchool) |
| http://dret.net/netdret http://twitter.com/dret |
Is there a consensus?

An absolute URL is, I guess, easier to consume by a client, whereas a
relative link is also going to need some sort of (equivalent to HTML)
BASE tag so that the client can build the URL to hit.

On the other hand I quite like the idea of a relative URL... for
example one could take a set of representations and - with only a tiny
bit of mangling of the BASE tag within them - replay them, eg for
performance/load testing - in some other environment.

Thoughts?
Dan
Dan Haywood wrote:
>
> Is there a consensus?
>

Yes, that URI allocation scheme design is off-topic to REST. ;-)
Pragmatically speaking, decent URIs make it easier to develop and
maintain RESTful systems to the point where your question is not
off-topic to rest-discuss, though...

>
> An absolute URL is, I guess, easier to consume by a client, whereas a
> relative link is also going to need some sort of (equivalent to HTML)
> BASE tag so that the client can build the URL to hit.
>

You don't *need* a <base> tag, in its absence URLs are relative to the
current representation's URI.

>
> On the other hand I quite like the idea of a relative URL... for
> example one could take a set of representations and - with only a
> tiny bit of mangling of the BASE tag within them - replay them, eg
> for performance/load testing - in some other environment.
>

True, that. Taking that thought a bit further, the real power is that
you can write relative algorithms to generate navigational links, which
work regardless of how deep in a hierarchy a page is, without caring
about the path; instead of parsing or calculating redundant path info,
as is required to generate absolute URIs. This can significantly reduce
latency at the origin server on cache misses, etc.

This is what led me to my stub-file approach for browser-resident XSLT
REST applications -- if the URI allocation scheme is algorithmic and
relative URIs are used, a significant number of stub files site-wide
become identical and can share Etags. A little logic on the server can
bypass generating stub files for whole swathes of resources, in favor
of serving a cached representation. On the client, anything which
hastens initiation of the cached, compiled transformations is a big
user-perceived performance win.

Sometimes, there are fringe REST benefits to rational URI allocation
scheme design and relative URLs.

-Eric
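Eric's point that relative references resolve against the representation's own URI (no `<base>` needed) is easy to check with standard-library RFC 3986 resolution; the URIs below are illustrative:

```python
from urllib.parse import urljoin

# The representation's own URI acts as the base in the absence of <base>.
base = "http://example.com/app/items/42"

print(urljoin(base, "portrait"))   # sibling of the last path segment
print(urljoin(base, "../search"))  # one level up the hierarchy
```

This is also why Dan's replay idea works: swap the base and the same relative links resolve into another environment unchanged.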
On 10-11-11 15:40, Philippe Mougin wrote:
> Marc, thanks for answering.
>
> > > I'm now in possession of a shiny new URI-template library
> > > (implementing
> > > http://tools.ietf.org/html/draft-gregorio-uritemplate-07) that I
> > > plan to
> >
> > care to share which one?
>
> It's a Java library developed by a colleague. It isn't publicly
> available (yet).
>
> > Below I assume you're thinking about in-browser (js) code that will
> > be expanding the templates, right?
>
> In my context, this isn't necessarily in-browser code, or js code (more
> likely Java, C# or Perl code). But yes, I'm talking about client-side
> expansion of URI templates communicated by the server.
>

ah ok, since you mentioned XHTML I just assumed browser-client and
expanded further to assume javascript, sorry for that

> > Anyway, in HTML terms I think the lists of contexts could be
> > provided by any repeating element like ul/li, table/tr, p
>
> Interesting. This resonates with the problem Eric wrote about:
> dynamically passing a declarative description of constraints on allowed
> values for expansion. In your example, you communicate a list of
> possible contexts for expansion. A context is a set of values for
> performing a given expansion. Do I understand correctly? If so, I'm not
> sure why you don't directly pass to the client a list of URIs generated
> server-side by expanding the "templates" using the various contexts? I'm

Well, that surely *is* an option as I mentioned myself, but if you go
this path, then you don't need to provide the client with uri-templates
nor contexts, and you don't need anything else than the xhtml we already
have... So I just assumed you meant this, if not it seemed to make your
question void?

> certainly missing some pieces here!
Not sure, maybe I'm missing the pieces, but here is my line of thinking:
if you think about sending uri-templates to the client, then I assume
you want it doing the expansions, and thus, that client will need to
have a way to access, derive, or produce the contexts for that...

So really, in my mind relating this issue to XHTML (as per your
subject) opens up two interesting issues:
[1] a way to send the uri-templates (link/ref, or the suggested hreft,
srct variants)
[2] a way to send the contexts

regards,
-marc=

> Philippe
>
>
> > Interesting. This resonates with the problem Eric wrote about:
> > dynamically passing a declarative description of constraints on allowed
> > values for expansion. In your example, you communicate a list of
> > possible contexts for expansion. A context is a set of values for
> > performing a given expansion. Do I understand correctly? If so, I'm not
> > sure why you don't directly pass to the client a list of URIs generated
> > server-side by expanding the "templates" using the various contexts? I'm
>
> Well, that surely *is* an option as I mentioned myself, but if you go
> this path, then you don't need to provide the client with uri-templates
> nor contexts, and you don't need anything else than the xhtml we already
> have...
> So I just assumed you meant this, if not it seemed to make your question
> void?
Yes, I need client-side expansion, as I want to use URI templates to dynamically communicate how to build certain URIs. However, my clients are purpose-built programs whose developers will be given some information out of band. For example, I might communicate http://example.com/stockquote/{symbol} dynamically, associated with a given rel value for finding it in a representation, but specify in my service documentation (i.e., out of band) that it must be expanded using a variable named "symbol" whose value must be the stock symbol of the company you want a stock quote for.
So my main question is what you labeled as "question [1]": what is the best way to embed such a URI template in XHTML.
One viable option is creating a specific "link" element with a rel and hreft attribute, which would give:
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:my="http://example.com/my">
<body>
<my:link rel="http://example.com/rels/stockquote" hreft="http://example.com/stockquote/{symbol}" />
</body>
</html>
Another idea I have is to use the existing XRD link element, which has support for URI templates (XRD is specified at http://docs.oasis-open.org/xri/xrd/v1.0/xrd-1.0.html).
This would give:
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:xrd="http://docs.oasis-open.org/ns/xri/xrd-1.0">
<body>
<xrd:link rel="http://example.com/rels/stockquote" template="http://example.com/stockquote/{symbol}" />
</body>
</html>
I'd be grateful for any feedback on these as well as other ideas on how to best use URI templates as hypermedia controls in XHTML.
Philippe
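The client side of Philippe's XRD variant can be sketched with the standard library: locate the namespaced link by its rel, then expand the template. Only plain `{var}` expansion is shown, and knowing that the variable is named "symbol" is exactly the out-of-band knowledge he describes:

```python
import re
import xml.etree.ElementTree as ET

XHTML = """<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:xrd="http://docs.oasis-open.org/ns/xri/xrd-1.0"><body>
<xrd:link rel="http://example.com/rels/stockquote"
          template="http://example.com/stockquote/{symbol}" />
</body></html>"""

XRD = "{http://docs.oasis-open.org/ns/xri/xrd-1.0}"

def find_template(doc, rel):
    """Return the template of the first xrd:link with the given rel."""
    root = ET.fromstring(doc)
    for el in root.iter(XRD + "link"):
        if el.get("rel") == rel:
            return el.get("template")
    return None

template = find_template(XHTML, "http://example.com/rels/stockquote")
# Out-of-band knowledge: the variable is named "symbol".
uri = re.sub(r"\{symbol\}", "IBM", template)
print(uri)  # http://example.com/stockquote/IBM
```

The custom `my:link` variant would look identical on the client side, with only the namespace and attribute name (`hreft` vs `template`) changed.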
PM:
<puts-on-his-client-developer-hat>
the choice between adding a custom LINK element to XHTML or one based
on XRD is not too interesting; i suspect they would both need
essentially the same development effort in order to recognize, parse,
and process them in representations.
</puts-on-his-client-developer-hat>
<puts-on-media-type-designer-hat>
when implementing these "templates" (whether via HTML FORM, URI
Template, or some other method), i include the possible "fields" that
would appear in a template as documentation. this is totally
out-of-band data, of course. and that data must be "incorporated" by
server and client devs in some way. it's basically a dictionary of
possible data points that both client and server need to keep in mind
as possible values passed in transitions (templates) or steady-states
(response representations). what/how implementors handle this is up to
each case.
</puts-on-media-type-designer-hat>
hopefully that helps.
MCA
mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
On Tue, Nov 15, 2011 at 09:31, Philippe Mougin <pmougin@...> wrote:
>
>
>
>
>
>> > Interesting. This resonates with the problem Eric wrote about:
>> > dynamically passing a declarative description of constraints on allowed
>> > values for expansion. In your example, you communicate a list of
>> > possible contexts for expansion. A context is a set of values for
>> > performing a given expansion. Do I understand correctly? If so, I'm not
>> > sure why you don't directly pass to the client a list of URIs generated
>> > server-side by expanding the "templates" using the various contexts? I'm
>>
>> Well, that surely *is* an option as I mentioned myself, but if you go
>> this path, then you don't need to provide the client with uri-templates
>> nor contexts, and you don't need anything else than the xhtml we already
>> have...
>> So I just assumed you meant this, if not it seemed to make your question
>> void?
>
> Yes, I need client-side expansion, as I want to use URI templates to dynamically communicate how to build certain URIs. However, my clients are purpose-built programs whose developers will be given some information out of band. For example, I might communicate http://example.com/stockquote/{symbol} dynamically, associated with a given rel value for finding it in a representation, but specify in my service documentation (i.e., out of band) that it must be expanded using a variable named "symbol" whose value must be the stock symbol of the company you want a stock quote for.
>
> So my main question is what you labeled as "question [1]": what is the best way to embed such a URI template in XHTML.
>
> One viable option is creating a specific "link" element with a rel and hreft attribute, which would give:
>
> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:my="http://example.com/my">
> <body>
> <my:link rel="http://example.com/rels/stockquote" hreft="http://example.com/stockquote/{symbol}" />
> </body>
> </html>
>
> Another idea I have is to use the existing xrd's link element, which has support for URI templates (xrd is specified at http://docs.oasis-open.org/xri/xrd/v1.0/xrd-1.0.html).
>
> This would give:
>
> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:xrd="http://docs.oasis-open.org/ns/xri/xrd-1.0">
> <body>
> <xrd:link rel="http://example.com/rels/stockquote" template="http://example.com/stockquote/{symbol}" />
> </body>
> </html>
>
> I'd be grateful for any feedback on these as well as other ideas on how to best use URI templates as hypermedia controls in XHTML.
>
> Philippe
>
>
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
>
"Philippe Mougin" wrote:
> Yes, I need client side expansion, as I want to use URI templates to
> dynamically communicate how to build certain URIs. However, my
> clients are purpose-built programs whose developers will be given
> some information out of band.

Surely this information can be written down and linked to in-band? Hard-coded clients aren't such a bad thing if, at least, the shared understandings are linked to (as opposed to baked-in / out-of-band).

> So my main question is what you labeled as "question [1]": what is
> the best way to embed such a URI template in XHTML.

OK, I think I see what you're after. This is rough, but ought to make everybody happy (except that it's bad HTML5), and is what is meant by a self-documenting API...

http://charger.bisonsystems.net/stooges.txt

...because the code is simpler than prosaic documentation, even though the URIs are hypothetical and the XSLT, nonexistent. Obviously, either XSLT or Javascript can be used to generate a UI from that information, with the option of output in XHTML (whole pages) or JSON (snippets).

Using URI templates as an outright replacement for links and forms in XHTML decouples both the choice of forms language and the method of URI generation. The resulting XHTML (relying on pre-HTML5 <object> syntax in more ways than one) is svelte, and reduces the need to look deeper into the code to figure out how to build URLs in a way that's uniform across approaches / coding-language choices.

RDFa is used to declare that the in-scope (domain-specific) vocabulary represents nicknames of people; the terms and meanings of the out-of-scope (shared understanding) vocabulary are left to the externally-linked document. The link is what counts, but @valuetype gives us the semantics needed to do name-value expansion for a template passed in @data, a nice repurposing of obsolete code IMNSHO.
No, you don't have to use the XSLT (or JS), or use the service in a browser, but that option not only makes it easier for others to learn your service (particularly if your markup is valid and accessible), it also provides the opportunity to delimit which options from the shared vocabulary aren't supported locally (graying out a list item); the benefit of XSLT/XForms over JS here is one of declarative code vs. code-on-demand (visibility).

I basically just had my aha moment for URI templates in XHTML, as it moves UI design choices (like forms language) to another layer. Interesting!

-Eric
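Philippe's stock-quote template can also be sketched from the client's side. A minimal, hypothetical level-1 expansion (plain {name} substitution with percent-encoding, in the spirit of the URI Templates draft, not any particular library's API):

```python
from urllib.parse import quote

def expand(template, variables):
    # Replace each {name} in the template with the percent-encoded
    # value bound to that name. (Naive sketch: no list/reserved
    # expansion, just simple string expressions.)
    result = template
    for name, value in variables.items():
        result = result.replace("{" + name + "}", quote(str(value), safe=""))
    return result

# A client that found the link with rel="http://example.com/rels/stockquote"
# would combine the in-band template with the out-of-band "symbol" variable:
quote_uri = expand("http://example.com/stockquote/{symbol}", {"symbol": "IBM"})
```

The thread's point survives the sketch: the template itself travels in-band, while the knowledge that it takes a variable named "symbol" remains out-of-band (or, as Eric suggests, linked to).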
On 27.10.2011 21:01, Jason Erickson wrote:
> 3) Use the Link Cache Invalidation mechanism to indicate to the private
> cache that /articles and /articles/1/comments should be invalidated.
>
> Since the private cache is implemented by a browser or a library
> (ostensibly) known to the client, the client can know whether its
> private cache understands the Link Cache Invalidation mechanism. If, in
> the future, the Link Cache Invalidation mechanism is accepted and
> adopted, you could start to supplement option 2 (client responsibility)
> with option 3 (cache manager responsibility), although you couldn't
> replace 2 with 3 entirely. The client could tell whether it needed to
> take responsibility for it or not. Even if the client had to take
> responsibility, the documentation could refer to the Link Cache
> Invalidation documentation and the client could use that to implement
> option 2.

In a browser environment you can access the response headers of XMLHttpRequest responses. Thus you can implement LCI there. Depending on your use case this might be a valid approach.

Philipp Meier
--
404 signature not found
Hi,

I have a list of tasks resource (PATH: /tasks). What rules of REST will break if I make /tasks return different results for different signed-in users (it could be /users/1/tasks)? What is the disadvantage of that? Thanks.

Best regards,
Zhi-Qiang Lei
zhiqiang.lei@...
Zhi-Qiang Lei wrote: > > I have a list of tasks resource (PATH: /tasks). What rules of REST > will break if I made /tasks return different result for different > sign in users (It could be /users/1/tasks)? What is the disadvantage > of that? Thanks. > None, and almost none. Set Cache-Control to private, personalized content doesn't intermediary-cache well; if you have default content for guest users, those representations may be set to public and will intermediary-cache well. -Eric
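Eric's advice boils down to a per-request header decision. A framework-agnostic sketch (the helper and its signature are hypothetical):

```python
def task_list_headers(user):
    # Choose Cache-Control for a personalized /tasks resource, per the
    # advice above. `user` is None for an unauthenticated guest.
    if user is None:
        # Default content for guests: intermediaries may share it.
        return {"Cache-Control": "public, max-age=60"}
    # Personalized content: only the user's own (private) cache may store it.
    return {"Cache-Control": "private, max-age=60"}
```

Same URI, two cacheability policies; the intermediary-cache behavior is controlled by the representation's headers, not by minting per-user URIs.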
Is it a good rule of thumb to say that any resource that needs to be uniquely identified by clients should be given a unique URI? I'm thinking of a use case in this example where user1 may be given access to the tasks of user2. In that case, the server can provide a link to user1 that may look like this. <link rel="user2tasks" href="/users/2/tasks"/> Is it still possible to design this scenario without having a unique URI to identify a users task collection resource? Thanks, Vishi On Fri, Nov 25, 2011 at 1:06 PM, Eric J. Bowman <eric@...>wrote: > ** > > > Zhi-Qiang Lei wrote: > > > > I have a list of tasks resource (PATH: /tasks). What rules of REST > > will break if I made /tasks return different result for different > > sign in users (It could be /users/1/tasks)? What is the disadvantage > > of that? Thanks. > > > > None, and almost none. Set Cache-Control to private, personalized > content doesn't intermediary-cache well; if you have default content > for guest users, those representations may be set to public and will > intermediary-cache well. > > -Eric > > >
On 25.11.2011 15:21, Viswanath Durbha wrote:
> Is it a good rule of thumb to say that any resource that needs to be
> uniquely identified by clients should be given a unique URI?
>
> I'm thinking of a use case in this example where user1 may be given
> access to the tasks of user2. In that case, the server can provide a
> link to user1 that may look like this.
>
> <link rel="user2tasks" href="/users/2/tasks"/>

That is the exact reason why I tend to use different resources for user specific content.

> Is it still possible to design this scenario without having a unique URI
> to identify a users task collection resource?

I use content-negotiation and a redirect, i.e.

http://example.com/tasks

for user1 redirects to

http://example.com/tasks?user=user1

and any user can access this resource, given it is authorized to do so.

-billy.
Viswanath Durbha wrote: > > Is it a good rule of thumb to say that any resource that needs to be > uniquely identified by clients should be given a unique URI? > I think it's more important to understand the identification of resources constraint. There's nothing wrong with defining a resource which is personalized based on authentication credentials. Whatever URI you map to this resource must always map to this resource, IOW its meaning must not change over time. Forget HTTP for a minute... Imagine a resource defined as "your local weather forecast" based on protocol-layer geolocation. The resource itself is dynamic, but the mapping to its URI isn't -- the URI always means the same thing, even though the content is constantly updated, and different users get different results (imagine loading that resource in a mobile device while driving cross-country). Those results may vary even further by login credentials, imagine the local weather service with different payment plans where premium users get more of something. I could mint like a gazillion URIs to account for all that, or I could inform intermediaries of what's going on. In HTTP with experimental geolocation support, the header which makes it run off of one URI would be "Vary: Authorization, Geolocation". Unauthenticated, or basic service level customers, receive publicly- cached representations; premium users get a cacheable redirect to their account (OK, more than one URI, but not infinitely more), which is privately cacheable and issues an authentication challenge. If a premium user isn't logged in, they'll get the basic service. If they are logged in, they'll still load the same publicly-cached data as any user in the same area. Any way you slice it, every user is retrieving a representation of the same resource. 
Regardless of the dissimilarities in content, they're still equivalents, since every user gets exactly what they expect and what they paid for when they dereference the URI -- the local weather forecast for their current location. > > I'm thinking of a use case in this example where user1 may be given > access to the tasks of user2. In that case, the server can provide a > link to user1 that may look like this. > > <link rel="user2tasks" href="/users/2/tasks"/> > > Is it still possible to design this scenario without having a unique > URI to identify a users task collection resource? > You've fallen right into the trap of thinking about REST in terms of URI allocation-scheme design. By first deciding to personalize the URLs instead of personalizing the representations, you now have a problem you wouldn't have otherwise had. Design in terms of resources, *then* assign URIs to those resources. Most importantly, think of auth as a server configuration problem, not a hypertext problem... You're saying you have a resource, and you want to restrict access to a group containing two users. My response is, why are you trying to do this with URIs and hypertext? Use the protocol layer, and you will have the flexibility to redefine users/groups/privileges on a whim, without changing anyone's login, or having to cache-expire the hypertext to push new links and URIs out to clients (or set up redirects for the now-obsolete API). -Eric
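Eric's one-URI weather service can be condensed into a single handler whose caching metadata, rather than its URI space, carries the personalization. A sketch only: the handler shape, the /premium/forecast URI, and the Geolocation header (explicitly experimental above) are all illustrative.

```python
def forecast_response(user):
    # One URI for everyone; tell intermediaries which request headers
    # the chosen representation varies by.
    headers = {"Vary": "Authorization, Geolocation"}
    if user is not None and user.get("premium"):
        # Premium, logged-in users get a cacheable redirect to their
        # account resource (which is privately cacheable).
        headers["Cache-Control"] = "private"
        headers["Location"] = "/premium/forecast"  # hypothetical URI
        return 301, headers, None
    # Guests and basic users share the publicly cached local forecast.
    headers["Cache-Control"] = "public, max-age=300"
    return 200, headers, "local forecast for the requester's area"
```

Every caller dereferences the same URI; the Vary header, not a gazillion URIs, informs intermediaries of what's going on.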
Philipp Meier wrote: > > Am 25.11.2011 15:21, schrieb Viswanath Durbha: > > > Is it a good rule of thumb to say that any resource that needs to be > > uniquely identified by clients should be given a unique URI? > > > > I'm thinking of a use case in this example where user1 may be given > > access to the tasks of user2. In that case, the server can provide a > > link to user1 that may look like this. > > > > <link rel="user2tasks" href="/users/2/tasks"/> > > That is the exact reason why I tend to use different resources for > user specific content. > Which is fine, but you still have a resource /tasks which gives each user a unique representation from a set of redirects. There's no right way / wrong way, besides, 200 and 301 are both valid variant responses to requests for the same URI. Just two tools in a box, each has its use, but in REST it's a bikeshed color and some other protocol may be used that's completely different from HTTP. This consideration shouldn't come into play until after the resources are designed, though, IMO -- don't let an implementation preference affect the design of the system. -Eric > > > Is it still possible to design this scenario without having a > > unique URI to identify a users task collection resource? > > I use content-negotiation and a redirect, i.e. > > http://example.com/tasks > > for user1 redirects to > > http://example.com/tasks?user=user1 > > and any user can access this resource, given it is authorized to do > so. > > -billy. > >
Several times i came to the question, whether to specify a relation with some kind of index to distinguish links of the same relationship type.

examples:

GET /products/canon
<category name="canon" ...>
<link href="..." rel="http://example.com/item#1" .../>
<link href="..." rel="http://example.com/item#2" .../>
<link href="..." rel="http://example.com/item#3" .../>
</category>

With this, one could give the same relation-types different weights, an order/index or different nuances.

Is it worth thinking about? Would it break something? What are the arguments 'bout this topic around here? What are alternatives?
using HTML as an example... when servers i write want to pass the identity of an item in a response representation, i use @id (unique) or @name (non-unique) attributes the client apps i write use the @rel & @class attributes (both non-unique multi-valued fields) as a semantic identifier (i.e. what this _means_). i follow the same general rule for any custom designs (XML, JSON, etc.) that i create. mixing identity and meaning into the same attribute is, IMO, not a good idea. esp. in the long-term. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Tue, Nov 29, 2011 at 17:54, Jakob Strauch <jakob.strauch@...> wrote: > Several times i came to the question, wheter to specify a relation with > some kind of index to distinct links of the same relationship type. > > examples: > > GET /products/canon > <category name="canon" ...> > <link href="..." rel="http://example.com/item#1" .../> > <link href="..." rel="http://example.com/item#2" .../> > <link href="..." rel="http://example.com/item#3" .../> > </category> > > > With this, one could give same relation-types different weights, an > order/index or different nuances. > > Is it worth thinking about? Would it break something? What are the > arguments ´bout this topic around here? What are alternatives? > > > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
WS-REST 2012 - Call for Papers
Abstract Submission Deadline: *3. February 2012*
http://ws-rest.org/2012/cfp
The Third International Workshop on RESTful Design (WS-REST 2012) aims
to provide a forum for discussion and dissemination of research on the
emerging resource-oriented style of Web service design.
Background
Over the past years, several discussions between advocates of the two
major architectural styles for designing and implementing Web services
(the RPC/ESB-oriented approach and the resource-oriented approach) have
been mainly held outside of the traditional research and academic
community. Mailing lists, forums and developer communities have seen
long and fascinating debates around the assumptions, strengths, and
weaknesses of these two approaches. The RESTful approach to Web services
has also received a significant amount of attention from industry as
indicated by the numerous technical books being published on the topic.
This third edition of WS-REST, co-located with the WWW2012 conference
<http://wwwconference.org/www2012/>, aims at providing an academic forum
for discussing current emerging research topics centered around the
application of REST, as well as advanced application scenarios for
building large scale distributed systems.
In addition to presentations on novel applications of RESTful Web
services technologies, the workshop program will also include
discussions on the limits of the applicability of the REST architectural
style, as well as recent advances in research that aim at tackling new
problems that may require to extend the basic REST architectural style.
The organizers are seeking novel and original, high quality paper
submissions on research contributions focusing on the following topics:
* Applications of the REST architectural style to novel domains
* Design Patterns and Anti-Patterns for RESTful services
* RESTful service composition
* Testing RESTful services (methods and frameworks)
* Inverted REST (REST for push events)
* Integration of Pub/Sub with REST
* Performance and QoS Evaluations of RESTful services
* REST compliant transaction models
* Mashups
* Frameworks and toolkits for RESTful service implementation
* Frameworks and toolkits for RESTful service consumption
* Modeling RESTful services
* Resource Design and Granularity
* Evolution of RESTful services
* Versioning and Extension of REST APIs
* HTTP extensions and replacements
* REST compliant protocols beyond HTTP
* Multi-Protocol REST (REST architectures across protocols)
All workshop papers are peer-reviewed and accepted papers will be
published as part of the ACM Digital Library. Two kinds of contributions
are sought: short position papers (not to exceed 4 pages in ACM style
format) describing particular challenges or experiences relevant to the
scope of the workshop, and full research papers (not to exceed 8 pages
in the ACM style format) describing novel solutions to relevant
problems. Technology demonstrations are particularly welcome, and we
encourage authors to focus on "lessons learned" rather than describing
an implementation.
Original papers, not undergoing review elsewhere, must be submitted
electronically in PDF format.
Easychair page: http://www.easychair.org/conferences/?conf=wsrest2012
Important Dates
* Abstract Submission: 3. February 2012
* Paper Submission: 10. February 2012
* Notification of Acceptance: 8. March 2012
* WS-REST 2012 Workshop: 16. April 2012
Program Committee Chairs
* Cesare Pautasso <http://www.pautasso.info/>, Faculty of Informatics,
USI Lugano, Switzerland
* Erik Wilde <http://dret.net/netdret/>, EMC, USA
* Rosa Alarcon <http://dcc.puc.cl/gente/usuarios/ralarcon>, Computer
Science Department, Pontificia Universidad Católica de Chile, Chile
Program Committee
* Jan Algermissen, Nord Software Consulting, Germany
* Subbu Allamaraju, Yahoo Inc., USA
* Mike Amundsen <http://www.amundsen.com/>, USA
* Bill Burke, Red Hat, USA
* Benjamin Carlyle <http://soundadvice.id.au/blog/>, Australia
* Stuart Charlton <http://stucharlton.com/blog>, Elastra, USA
* Duncan Cragg <http://duncan-cragg.org/blog/>, Thoughtworks, UK
* Cornelia Davis, EMC, USA
* Joe Gregorio <http://bitworking.org>, Google, USA
* Michael Hausenblas <http://sw-app.org/about.html>, DERI, Ireland
* Rohit Khare, 4K Associates, USA
* Yves Lafon, W3C, USA
* Frank Leymann
<http://www.iaas.uni-stuttgart.de/institut/mitarbeiter/leymann/indexE.php>,
University of Stuttgart, Germany
* Alexandros Marinos, Rulemotion, UK
* Ian Robinson <http://iansrobinson.com/>, Thoughtworks, UK
* Sam Ruby, IBM, USA
* Richard Taylor, UC Irvine, USA
* Stefan Tilkov <http://www.innoq.com/blog/st/>, innoQ, Germany
* Steve Vinoski <http://steve.vinoski.net/>, Verivue, USA
* Olaf Zimmermann <http://www.zurich.ibm.com/%7Eolz/>, IBM Zurich
Research Lab, Switzerland
Contact
WS-REST Web site: http://ws-rest.org/2012/
WS-REST Twitter: http://twitter.com/wsrest2012
WS-REST Email: ws-rest@...
<mailto:ws-rest@...>
Hi,

While we can use REST for CRUD operations, I wonder how it can be used for maintenance operations such as Backup/Restore and Upgrade. These are time-consuming operations and some of our team members are suggesting we use SOAP for these. I have two questions.

1. Can we use REST for these maintenance operations? If so, can you give me pointers to examples?
2. If SOAP is a better fit, is it OK to have a hybrid web services implementation supporting both REST and SOAP? Wouldn't this confuse/complicate client developers?

Thanks,

-rama
rama:

HTTP has good support for dealing with long-running operations. upon receiving a client's request to start a long-running operation, the server can return 202 (Accepted) along w/ a URI that points to a resource that represents the progress of the work. this response can also include hints on how long to wait before hitting this URI. this is a short, simple request/response interaction that need not leave any open connection between client and server.

*** REQUEST
POST /long-running-jobs/
....

*** RESPONSE
202 Accepted
...
<a href="..." rel="progress">check on progress</a>

each request to this returned URI could show progress information and, eventually, the details of success or failure. success may include a pointer to the _final_ URI of the completed work.

you can use this pattern to create logs and other audit information about the long-running process. you can expose a single resource that lists all the outstanding long-running processes; filters out the failures, etc. etc.

i work w/ a number of clients that use this pattern for handling requests to do post-processing work on uploaded data.

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me

On Wed, Nov 30, 2011 at 21:16, Ramamoorthy Subramanian <ramsub4@...> wrote:
> Hi,
>
> While we can use REST for CRUD operations, I wonder how it can be used for
> maintenance operations such as Backup/Restore and Upgrade. These are time
> consuming operations and some of us our team members are suggesting to use
> SOAP for these. Have two questions.
>
> 1. Can we use REST for these maintenance operations? If so, can you let me
> know pointers of examples?
> 2. If SOAP is better fit, is that OK to hybrid web services implementation
> supporting both REST and SOAP? Wouldn't this confuse/complicate client
> developers?
>
> Thanks,
>
> -rama
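The 202-plus-progress-resource pattern above, seen from the client side; `post` and `get` below stand in for whatever HTTP transport is in use (hypothetical stubs returning (status, headers, body), not a real library):

```python
import time

def start_job(post, uri="/long-running-jobs/"):
    # Kick off the long-running operation; the server answers 202 Accepted
    # with a URI for a resource representing the work's progress.
    status, headers, _ = post(uri)
    if status != 202:
        raise RuntimeError("expected 202 Accepted, got %d" % status)
    return headers["Location"]

def wait_for_completion(get, progress_uri, poll_seconds=0):
    # Poll the progress resource until it points at the completed work's
    # final URI; a real client would honor any server-supplied wait hints.
    while True:
        status, headers, body = get(progress_uri)
        if body.get("state") == "done":
            return body["result_uri"]
        time.sleep(poll_seconds)
```

No connection stays open between requests; the progress resource is just another resource, so it can be cached, logged, and linked like any other.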
Well, i think i meant something different: I want to express a relationship like "the fifth element", no matter which entity it links to.

Orders of a customer:

<link href="/orders/2011/january/123" id="123" rel="http://example/order#1"/>
<link href="/orders/2011/june/987" id="987" rel="http://example/order#2"/>

There are already some relationship types for a small subset of this kind of relations: "first" and "last".

I ran into this question while enhancing Darrel Miller's HAL serializer with the latest WCF Web API bits...

--- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote:
> using HTML as an example...
>
> when servers i write want to pass the identity of an item in a response
> representation, i use @id (unique) or @name (non-unique) attributes
>
> the client apps i write use the @rel & @class attributes (both non-unique
> multi-valued fields) as a semantic identifier (i.e. what this _means_).
>
> i follow the same general rule for any custom designs (XML, JSON, etc.)
> that i create.
>
> mixing identity and meaning into the same attribute is, IMO, not a good
> idea. esp. in the long-term.
>
> mca
> http://amundsen.com/blog/
> http://twitter.com@mamund
> http://mamund.com/foaf.rdf#me
>
> On Tue, Nov 29, 2011 at 17:54, Jakob Strauch <jakob.strauch@...> wrote:
>
> > Several times i came to the question, wheter to specify a relation with
> > some kind of index to distinct links of the same relationship type.
> >
> > examples:
> >
> > GET /products/canon
> > <category name="canon" ...>
> > <link href="..." rel="http://example.com/item#1" .../>
> > <link href="..." rel="http://example.com/item#2" .../>
> > <link href="..." rel="http://example.com/item#3" .../>
> > </category>
> >
> > With this, one could give same relation-types different weights, an
> > order/index or different nuances.
> >
> > Is it worth thinking about? Would it break something? What are the
> > arguments 'bout this topic around here? What are alternatives?
i would not use the @rel value as a sort key, either. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Thu, Dec 1, 2011 at 02:10, Jakob Strauch <jakob.strauch@...> wrote: > Well, i think i meant something different: I want to express a relation > ship like "the fifth element", no matter which entity it links to. > > Orders of a customer: > > <link href="/orders/2011/january/123" id="123" rel="http://example/order#1 > "/> > <link href="/orders/2011/june/987" id="987" rel="http://example/order#2"/> > > There are already some relationship types for a small subset of this kind > of relations: "first" and "last". > > I ran into this questions while enhancing Darell Miller´s HAL serializer > with the latest WCF Web API bits... > > > > --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote: > > > > using HTML as an example... > > > > when servers i write want to pass the identity of an item in a response > > representation, i use @id (unique) or @name (non-unique) attributes > > > > the client apps i write use the @rel & @class attributes (both non-unique > > multi-valued fields) as a semantic identifier (i.e. what this _means_). > > > > i follow the same general rule for any custom designs (XML, JSON, etc.) > > that i create. > > > > mixing identity and meaning into the same attribute is, IMO, not a good > > idea. esp. in the long-term. > > > > mca > > http://amundsen.com/blog/ > > http://twitter.com@mamund > > http://mamund.com/foaf.rdf#me > > > > > > > > > > On Tue, Nov 29, 2011 at 17:54, Jakob Strauch <jakob.strauch@...> wrote: > > > > > Several times i came to the question, wheter to specify a relation with > > > some kind of index to distinct links of the same relationship type. > > > > > > examples: > > > > > > GET /products/canon > > > <category name="canon" ...> > > > <link href="..." rel="http://example.com/item#1" .../> > > > <link href="..." rel="http://example.com/item#2" .../> > > > <link href="..." 
rel="http://example.com/item#3" .../> > > > </category> > > > > > > > > > With this, one could give same relation-types different weights, an > > > order/index or different nuances. > > > > > > Is it worth thinking about? Would it break something? What are the > > > arguments ´bout this topic around here? What are alternatives? > > > > > > > > > > > > > > > > > > ------------------------------------ > > > > > > Yahoo! Groups Links > > > > > > > > > > > > > > > > > > > ------------------------------------ > > Yahoo! Groups Links > > > >
the xml variant for HAL has @name for handling this already. It's meant to be used as a secondary key to @rel. Mike - the problem with using @id (unique in the whole document) is that it undermines any other intended control data on the link such as @rel - since it's likely your clients will overlook it completely. Cheers, Mike On Thu, Dec 1, 2011 at 9:51 AM, mike amundsen <mamund@...> wrote: > > > i would not use the @rel value as a sort key, either. > > > mca > http://amundsen.com/blog/ > http://twitter.com@mamund > http://mamund.com/foaf.rdf#me > > > > > On Thu, Dec 1, 2011 at 02:10, Jakob Strauch <jakob.strauch@...> wrote: > >> Well, i think i meant something different: I want to express a relation >> ship like "the fifth element", no matter which entity it links to. >> >> Orders of a customer: >> >> <link href="/orders/2011/january/123" id="123" rel=" >> http://example/order#1"/> >> <link href="/orders/2011/june/987" id="987" rel="http://example/order#2 >> "/> >> >> There are already some relationship types for a small subset of this kind >> of relations: "first" and "last". >> >> I ran into this questions while enhancing Darell Miller´s HAL serializer >> with the latest WCF Web API bits... >> >> >> >> --- In rest-discuss@yahoogroups.com, mike amundsen <mamund@...> wrote: >> > >> > using HTML as an example... >> > >> > when servers i write want to pass the identity of an item in a response >> > representation, i use @id (unique) or @name (non-unique) attributes >> > >> > the client apps i write use the @rel & @class attributes (both >> non-unique >> > multi-valued fields) as a semantic identifier (i.e. what this _means_). >> > >> > i follow the same general rule for any custom designs (XML, JSON, etc.) >> > that i create. >> > >> > mixing identity and meaning into the same attribute is, IMO, not a good >> > idea. esp. in the long-term. 
>> > >> > mca >> > http://amundsen.com/blog/ >> > http://twitter.com@mamund >> > http://mamund.com/foaf.rdf#me >> > >> > >> > >> > >> > On Tue, Nov 29, 2011 at 17:54, Jakob Strauch <jakob.strauch@...> wrote: >> > >> > > Several times i came to the question, wheter to specify a relation >> with >> > > some kind of index to distinct links of the same relationship type. >> > > >> > > examples: >> > > >> > > GET /products/canon >> > > <category name="canon" ...> >> > > <link href="..." rel="http://example.com/item#1" .../> >> > > <link href="..." rel="http://example.com/item#2" .../> >> > > <link href="..." rel="http://example.com/item#3" .../> >> > > </category> >> > > >> > > >> > > With this, one could give same relation-types different weights, an >> > > order/index or different nuances. >> > > >> > > Is it worth thinking about? Would it break something? What are the >> > > arguments ´bout this topic around here? What are alternatives? >> > > >> > > >> > > >> > > >> > > >> > > ------------------------------------ >> > > >> > > Yahoo! Groups Links >> > > >> > > >> > > >> > > >> > >> >> >> >> >> ------------------------------------ >> >> Yahoo! Groups Links >> >> >> >> > > >
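Mike's rule, identity in @name (or @id), meaning in @rel, applied to Jakob's orders example; a small sketch (the rel URI http://example.com/rels/order is hypothetical):

```python
import xml.etree.ElementTree as ET

DOC = """<category name="canon">
  <link href="/orders/2011/january/123" name="123" rel="http://example.com/rels/order"/>
  <link href="/orders/2011/june/987" name="987" rel="http://example.com/rels/order"/>
</category>"""

def links_by_rel(doc, rel):
    # Select links by meaning (@rel is multi-valued, space-separated);
    # identity stays in @name, ordering stays in document order.
    root = ET.fromstring(doc)
    return [(link.get("name"), link.get("href"))
            for link in root.iter("link")
            if rel in link.get("rel", "").split()]
```

The client never needs rel="...#1", "...#2": one shared rel value carries the meaning, @name carries the identity, and document order (or first/last rels) carries the sequence.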
Hope this isn't off-topic: Is anyone using something like WADL for service definition documentation? We're (hopefully) going to have a collection of many services in multiple domains and we'll be expected to have a standard 'template' for defining/documenting a service. I'm curious if anyone in an "enterprise" situation has run into that and has any advice. Thanks! Carlos
hello carlos.
On 2011-11-30 21:08 , Carlos Eberhardt wrote:
> Is anyone using something like WADL for service definition documentation? We're (hopefully) going to have a collection of many services in multiple domains and we'll be expected to have a standard 'template' for defining/documenting a service. I'm curious if anyone in an "enterprise" situation has run into that and has any advice.
definitely not off-topic! this is a problem that a lot of people are
struggling with (including us right now). at times, there is some
general pushback to "describe" things because that might encourage
developers to hardcode things, especially things such as URI patterns
that they shouldn't hardcode. so "descriptions" putting URI patterns on
top always look a little unfortunate; in the end, these things probably
have been designed for generating the server side, and not so much for
guiding client-side developers. but in the end, if you have services,
you want internal/external people to find them, in particular if you're
serious about SOA and have a lot of them [1].
there have been individual efforts to come up with a language, off the
top of my head in addition to WADL i can list our own approach ReLL [2],
RESTdesc [3], and SA-REST [4]. they all take a little different
approaches and all of them have been discussed controversially. in the
end, the main question is what you need the registry for, and what
things you cannot do by runtime discovery. we're currently struggling
with that, too, and i am pretty confident that we want to have some
developer resources made accessible somewhere, such as
tools/examples/contacts/schemas. in the end, my guess is it comes down
to documenting representations and link relations, and this can be done
by namespacing all these things and then using a mechanism for
describing namespaces [5]. i think there still is room in this space for
something to become established and easily usable so that it is useful
across SOA domains, but as long as things are (services to their
descriptions) linked, it's nicely RESTful even if everybody does what
fits their needs best.
cheers,
dret.
[1] http://news.ycombinator.com/item?id=3101876
[2] http://dret.net/netdret/publications#ala10a
[3] http://restdesc.org/
[4] http://www.w3.org/Submission/SA-REST/
[5] http://dret.net/netdret/publications#wil06h
--
erik wilde | mailto:dret@... - tel:+1-510-2061079 |
| UC Berkeley - School of Information (ISchool) |
| http://dret.net/netdret http://twitter.com/dret |
My approach to this problem (documenting Web services) is to document the Media Type that expresses the domain semantics for that service. This can be done for custom-designed media types[1] designed for a single application domain and for existing (domain-agnostic) media types that support adding semantic meaning using "decorators" (@id, @name, @class, @rel)[2].

The advantage of this approach (for the work I do) is that both servers and clients can use the same document; even work on development efforts independently, asynchronously. This works no matter the OS/Platform/Framework, etc. This approach also supports extending the implementation to support changes in the domain model w/ relatively little additional effort since adding new features is often done by adding new representations of data in the existing media type (i.e. no changes in the media type). And when changes in the media type are required, it can almost always be done w/o breaking any existing implementations (client or server).

The downside of this approach is that both client and server need to understand enough of the application protocol (HTTP) to be able to implement successful components based on media type documentation. This kind of work is not "automatically" supported in any framework I know of today. That means some of the early work on a project takes more effort than just pointing an editor at a known URI and waiting for that editor to generate connector code for a set of exposed URIs w/ arguments.

I cover the details of this approach in a recently released book[3].

[1] http://amundsen.com/media-types/maze/
[2] http://amundsen.com/hypermedia/profiles/
[3] http://shop.oreilly.com/product/0636920020530.do

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me

On Thu, Dec 1, 2011 at 11:26, Erik Wilde <dret@...> wrote:
> hello carlos.
> > On 2011-11-30 21:08 , Carlos Eberhardt wrote: > > Is anyone using something like WADL for service definition > documentation? We're (hopefully) going to have a collection of many > services in multiple domains and we'll be expected to have a standard > 'template' for defining/documenting a service. I'm curious if anyone in an > "enterprise" situation has run into that and has any advice. > > definitely not off-topic! this is a problem that a lot of people are > struggling with (including us right now). at times, there is some > general pushback to "describe" things because that might encourage > developers to hardcode things, especially things such as URI patterns > that they shouldn't hardcode. so "descriptions" putting URI patterns on > top always look a little unfortunate; in the end these things probably > have been designed for generating the server side, and not so much for > guiding client-side developers. but in the end, if you have services, > you want internal/external people to find them, in particular if you're > serious about SOA and have a lot of them [1]. > > there have been individual efforts to come up with a language; off the > top of my head, in addition to WADL i can list our own approach ReLL [2], > RESTdesc [3], and SA-REST [4]. they all take slightly different > approaches and all of them have been discussed controversially. in the > end, the main question is what you need the registry for, and what > things you cannot do by runtime discovery. we're currently struggling > with that, too, and i am pretty confident that we want to have some > developer resources made accessible somewhere, such as > tools/examples/contacts/schemas. in the end, my guess is it comes down > to documenting representations and link relations, and this can be done > by namespacing all these things and then using a mechanism for > describing namespaces [5].
i think there still is room in this space for > something to become established and easily usable so that it is useful > across SOA domains, but as long as things are (services to their > descriptions) linked, it's nicely RESTful even if everybody does what > fits their needs best. > > cheers, > > dret. > > [1] http://news.ycombinator.com/item?id=3101876 > [2] http://dret.net/netdret/publications#ala10a > [3] http://restdesc.org/ > [4] http://www.w3.org/Submission/SA-REST/ > [5] http://dret.net/netdret/publications#wil06h > > -- > erik wilde | mailto:dret@... - tel:+1-510-2061079 | > | UC Berkeley - School of Information (ISchool) | > | http://dret.net/netdret http://twitter.com/dret |
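Mike's "decorator" approach above (app-level semantics carried in @rel on a generic media type) implies a client that reads link relations out of a representation rather than relying on fixed URIs. A minimal sketch of such a client-side step, using Python's standard-library HTML parser (the sample document and rel values are hypothetical, not from any real service):

```python
from html.parser import HTMLParser

class RelCollector(HTMLParser):
    """Collects href targets keyed by their link relation (@rel)."""
    def __init__(self):
        super().__init__()
        self.links = {}

    def handle_starttag(self, tag, attrs):
        if tag in ("a", "link"):
            d = dict(attrs)
            rel, href = d.get("rel"), d.get("href")
            if rel and href:
                # one rel attribute may hold several space-separated tokens
                for token in rel.split():
                    self.links.setdefault(token, []).append(href)

# Hypothetical response representation; @rel carries the app-level meaning.
doc = """
<html><body>
  <link rel="stylesheet" href="/app.css"/>
  <a rel="progress" href="/jobs/42/status">check on progress</a>
</body></html>
"""

parser = RelCollector()
parser.feed(doc)
print(parser.links["progress"])   # -> ['/jobs/42/status']
```

A client coded this way depends only on the documented rel tokens, so the server remains free to change the href values at any time.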
--- In rest-discuss@yahoogroups.com, "Carlos Eberhardt" <carlos.eberhardt@...> wrote: > > Hope this isn't off-topic: > > Is anyone using something like WADL for service definition documentation? We're (hopefully) going to have a collection of many services in multiple domains and we'll be expected to have a standard 'template' for defining/documenting a service. I'm curious if anyone in an "enterprise" situation has run into that and has any advice. > Hi Carlos, If your intent is to document REST services for people who might develop clients of those services, WADL isn't an option, as it would lead to unrestful coupling. One reason is that WADL specifies the URIs (or URI templates) for your resources. While communicating such information between services and clients dynamically at run-time is fine, communicating it at design time will lead clients to hardcode it, which will make them break when it changes. This is probably the biggest unrestful documentation pattern I see around these days for Web APIs that are supposedly RESTful. When documenting my own REST services I've found a lot of interesting inspiration in the documentation for the SUN Cloud API at http://kenai.com/projects/suncloudapis/pages/Home (still, this doc isn't perfect, as it over-specifies a few things). There is also this quote from Roy Fielding that is useful when planning for documentation: "A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state, or in defining extended relation names and/or hypertext-enabled mark-up for existing standard media types. Any effort spent describing what methods to use on what URIs of interest should be entirely defined within the scope of the processing rules for a media type (and, in most cases, already defined by existing media types)". (cf http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven). Best, Philippe
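Philippe's point about not hardcoding URIs can be illustrated with a small sketch: the only URI the client knows is the entry point, and everything else is discovered from links at run time. All names and URIs below are made up for illustration:

```python
def entry_point():
    """Stubbed GET on the service root; in a real client this would be
    the only URI baked into configuration."""
    return {"links": [{"rel": "orders", "href": "/v2/orders"}]}

def find_href(representation, rel):
    """Locate a link by its relation name instead of a fixed path."""
    for link in representation["links"]:
        if link["rel"] == rel:
            return link["href"]
    raise LookupError(f"no link with rel={rel!r}")

# The orders URI is discovered at run time, so the server can move it
# (e.g. from /v1/orders to /v2/orders) without breaking this client.
orders_uri = find_href(entry_point(), "orders")
print(orders_uri)   # -> /v2/orders
```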
Thanks Mike. This idea makes perfect sense. Btw, is it a good idea doing asynchronous notification for such a long-running operation? This way, the client need not poll for the completion of the operation. Thanks. -rama ________________________________ From: mike amundsen <mamund@...> To: Ramamoorthy Subramanian <ramsub4@...> Cc: rest-discuss@yahoogroups.com Sent: Thu, December 1, 2011 8:33:56 AM Subject: Re: [rest-discuss] Defining REST for maintenance operations rama: HTTP has good support for dealing with long-running operations. upon receiving a client's request to start a long-running operation, the server can return 202 (Accepted) along w/ a URI that points to a resource that represents the progress of the work. this response can also include hints on how long to wait before hitting this URI. this is a short, simple request/response interaction that need not leave any open connection between client and server. *** REQUEST POST /long-running-jobs/ .... *** RESPONSE 202 Accepted ... <a href="..." rel="progress">check on progress</a> each request to this returned URI could show progress information and, eventually, the details of success or failure. success may include a pointer to the _final_ URI of the completed work. you can use this pattern to create logs and other audit information about the long-running process. you can expose a single resource that lists all the outstanding long-running processes; filters out the failures, etc. etc. i work w/ a number of clients that use this pattern for handling requests to do post-processing work on uploaded data. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Wed, Nov 30, 2011 at 21:16, Ramamoorthy Subramanian <ramsub4@...> wrote: Hi, > >While we can use REST for CRUD operations, I wonder how it can be used for >maintenance operations such as Backup/Restore and Upgrade. These are time->consuming operations and some of our team members are suggesting to use SOAP >for these. Have two questions. > >1. Can we use REST for these maintenance operations? If so, can you let me know >of pointers to examples? >2. If SOAP is a better fit, is it OK to have a hybrid web services implementation >supporting both REST and SOAP? Wouldn't this confuse/complicate client >developers? > >Thanks, > >-rama
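The 202-plus-progress-resource pattern Mike describes can be sketched server-side as a pair of handlers over an in-memory job table. This is only a model of the HTTP exchange (the paths, payload, and 303 redirect-to-result are illustrative assumptions, not part of any real framework):

```python
import uuid

# In-memory job table; a real service would persist this.
jobs = {}

def start_job(payload):
    """POST /long-running-jobs/ -> 202 Accepted plus a progress link."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"state": "pending", "progress": 0}
    return {
        "status": 202,
        "headers": {"Retry-After": "5"},          # hint: wait before polling
        "body": {"rel": "progress", "href": f"/long-running-jobs/{job_id}"},
    }

def check_progress(job_id):
    """GET the progress resource; on completion, link to the final work."""
    job = jobs[job_id]
    if job["state"] == "done":
        return {"status": 303, "headers": {"Location": f"/results/{job_id}"}}
    return {"status": 200, "body": {"progress": job["progress"]}}

# Simulate one full job lifecycle.
resp = start_job({"task": "backup"})
job_id = resp["body"]["href"].rsplit("/", 1)[-1]
jobs[job_id].update(state="done", progress=100)
print(check_progress(job_id)["status"])   # -> 303
```

Each poll is a plain request/response pair, so no connection stays open between client and server, matching Mike's point.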
Rama: since HTTP is a Client-Server protocol, it's most effective when you allow clients to initiate conversations. There are a number of ways to "optimize" your use case: - for HTML, rely on the meta-refresh pattern and adjust polling times dynamically. - for generic clients, use an explicit cache value to prevent clients from hitting the origin server - servers can emit the Retry-After header with a timespan Also, if it becomes an issue, servers can block responses from clients that don't honor the settings listed above in order to prevent "bad actors" from flooding the system There are callback libraries that work over HTTP using long-polling, etc. but i do not find they are reliable, scalable, or any more effective than the suggestions here. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Thu, Dec 1, 2011 at 13:00, Ramamoorthy Subramanian <ramsub4@...> wrote: > Thanks Mike. This idea makes perfect sense. > > Btw, is it a good idea doing asynchronous notification for such a > long-running operation? This way, the client need not poll for the > completion of operation. Thanks. > > -rama
Thanks Mike. You mean, technologies such as Comet won't be reliable? -rama ________________________________ From: mike amundsen <mamund@...> To: Ramamoorthy Subramanian <ramsub4@...> Cc: rest-discuss@yahoogroups.com Sent: Thu, December 1, 2011 11:36:20 PM Subject: Re: [rest-discuss] Defining REST for maintenance operations
i meant to say i don't find using Comet (and other similar approaches) any more reliable, scalable, or effective than HTTP's Client-Server centered options. This is a personal choice based on my own experience. Others on this list may have experience w/ Comet, etc. that they can share. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Thu, Dec 1, 2011 at 13:08, Ramamoorthy Subramanian <ramsub4@...> wrote: > Thanks Mike. You mean, technologies such as Comet won't be reliable? > > -rama
hello rama.
On 2011-12-01 10:00 , Ramamoorthy Subramanian wrote:
> Btw, is it a good idea doing asynchronous notification for such a
> long-running operation? This way, the client need not poll for the
> completion of operation. Thanks.
http://dret.net/netdret/publications#pau11b (to be presented next week
at ICSOC by cesare pautasso) might give you some ideas about how you
could do that in a RESTful way. basically, the idea is to allow clients
to subscribe to updates about change notifications.
kind regards,
dret.
--
erik wilde | mailto:dret@... - tel:+1-510-2061079 |
| UC Berkeley - School of Information (ISchool) |
| http://dret.net/netdret http://twitter.com/dret |
Hi all, I'm trying to understand what query param names I should choose for lists of resources. Was looking at: http://www.opensearch.org/Specifications/OpenSearch/1.1/Draft_5#OpenSearch_1.1_parameters so: searchTerms count startIndex startPage Need to think what I've missed when I write my tests. What do you do? This is for listing/searching telephone call records. Thanks.
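One way to apply the OpenSearch parameter names listed above is to build them with the standard library's urlencode, so tests can assert on the exact query string. The `/call-records` path below is a hypothetical example for the telephone-records use case, not a defined endpoint:

```python
from urllib.parse import urlencode

def call_records_query(search_terms=None, count=25, start_index=1):
    """Build an OpenSearch-style query string for a hypothetical
    call-records listing (OpenSearch 1.1 defines searchTerms, count,
    startIndex, and startPage)."""
    params = {"count": count, "startIndex": start_index}
    if search_terms:
        params["searchTerms"] = search_terms
    return "/call-records?" + urlencode(params)

print(call_records_query("missed", count=10, start_index=21))
# -> /call-records?count=10&startIndex=21&searchTerms=missed
```

Reusing the OpenSearch names rather than inventing new ones (e.g. `q`, `limit`, `offset`) means generic clients that already understand OpenSearch can page through the collection.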
one major aspect of using hypermedia is loose coupling and evolvability. but where are the boundaries? which server-side changes may/may not affect hypermedia-aware clients? for example: as a human user, i can easily adapt to a change in a web shop system, e.g. when the order procedure suddenly includes an additional step like providing an optional voucher code. a hypermedia client can easily be redirected. but i'm sure the agent can't do something useful there. at least not unless it gets taught how to deal with the new domain concept (e.g. a new media type) - which means updating to a new version. so which aspects can really be decoupled? as far as i can see, there are "only" technical details like internal structures and urls. Speaking of evolvability, i think most changes to a growing API are more domain-related than technical-related. If my assumptions are correct, is it maybe more important to develop hypermedia clients which can be updated by hot-deploy mechanisms?
Jakob: i've been doing some work in this area (evolvability for hypermedia-based systems) and, while my experiments are still not completed, i can pass along some observations that might give you some ideas. first, IMO, you are correct to state that almost all of the "evolvability" is due to changes in the problem-domain. IOW, not the protocol (HTTP) and not the message formats (media types). since REST focuses on sharing understanding through response representations that contain hypermedia to advance application flow, the focus of evolvability is (in my work) on the media type and the response representations. the important task of writing hypermedia applications is mapping the problem domain details to elements in the media type. IOW, to evolve the system to match changes in the problem domain, you modify the representations and the hypermedia within those representations. so, with that as a basis... there are two different cases to consider: Human-driven user-agents (or Human-to-Machine - H2M) and, Machine-driven user-agents (or Machine-to-Machine - M2M). H2M evolvability for hypermedia in this case the "human" driving the user agent (UA) has "knowledge in the head" that the user agent does not have. the UA can focus just on recognizing, parsing, and rendering the media type representations and allowing the human to interpret the results and make choices based on the human's knowledge of the problem domain and the hypermedia affordances (links and forms) presented. since the act of mapping intention (what i want to get done) to action (the links and forms available) is all handled by a human, servers are free to make quite a wide range of changes and the system will still function well. for example, in H2M cases, the server is free to add/remove input elements in forms, add/remove links, change the "order" in which links/forms are presented, even introduce entirely new forms and inputs.
All these things are not likely to "break" the system since the human can be reasonably expected to "know" the problem domain (or a similar domain) enough to make decisions along the way. M2M evolvability for hypermedia in this case there is no human *directly* driving the interactions between client and server. the UA is a 'bot' and has only the "knowledge in the code" to work with. This knowledge has to be "put" there by some human, of course. for this scenario, the server has a much more limited set of evolvability options. servers can remove inputs, remove links/forms, and/or change the order of their appearance and still expect the system to "work properly." IOW, the server cannot add any new inputs, links, or forms and expect the 'bot' to "know" or "understand" these new elements. FWIW, i think there are a number of ways to improve the M2M case, but i am not yet prepared to talk about that since i have not made much progress yet in this area. i hope this gives you some ideas on how to tackle this problem and would be interested in other POVs and observations on this topic. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Fri, Dec 2, 2011 at 06:48, Jakob Strauch <jakob.strauch@...> wrote: > one major aspect of using hypermedia is loose coupling and evolvability. > but where are the boundaries? which server-side changes may/may not affect > hypermedia-aware clients?
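Mike's M2M constraints can be sketched as a client that acts only on the link relations it was coded to understand and silently ignores everything else; such a client survives the server removing or reordering links, but gains nothing from brand-new ones. The representation format and rel names below are hypothetical illustrations:

```python
def act_on(representation, known_rels):
    """Walk the links in a response, acting only on rels the client
    understands; unknown rels are ignored, missing rels are skipped.
    `representation` is a hypothetical parsed response, not a real format.
    """
    actions = []
    for link in representation.get("links", []):
        handler = known_rels.get(link["rel"])
        if handler:                      # unknown rel: silently ignore
            actions.append(handler(link["href"]))
    return actions

# The "knowledge in the code": this bot understands exactly two rels.
client_rels = {
    "progress": lambda href: ("poll", href),
    "cancel":   lambda href: ("delete", href),
}

# Server has since removed "cancel" and added a brand-new "pause" rel.
response = {"links": [
    {"rel": "progress", "href": "/jobs/9/status"},
    {"rel": "pause",    "href": "/jobs/9/pause"},   # unknown to this client
]}

print(act_on(response, client_rels))   # -> [('poll', '/jobs/9/status')]
```

The bot keeps working after both changes, but it cannot exploit "pause" until a human updates its rel table, which is exactly the M2M limit described above.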
Mike,
I think I get the gist of what you are saying, but I still struggle to understand the various aspects of writing a good REST API, as well as consuming one. In your example of M2M, if I am a developer writing a web application with a UI front end that users visit/log in/etc, and I want to provide them with Facebook login to access the site.. am I now considered the M2M in this equation.. in that I will be writing code on my web app to interface with the Facebook API? What continually confuses me is the idea of trying to remain HATEOAS compliant as I write my own API, and consuming an API as a developer. What I mean is, as the API developer, I am trying to provide a HATEOAS compliant API.. one that returns responses with links that MUST be followed by the consumer, and only those links. But as of now, I still have to provide documentation that explains to a developer the possible links that can be returned from each resource. For example, my API provides access
control to a degree.. user and admin level resources. If the request is being made to an admin resource, the auth user must be one that is authorized to use that resource. If they are, the response has the resource link(s) that allow them to further do other things that a normal user can not. Without documenting what resources will result in a successful (or failed) attempt at an admin resource, the developer won't know what to scan for in the <links> elements I return and what they can do next. I have to document the returned links, the rel="" string value, and what each href resource pointer will allow them to do, so that the developer knows ahead of time and can make use of the resources as needed. To me, this is much like the Facebook API.. I can't just go to facebook.com/api and from there magically know how to use whatever resources come back. I have to, as a developer providing my end users with the ability to use Facebook to log in, know what
resources to call, what params to pass, etc. Am I wrong in this assumption? If so, please enlighten me such that I might understand why this would not be needed.
What confuses me about all this is the idea that we can write (and consume) evolvable APIs that we know nothing about. We simply need the entry URL, and from there we just know what to do based on what is returned. Unless I am missing something, there is no standard set of link/rel values that works the same way for every API. Just because one rel="login" might indicate a resource to log in to doesn't mean it won't do something else on another site. Likewise, any given API could return a variety of other rel="" values in the response links, or return entirely different element names, and without some sort of documentation explaining all of this, I would not be able to consume it. I realize a HATEOAS API should be just like a web site.. such that a web bot could traverse html <a> elements.. likewise we return <link> elements allowing a bot to traverse it. What throws me there is.. some links may be POST only, or PUT only, some may support GET, POST, etc.
A bot could be written in such a way to try every method type, see where it leads, and crawl its way through every link. As a developer using someone's API to provide my users a GUI to use my site, I can't just go crawling through an API blindly and give my end users some sort of useful functionality from the API. I have to know exactly what resource to call (or how to navigate to it) and what it does. If I want to get the weather, I need to know how I pass my user's location to the API, and what resource to call that supports me passing in the location and returns the weather for that location. Don't I?
Thanks.
--- On Fri, 12/2/11, mike amundsen <mamund@...> wrote:
From: mike amundsen <mamund@...>
Subject: Re: [rest-discuss] weighting and boundaries of evolvability and loose coupling?
To: "Jakob Strauch" <jakob.strauch@...>
Cc: rest-discuss@yahoogroups.com
Date: Friday, December 2, 2011, 7:48 AM
Â
Jakob:
i've been doing some work in this area (evolvability for hypermedia-based systems) and, while my experiments are still not completed, i can pass along some observations that might give you some ideas.
first, IMO, you are correct to state that most all the "evolvability" is due to changes in the problem-domain. IOW, not the protocol (HTTP) and not the message formats (media types).
since REST focuses on sharing understanding through response representations that contain hypermedia to advance application flow, Â the focus of evolvability is (in my work) on the media type and the response representations.
the important task of writing hypermedia applications is mapping the problem domain details to elements in the media type. IOW, to evolve the system to match changes in the problem domain, you modify the representations and the hypermedia within those representations.
so, with that as a basis...
there are two different cases to consider:Human-driven user-agents (or Human-to-Machine - H2M) and,Machine-driven user-agents (or Machine-to-Machine - M2M).
H2M evolvability for hypermediain this case the "human" driving the user agent (UA) has "knowledge in the head" that the user agent does not have. the UA can focus just on recognizing, parsing, and rendering the media type representations and allowing the human to interpret the results and make choices based on the human's knowledge of the problem domain and the hypermedia affordances (links and forms) presented.
since the act of mapping intention (what i want to get done) to action (the links and forms available) is all handled by a human, servers are free to make quite a wide range of changes and the system will still function well. Â
for example, in H2M cases, the server is free to add/remove inputs elements in forms, add/remove links, change the "order" in which links/forms are presented, even introduce entirely new forms and inputs. All these things are not likely to "break" the system since the human can be reasonably expected to "know" the problem domain (or a similar domain) enough to make decisions along the way.Â
M2M evolvability for hypermediain this case there is no human *directly* driving the interactions between client and server. the UA is a 'bot' and has only the "knowledge in the code" to work with. This knowledge has to be "put" there by some human, of course.
for this scenario, the server has a much more limited set of evolvability options. severs can remove inputs, remove links/forms, and/or change the order of their appearance and still expect the system to "work properly." IOW, the server cannot add any new inputs, links, or forms and expect the 'bot' to "know" or "understand" these new elements.
FWIW, i think there are a number of ways to improve the M2M case, but i am not yet prepared to talk about that since i have not made much progress yet in this area.
i hope this gives you some ideas on how to tackle this problem and would be interested in other POVs and observations on this topic.
mcahttp://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
On Fri, Dec 2, 2011 at 06:48, Jakob Strauch <jakob.strauch@...> wrote:
one major aspect of using hypermedia is loose coupling and evolvability. but where are the boundaries? which server-side changes may/may not affect hypermedia-aware clients?
for example: as a human user, i can easily adapt to a change in a web shop system, e.g. when the order procedure suddenly includes an additional step like providing an optional voucher code.
a hypermedia client can be easily redirected. but i'm sure the agent can't do something useful there. at least not unless it gets taught how to deal with the new domain concept (e.g. a new media type) - which means updating to a new version.
so which aspects can really be decoupled? as far as i can see, there are "only" technical details like internal structures and URLs. Speaking of evolvability, i think most changes to a growing API are more domain-related than technical.
If my assumptions are correct, is it maybe more important to develop hypermedia clients that can be updated by hot-deploy mechanisms?
------------------------------------
Yahoo! Groups Links
  http://groups.yahoo.com/group/rest-discuss/
  Individual Email | Traditional
  http://groups.yahoo.com/group/rest-discuss/join
  (Yahoo! ID required)
  rest-discuss-digest@yahoogroups.com
  rest-discuss-fullfeatured@yahoogroups.com
  rest-discuss-unsubscribe@yahoogroups.com
  http://docs.yahoo.com/info/terms/
Kevin:
(my regrets for not responding sooner)
this is long, my apologies ahead of time. Hopefully the content will be
worth the time<g>.
<snip>
I think I get the gist of what you are saying, but I still struggle
understanding the various aspects of writing a good rest api as well as
consuming one.
</snip>
i think that is a common POV. there is not much guidance on this process.
this is probably a good place to discuss it. I'd also encourage you to join
the Hypermedia-Web discussion list[1] where some other folks working in
this area also hang out.
<snip>
I want to provide them with Facebook login to access the site.. am I now
considered the M2M in this equation.. in that I will be writing code on my
web app to interface with facebook api?
</snip>
Well, it turns out facebook's API is not very "hypermedia-aware," is it?
Actually, almost all the OAuth examples I've seen are very difficult to
"automate" in an M2M environment; I suspect that's the goal. Often we can
"cover" an RPC implementation w/ a hypermedia-aware one (I do this quite a
bit), but sometimes that's not possible.
FWIW, I don't think the Facebook API is a good place to exercise your
hypermedia skills.
<snip>
What continually confuses me is the idea of trying to remain HATEOAS
compliant as I write my own API, and consuming an API as a developer.
</snip>
Designing a hypermedia API is, essentially, designing a media type (or
applying semantics to an existing media type). That's the API. It's a big
diff from most implementations. Some think it's not worth the trouble.
Once the media type is designed & documented, the work of implementing
servers and clients begins. Servers are pretty straightforward. Tooling is
weak in most cases, but for the most part servers just wait for a request,
do some work, and craft a response (which may or may not contain one or
more hypermedia controls (links & forms)).
Writing a client is more involved; not terribly difficult, but more work is
done by hypermedia clients (HC) than RPC clients. The HC must "know" the
media type (not the app) before it can function successfully. And yes, as
you say, this means writing clients that are prepared for just about any
reasonable response in that media type. You can limit the effort by
creating a restrictive, small-scope media type design. My Maze+XML design
has only ten elements (five are for errors and debugging), six attributes,
and nine link relations. Creating clients to navigate mazes is pretty
simple, too.
The HAL media type design is even more compact[2]. Now, implementing an HC
that can handle HTML is quite a feat. There is a wide spectrum between
Maze+XML and HTML, tho.
<snip>
Without documenting what resources will result in a successful (or failed)
attempt at an admin resource, the developer won't know what to scan for in
the <links> elements I return and what they can do next. I have to document
the returned links, the rel="" string value, and what each href resource
pointer will allow them to do, so that the developer knows ahead of time
and can make use of the resources as needed.
</snip>
Yes, you need to document the media type. There are a number of examples
out there to use as a guide. There is no need to document "all the possible
responses" for a media type (can you imagine what that would entail for
HTML?). Instead, you document the possible elements that can appear in a
response and the rules for those elements (MUST be child elements of X, MAY
have the following children, etc.).
<snip>
What confuses me about all this is the idea that we can write (and consume)
evolveable APIs that we know nothing about.
</snip>
Yeah, that confuses me, too. I don't talk like that, and I suggest anyone
telling you "you can write and consume an API that you know nothing
about" is full of it. If you hear me saying that, call BS on me ASAP!
<snip>
there is no standard set of link/rel values that work the same way for
every API.
</snip>
first, just as there is no standard semantic for every problem domain,
you're not likely to find a single set of standard rel values for every
API. However, there are several sources for standardized rels, including
the IANA[3], the Microformats group[4], and the Dublin Core[5]. Many media
types also define their own rel sets (including HTML).
It is also possible to define and standardize your own rels (in cases where
you think an important one is missing). I've done that at the Microformats
site and am in the process of doing the same via an IETF Internet Draft.
In the end, you'll find that rels provide the key mapping between the
problem domain and the media type. this means, unless your problem domain
is incredibly common, you'll be using some unique rels in order to express
unique problem domain semantics.
<snip>
What throws me there is.. some links may be POST only, or UPDATE only, some
may support GET,POST, etc.
</snip>
Technically, the *links* don't hold the rules, the markup *around* the
links does. HTML.FORM@method="get" tells you what you need to know. So does
atom.link@rel="edit". Now, when you design your own API (XML, JSON, etc.)
you'll be responsible for taking care to design these same protocol-level
details. If you are using HTTP, the possibilities are few and it's not at
all hard to design media type elements that clients can easily recognize
(<update href="..." /> OR {"delete" : {"href": "..."}}, etc.).
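The "markup around the link holds the rules" point can be sketched like this (an editorial illustration; the control shapes below are invented, not from any real media type):

```python
# Hedged sketch: the HTTP method comes from the markup *around* the
# link. A form carries its own method; a bare link is always a safe GET.

def http_method_for(control):
    """Pick the HTTP method from a parsed hypermedia control."""
    if control["kind"] == "form":
        # e.g. HTML.FORM@method="get" / "post"
        return control.get("method", "get").upper()
    if control["kind"] == "link":
        # dereferencing a plain link is always a GET
        return "GET"
    raise ValueError("unknown control kind: %r" % (control["kind"],))

assert http_method_for({"kind": "form", "method": "post"}) == "POST"
assert http_method_for({"kind": "form"}) == "GET"
assert http_method_for({"kind": "link", "rel": "edit"}) == "GET"
```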
Again, this way of designing APIs (the way that includes the hypermedia
possibilities in responses, not just the data) is not at all common right
now.
<snip>
As a developer using someone's API to provide my user's a GUI to use my
site, I can't just go crawling through an API blindly and give my end users
some sort of useful functionality from the API.
</snip>
Yep, as stated earlier, anyone telling you to "blindly crawl" is tossing
BS. That's not at all needed.
<snip>
I have to know exactly what resource to call (or how to navigate to it) and
what it does.
</snip>
Well, your version of "exactly" may vary, but yes, client apps will need to
know how to convert "intention" into "action." That's what APIs are for.
This is the same whether you use SOAP, URI-RPC, Hypermedia, etc. The key is
"how does the client know?" With most forms of API, the client knows
because a document sez so and the developer hard-codes this "knowing" into
the client. With hypermedia the document sez "this is how you will 'know'
where the weather can be found" and describes the bits that can appear in a
response, even the link relation to use to get those bits:
<!-- this is the representation for current weather -->
<p class="current-weather">
<span class="zipcode" />
<span class="location-name" />
<span class="current-temp" />
</p>
<!-- this affordance allows clients to get weather reports based on zipcode
-->
<form class="weather" action="..." method="get">
<input type="text" name="zipcode" value="" />
</form>
<!-- this affordance allows clients to find the form that allows clients to
get weather reports -->
<a href="..." rel="weather">weather</a>
<!-- this affordance allows clients to find weather affordances[grin] -->
<form class="api-list" action="..." method="get">
<input type="text" name="rel-or-class" value="" />
</form>
<!-- this is the only URI needed to use the API -->
http://example.org/weather
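As an editorial illustration of how a client might consume affordances like those above, here is a hedged sketch using Python's stdlib HTML parser. The form's action URI is invented (the original leaves it as "..."), and the client locates the form by its class token rather than by position or a hard-coded URI:

```python
# Sketch: find the "weather" form in a representation and build the
# GET request from the form's own action and input names.
from html.parser import HTMLParser
from urllib.parse import urlencode

REPRESENTATION = """
<form class="weather" action="http://example.org/weather/lookup" method="get">
  <input type="text" name="zipcode" value="" />
</form>
"""

class FormFinder(HTMLParser):
    """Collect the action and input names of a form by class token."""
    def __init__(self, wanted_class):
        super().__init__()
        self.wanted = wanted_class
        self.in_form = False
        self.action = None
        self.fields = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form" and self.wanted in attrs.get("class", "").split():
            self.in_form, self.action = True, attrs.get("action")
        elif tag == "input" and self.in_form:
            self.fields.append(attrs.get("name"))

    def handle_endtag(self, tag):
        if tag == "form":
            self.in_form = False

finder = FormFinder("weather")
finder.feed(REPRESENTATION)
url = finder.action + "?" + urlencode({"zipcode": "45202"})
assert finder.fields == ["zipcode"]
assert url == "http://example.org/weather/lookup?zipcode=45202"
```

Because the client reads the action and field names out of the markup, the server can move the form's URI without breaking it, which is exactly the evolvability argument made earlier in the thread.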
I bet most people will understand this HTML-based "Hypermedia API" and I
bet most people can write both a client and server implementation for it. I
even bet the server and client implementations can be done independently,
on different platforms, at different times, etc. and still work together
just fine. I also bet this particular design would work for both H2M and
M2M implementations. And yes, all my other ramblings about the possible
evolvability (for H2M and M2M) of this design still apply.
Sure, this example is incomplete and trivial, but it has the basics for all
complete, non-trivial implementations.
I hope this gives you some ideas.
Mike
[1] https://groups.google.com/forum/#!forum/hypermedia-web
[2] http://stateless.co/hal_specification.html
[3] http://www.iana.org/assignments/link-relations/link-relations.xml
[4] http://microformats.org/wiki/index.php?title=rels&rcid=56790
[5] http://dublincore.org/documents/dces/
On Fri, Dec 2, 2011 at 15:14, Kevin Duffey <andjarnic@...> wrote:
>
>
> Mike,
>
> I think I get the gist of what you are saying, but I still struggle
> understanding the various aspects of writing a good rest api as well as
> consuming one. In your example of M2M, if I am a developer writing a web
> application with a UI front end that users visit/log in/etc, and I want to
> provide them with Facebook login to access the site.. am I now considered
> the M2M in this equation.. in that I will be writing code on my web app to
> interface with facebook api? What continually confuses me is the idea of
> trying to remain HATEOAS compliant as I write my own API, and consuming an
> API as a developer. What I mean is, as the API developer, I am trying to
> provide a HATEOAS compliant API.. one that returns response with links that
> MUST be followed by the consumer, and only those links. But as of now, I
> still have to provide documentation that explains to a developer the
> possible links that can be returned from each resource. For example, my API
> provides access control to a degree.. user and admin level resources. IF
> the request is being made to an admin resource, the auth user must be one
> that is authorized to use that resource. IF they are, the response has the
> resource link(s) that allow them to further do other things that a normal
> user can not. Without documenting what resources will result in a
> successful (or failed) attempt at an admin resource, the developer won't
> know what to scan for in the <links> elements I return and what they can do
> next. I have to document the returned links, the rel="" string value, and
> what each href resource pointer will allow them to do, so that the
> developer knows ahead of time and can make use of the resources as needed.
> To me, this is much like the facebook API.. I can't just go to
> facebook.com/api and from there magically know how to use whatever
> resources come back. I have to, as a developer providing my end users with
> the ablity to use facebook to log in, know what resources to call, what
> params to pass, etc. Am I wrong on this assumption? IF so, please enlighten
> me such that I might understand how this would not be needed.
>
> What confuses me about all this is the idea that we can write (and
> consume) evolveable APIs that we know nothing about. We simply need the
> entry URL and from there we just know what to do based on what is returned.
> Unless I am missing something, there is no standard set of link/rel values
> that work the same way for every API. Just because one rel="login" might
> indicate a resource to log in to, doesn't mean it won't do something else
> on another site. Like wise, any given API could return a variety of other
> rel="" values in the response links, or return entirely different element
> names and without some sort of documentation explaining all of this, I
> would not be able to consume it. I realize a HATEOAS API should be just
> like a web site..such that a web bot could traverse html <a> elements..
> likewise we return <link> elements allowing a bot to traverse it. What
> throws me there is.. some links may be POST only, or UPDATE only, some may
> support GET,POST, etc. A bot could be written in such a way to try every
> method type, see where it leads and crawl it's way through every link. As a
> developer using someone's API to provide my user's a GUI to use my site, I
> can't just go crawling through an API blindly and give my end users some
> sort of useful functionality from the API. I have to know exactly what
> resource to call (or how to navigate to it) and what it does. If I want to
> get the weather, I need to know how I pass my users location to the api,
> and what resource to call that supports me passing in the location and
> returns the weather for that location. Don't I?
>
> Thanks.
>
>
> --- On *Fri, 12/2/11, mike amundsen <mamund@...>* wrote:
>
>
> From: mike amundsen <mamund@...>
> Subject: Re: [rest-discuss] weighting and boundaries of evolvability and
> loose coupling?
> To: "Jakob Strauch" <jakob.strauch@...>
> Cc: rest-discuss@yahoogroups.com
> Date: Friday, December 2, 2011, 7:48 AM
>
>
>
>
> Jakob:
>
> i've been doing some work in this area (evolvability for hypermedia-based
> systems) and, while my experiments are still not completed, i can pass
> along some observations that might give you some ideas.
>
> first, IMO, you are correct to state that most all the "evolvability" is
> due to changes in the problem-domain. IOW, not the protocol (HTTP) and not
> the message formats (media types).
>
> since REST focuses on sharing understanding through response
> representations that contain hypermedia to advance application flow, the
> focus of evolvability is (in my work) on the media type and the response
> representations.
>
> the important task of writing hypermedia applications is mapping the
> problem domain details to elements in the media type. IOW, to evolve the
> system to match changes in the problem domain, you modify the
> representations and the hypermedia within those representations.
>
> so, with that as a basis...
>
> there are two different cases to consider:
> Human-driven user-agents (or Human-to-Machine - H2M) and,
> Machine-driven user-agents (or Machine-to-Machine - M2M).
hello mike.
just adding something here that might add an extra design dimension.
On 2011-12-02 07:48 , mike amundsen wrote:
> M2M evolvability for hypermedia
> in this case there is no human *directly* driving the interactions
> between client and server. the UA is a 'bot' and has only the "knowledge
> in the code" to work with. This knowledge has to be "put" there by some
> human, of course.
> for this scenario, the server has a much more limited set of
> evolvability options. severs can remove inputs, remove links/forms,
> and/or change the order of their appearance and still expect the system
> to "work properly." IOW, the server cannot add any new inputs, links, or
> forms and expect the 'bot' to "know" or "understand" these new elements.
well, that's not entirely true. media formats should be designed with
extensibility in mind, so that servers can add stuff without breaking
clients. and then there are two options:
- extensions are allowed and are ignored by definition. this allows
servers to add stuff without breaking clients. it does not allow servers
to make sure that old clients will understand that they shouldn't be
doing things the old way.
- extensions are allowed and there are switches that allow servers to
communicate whether an extension is mandatory. HTML (and thus option one
presented above) says "mustIgnore" implicitly for all extensions. media
types can define "mustIgnore" and/or "mustUnderstand" labels that
clients must interpret, so that an extension can be safely ignored by an
old client, or an old client knows that it should stop because there
is an extension in a representation that it does not understand, but
that is labeled "mustUnderstand".
this latter design allows more nuances in evolving media types, but of
course makes both the media type and the client implementation more complex.
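The two extension policies can be sketched as follows (an editorial illustration; the element shape and the "mustUnderstand" flag placement are invented for the sketch): a client processes what it knows, silently skips unknown-but-ignorable extensions, and stops with an error on an unknown extension labeled mandatory.

```python
# Hedged sketch of mustIgnore vs. mustUnderstand extension handling.

KNOWN = {"title", "link"}  # the elements this client was coded against

class MustUnderstandError(Exception):
    """Raised when a mandatory extension is not understood."""

def process(representation):
    handled = []
    for element in representation:
        if element["name"] in KNOWN:
            handled.append(element["name"])
        elif element.get("mustUnderstand"):
            # mandatory extension we don't know: stop, don't guess
            raise MustUnderstandError(element["name"])
        # else: unknown extension with mustIgnore semantics -- skip it
    return handled

doc = [{"name": "title"}, {"name": "sparkle"}]  # ignorable extension
assert process(doc) == ["title"]

doc2 = [{"name": "title"}, {"name": "payment", "mustUnderstand": True}]
try:
    process(doc2)
    assert False, "should have stopped"
except MustUnderstandError:
    pass  # the client fails fast instead of proceeding unsafely
```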
cheers,
dret.
--
erik wilde | mailto:dret@... - tel:+1-510-2061079 |
| UC Berkeley - School of Information (ISchool) |
| http://dret.net/netdret http://twitter.com/dret |
Erik:

I stated: "the server cannot add any new inputs, links, or forms and expect the 'bot' to "know" or "understand" these new elements."

Does your response ("well, that's not entirely true....") apply to my statement above?

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me

On Fri, Dec 2, 2011 at 20:26, Erik Wilde <dret@...> wrote:
> well, that's not entirely true. media formats should be designed with
> extensibility in mind, so that servers can add stuff without breaking
> clients.
hello mike.
On 2011-12-02 17:40 , mike amundsen wrote:
> I stated: "the server cannot add any new inputs, links, or forms and
> expect the 'bot' to "know" or "understand" these new elements."
> Does your response: "well, that's not entirely true...." apply to my
> statement above?
yes it does, on a meta level, but my main intent was definitely not to
say that you're wrong. if the media type is designed for it, the server
can communicate to the client "you must understand this extension to
proceed", or it can say "you can safely ignore this and proceed". that
is a level of understanding, but admittedly only in a very restricted
way. it's not understanding the semantics of the extension, but
understanding how it has to be handled as an extension.
cheers,
dret.
--
erik wilde | mailto:dret@... - tel:+1-510-2061079 |
| UC Berkeley - School of Information (ISchool) |
| http://dret.net/netdret http://twitter.com/dret |
Erik:

Ok, i think i understand your POV. you're saying that a media type designer can, for example, "bake in" a design element (which all clients/servers must support) that signals a "MustUnderstand" rule. Thus, an M2M client can recognize that a response contains new "MustUnderstand" information and, if that client doesn't "understand" it, can act appropriately (stop processing, etc.).

In the example above, the M2M client cannot "evolve" to process the new information, but _can_ tell anyone who cares to know that it has failed to do so. right?

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me

On Fri, Dec 2, 2011 at 20:52, Erik Wilde <dret@...> wrote:
> yes it does, on a meta level, but my main intent was definitely not to
> say that you're wrong.
hello mike.
On 2011-12-02 18:00 , mike amundsen wrote:
> In the example above, the M2M client cannot "evolve" to process the new
> information, but _can_ tell anyone who cares to know that it has failed
> to do so. right?
exactly. it might sound like a minor thing, but it's actually pretty
major if a client knows when it shouldn't proceed and can signal an
error condition, instead of blindly continuing down a path that's not a
safe route to go without understanding the new stuff. still, it's added
complication, and most generic media types seem to go the route of
baking in "mustIgnore" as the only possible semantics of how to handle
unknown extensions. HTML and Atom are two popular examples.
cheers,
dret.
--
erik wilde | mailto:dret@... - tel:+1-510-2061079 |
| UC Berkeley - School of Information (ISchool) |
| http://dret.net/netdret http://twitter.com/dret |
Erik:

ok, i'm getting you. thanks.

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me

On Fri, Dec 2, 2011 at 21:17, Erik Wilde <dret@...> wrote:
> exactly. it might sound like a minor thing, but it's actually pretty
> major if a client knows when it shouldn't proceed and can signal an
> error condition.
hello again...
On 2011-12-02 18:00 , mike amundsen wrote:
> Ok, i think i understand your POV. you're saying that a media type
> designer can, for example, "bake in" a design element (which all
> clients/servers must support) that signals a "MustUnderstand" rule.
as a corollary to what i just said: i was thinking about, let's say in
XML/XSD terms, a global attribute you can put on elements to signal
that. but oftentimes, a version attribute somewhere does this for all of
the representation, effectively preventing a client from proceeding if
it encounters an unknown version. the big disadvantage of
this "document-level attribute" is that it disallows the use of
*everything*, including old stuff that still might be safe to use for
the client. which is the reason why version attributes often are a bit
too disruptive in a loosely coupled scenario.
another approach to this would be to remove this from representation
design altogether and use relations to communicate extensions, something
that has been discussed by mark nottingham in his recent blog post
http://www.mnot.net/blog/2011/10/25/web_api_versioning_smackdown . you
could possibly extend his method to also qualify those links as being
mandatory or not. the link-based extension mark proposes is an
interesting approach, but doesn't work too well in cases where
extensions need to be put in certain places in existing representations
(think documents instead of data), instead of just being a bag of
additional data clients can get to, if they want to get it.
cheers,
dret.
--
erik wilde | mailto:dret@... - tel:+1-510-2061079 |
| UC Berkeley - School of Information (ISchool) |
| http://dret.net/netdret http://twitter.com/dret |
Erik:

to-date, my approach for handling modifying media type designs (over time) has been as follows:

- Extend (compatible w/ existing implementations)
  no changes to existing features (appearance, required/optional, processing, or meaning)
  all new features are optional
  * optionally add "schema" identifiers to show which extension(s) you are using

- Version (incompatible w/ existing implementations)
  can change existing features
  can add new required elements
  * required to use a new media type identifier

this has allowed me a great deal of flexibility and stability on projects that have evolved over several years.

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me

On Fri, Dec 2, 2011 at 21:29, Erik Wilde <dret@...> wrote:
> as a corollary to what i just said: i was thinking about, let's say in
> XML/XSD terms, a global attribute you can put on elements to signal
> that.
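From a client's point of view, the Extend/Version policy described above might look like this hedged sketch (the media type identifiers and the ";schema=" parameter convention are invented for illustration): compatible extensions keep the same media type identifier, so old clients proceed and ignore new optional features, while an incompatible version gets a new identifier that an old client refuses up front.

```python
# Sketch: gate processing on the media type identifier only; optional
# extension parameters never block an old client.

SUPPORTED = {"application/vnd.example.maze+xml"}  # what this client knows

def can_process(content_type):
    # strip parameters (e.g. ";schema=ext1") that signal optional extensions
    base = content_type.split(";")[0].strip()
    return base in SUPPORTED

assert can_process("application/vnd.example.maze+xml")
assert can_process("application/vnd.example.maze+xml; schema=ext1")
assert not can_process("application/vnd.example.maze2+xml")  # new version
```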
Hi,

thank you for all the good replies. It has helped my understanding. With regards to this reply, can you give an example of how you use (and document) extend and version attributes? I am trying to figure out, for example, what a response XML might look like and how a consumer might use either attribute.

Thanks

Sent from my ASUS Eee Pad

mike amundsen <mamund@yahoo.com> wrote:
>Erik:
>
>to-date, my approach for handling modifying media type designs (over time)
>has been as follows:
On Sat, Dec 3, 2011 at 2:17 AM, Erik Wilde <dret@...> wrote:
> hello mike.
>
> On 2011-12-02 18:00 , mike amundsen wrote:
>> In the example above, the M2M client cannot "evolve" to process the new
>> information, but _can_ tell anyone who cares to know that it has failed
>> to do so. right?
>
> exactly. it might sound like a minor thing,

No, it just sounds like an unrealistic expectation - the odds of developers
building machine clients that follow this advice, in practice, are quite
low. This means you are still going to have to find a safe way to deal with
disrespectful client behaviour anyway, at which point you've effectively
achieved nothing.

If you're using link relations there's a much easier way of dealing with
this: create a new relation and 'decommission' the old relation, i.e. old
clients won't find the relation they're looking for and will bomb out.

Cheers,
Mike
Kevin:
as an example, i'll riff on the "weather" design i posted earlier in this
thread.
Below is an "extension" of the weather media type design (i.e. adding this
will not break existing implementations). Note the new optional
HTML.INPUT@name="include-five-day-forecast" state transition element that
MAY appear in the HTML.FORM@class="weather" block, and the new
HTML.SPAN@class="five-day-forecast" element that MAY appear in the
HTML.P@class="current-weather" response.
<!-- this is the representation for current weather -->
<p class="current-weather">
<span class="zipcode" />
<span class="location-name" />
<span class="current-temp" />
<!-- new OPTIONAL element -->
<span class="five-day-forecast" />
</p>
<!-- this affordance allows clients to get weather reports based on zipcode
-->
<form class="weather" action="..." method="get">
<input type="text" name="zipcode" value="" />
<!-- new OPTIONAL element, defaults to "false" -->
<input type="checkbox" name="include-five-day-forecast" />
</form>
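As a side note on why this counts as a non-breaking extension: a client written against the original design simply never submits fields it does not know about. A minimal sketch (my own code, not from the post; only the field names come from the example above):

```python
from urllib.parse import urlencode

def build_query(user_input, known_fields=frozenset({"zipcode"})):
    """Submit only the fields this client understands; ignore everything else."""
    return urlencode({k: v for k, v in user_input.items() if k in known_fields})

# Even if this old client somehow receives the new OPTIONAL field, it drops
# it, and the server falls back to the documented default of "false":
print(build_query({"zipcode": "10011", "include-five-day-forecast": "true"}))
# -> zipcode=10011
```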
Now, here is a design alteration that is a "breaking change" - a new
"version" - of the weather media type design:
<!-- this affordance allows clients to get weather reports based on zipcode
-->
<form class="weather2" action="..." method="get">
<input type="text" name="zipcode" value="" />
<!-- new REQUIRED element -->
<select name="temperature-scale">
<option value="Fahrenheit">Fahrenheit</option>
<option value="Celsius">Celsius</option>
</select>
<!-- new OPTIONAL element, defaults to "false" -->
<input type="checkbox" name="include-five-day-forecast" />
</form>
Enforcing the Version Change
In this example, since I used an existing media type (text/html), changing
the media type identifier to enforce the version change is not a reasonable
option. Instead, servers that want to "force" this new version must change
the identifier for the transition ("weather" -> "weather2") - which is,
essentially, a *new* transition - and stop including the "old" transition
in responses. M2M clients will no longer be able to find (and
activate) the expected transition ("weather"), preventing them from
participating with the server (they are now "broken"). H2M clients will
likely be able to depend on the human driver to successfully handle this
evolution and will be able to continue talking with this server (they have
"evolved").
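To make the two outcomes concrete, here is a minimal sketch (my own code, not from the original post) of an M2M client that locates its hard-wired "weather" transition by @class. Against the extended design it still works; against the "weather2" version it fails to find the transition and can at least report that it is now broken:

```python
from html.parser import HTMLParser

class TransitionFinder(HTMLParser):
    """Collect the @class identifiers of all FORM affordances in a response."""
    def __init__(self):
        super().__init__()
        self.transitions = set()

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            cls = dict(attrs).get("class")
            if cls:
                self.transitions.add(cls)

def find_transition(html_doc, identifier):
    """True if the expected transition identifier appears in the representation."""
    finder = TransitionFinder()
    finder.feed(html_doc)
    return identifier in finder.transitions

extended = '<form class="weather" action="/w" method="get"></form>'    # extension
versioned = '<form class="weather2" action="/w" method="get"></form>'  # new version

assert find_transition(extended, "weather")       # extension: client unaffected
assert not find_transition(versioned, "weather")  # version: client is "broken"
```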
mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
On Sat, Dec 3, 2011 at 02:29, Andjarnic <andjarnic@...> wrote:
> Hi, thank you for all the good replies. It has helped my understanding.
> With regards to this reply, can you give an example of how you use (and
> document) extend and version attributes? I am trying to figure out for
> example what a response xml might look like and how a consumer might use
> either/or attribute. Thanks
>
> Sent from my ASUS Eee Pad
>
>
> mike amundsen <mamund@...> wrote:
>
>
>
> Erik:
>
> to-date, my approach for handling modifying media type designs (over time)
> has been as follows:
>
> - Extend (compatible w/ existing implementations)
> no changes to existing features (appearance, required/optional,
> processing, or meaning)
> all new features are optional
> * optionally add "schema" identifiers to show which extension(s) you are
> using
>
> -Version (incompatible w/ existing implementations)
> can change existing features
> can add new required elements
> * required to use new media type identifier
>
> this has allowed me a great deal of flexibility and stability on projects
> that have evolved over several years.
>
> mca
> http://amundsen.com/blog/
> http://twitter.com@mamund
> http://mamund.com/foaf.rdf#me
>
>
>
>
> On Fri, Dec 2, 2011 at 21:29, Erik Wilde <dret@...> wrote:
>
>> hello again...
>>
>>
>> On 2011-12-02 18:00 , mike amundsen wrote:
>>
>>> Ok, i think i understand your POV. you're saying that a media type
>>> designer can, for example, "bake in" a design element (which all
>>> clients/servers must support) that signals a "MustUnderstand" rule.
>>>
>>
>> as a corollary to what i just said: i was thinking about, let's say in
>> XML/XSD terms, a global attribute you can put on elements to signal that.
>> but oftentimes, a version attribute somewhere does this for all of the
>> representation, effectively disallowing a client to continue to proceed if
>> it encounters an unknown version. the big disadvantage of this
>> "document-level attribute" is that it disallows the use of *everything*,
>> including old stuff that still might be safe to use for the client. which
>> is the reason why version attributes often are a bit too disruptive in a
>> loosely coupled scenario.
>>
>> another approach to this would be to remove this from representation
>> design altogether and use relations to communicate extensions, something
>> that has been discussed by mark nottingham in his recent blog post
>> http://www.mnot.net/blog/2011/10/25/web_api_versioning_smackdown . you
>> could possibly extend his method to also qualify those links as being
>> mandatory or not. the link-based extension mark proposes is an interesting
>> approach, but doesn't work too well in cases where extensions need to be
>> put in certain places in existing representations (think documents instead
>> of data), instead of just being a bag of additional data clients can get
>> to, if they want to get it.
>>
>>
>> cheers,
>>
>> dret.
>>
>> --
>> erik wilde | mailto:dret@... - tel:+1-510-2061079 |
>> | UC Berkeley - School of Information (ISchool) |
>> | http://dret.net/netdret http://twitter.com/dret |
>>
>
>
>
Some thoughts along vaguely similar lines -

http://www.mnot.net/blog/2011/08/28/better_browser_caching

Cheers,

On 15/12/2011, at 7:16 AM, Mike Kelly wrote:

> Hi,
>
> Is anyone aware of any proposals that extend HTTP to allow servers and
> clients to negotiate client-side storage allocation for client-side
> (private) caches?
>
> Basically, I'm looking for a way for a server to indicate how much
> storage should be allocated for caching responses from a particular
> domain name, and possibly also for the client to be able to indicate
> how much allocation was actually possible.
>
> Aside from that, if you have any thoughts on whether or not this is
> really feasible or is just a plain bad idea - please let me know
>
> Thanks,
> Mike

--
Mark Nottingham
http://www.mnot.net/
I'm a little uncomfortable calling this "negotiation"; the model I have in
mind is that a site might request a larger allocation than the default, and
the UA would ask the user (or possibly, the user would pre-configure to
accept or deny). Wherever possible, though, the browser should probably use
a heuristic, to keep it simple (from a UX perspective).

Cheers,

On 15/12/2011, at 9:38 AM, Mike Kelly wrote:

> Nice one thanks Mark, +1 to all of that post
>
> What do you think about handling the negotiation via HTTP?
>
> Cheers,
> Mike
>
> On Wed, Dec 14, 2011 at 10:25 PM, Mark Nottingham <mnot@...> wrote:
>> Some thoughts along vaguely similar lines -
>>
>> http://www.mnot.net/blog/2011/08/28/better_browser_caching
>>
>> Cheers,
>> [...]

--
Mark Nottingham
http://www.mnot.net/
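For concreteness, the kind of hint being discussed might look something like the exchange below. To be clear: no such headers exist in HTTP or in any published proposal - the `Cache-Allocation` names and syntax here are entirely invented for illustration.

```
# Hypothetical only - these headers are invented for illustration.

GET /app.js HTTP/1.1
Host: example.org
Cache-Allocation: 50; units=MB            # client: "your origin may use up to 50 MB"

HTTP/1.1 200 OK
Cache-Control: max-age=31536000
Cache-Allocation-Request: 200; units=MB   # server: "I would like 200 MB"
```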
Hey everyone!

I just wanted to draw attention to the discussion here:
https://github.com/rails/rails/pull/505

While the pull request is originally from May, there's some discussion
about adding support for PATCH in Rails 3.2. If you read the discussion,
there's a lot of controversy over the semantics of PUT and partial updates,
discussion about if and when PATCH will become more than a draft standard,
and all kinds of fun stuff.

I thought this discussion might be relevant to this community.

- Steve
* * * * * * * * * * * * * * * * * * * * * * *
LAPIS 2012
Linked APIs for the Semantic Web
ESWC 2012 workshop
http://lapis2012.linkedservices.org/
* * * * * * * * * * * * * * * * * * * * * * *
"The Web as I envisioned it, we have not seen it yet."
– Tim Berners-Lee
"Services and the Semantic Web: it's complicated."
– anonymous gossip
"Semantic Web and APIs: an essential problem in need of a fresh look."
– LAPIS 2012 mission
=======================
Challenge for Papers – What do *you* have to say about Linked APIs and the Semantic Web?
=======================
LAPIS 2012 in 5 questions
-------------------------
Why? The Web has changed: services become resource-oriented APIs. We must react now.
Goal? Exploring the opportunities resource-oriented APIs offer, especially in combination with links.
For whom? Motivated researchers from the REST, Semantic Web, and Linked Data communities.
What? A truly interactive workshop, driven by constructive discussion and dialog.
Format? An inspiring day. Morning: talks and dialog. Afternoon: brainstorming and discussion.
LAPIS 2012 in 5 bullets
-----------------------
The main goal of the LAPIS workshop is to give birth to new ideas and visions, through presentations that encourage interaction and discussion. Topics of discussion include:
- defining Linked APIs, what they could look like, and what role links can play
- identifying the essential building blocks for enabling Linked APIs
- pinpointing challenges to move from resource-oriented APIs towards Linked APIs
- capturing added value of Linked APIs for the Semantic Web and REST communities
- designing applications by connecting Linked Data and Linked APIs for reading and writing
The above list is not exhaustive and we therefore actively encourage participants to be creative.
LAPIS 2012 wants your submission
--------------------------------
Regular paper (8 pages)
Regular papers focus on new ideas or technologies you have developed that relate to Linked APIs.
We are very open-minded towards the workshop scope, and expect the same from you.
Be original. Be creative. But most of all: be at least a little controversial – generate discussion.
We're not looking for the next Big Invention. Workshop participants want to discover and to learn.
Details: http://lapis2012.linkedservices.org/call-for-papers/
Vision paper (4 pages)
Vision papers focus on creative ideas and concepts, even if there are no concrete results yet.
Having more questions than answers can in fact be a plus… if you find the right questions.
Details: http://lapis2012.linkedservices.org/call-for-papers/
Wild ideas and discussion starters (1 paragraph)
Besides the traditional papers component, we would also like to run an experiment.
We want your wildest ideas and discussion topics to make LAPIS 2012 an interactive workshop.
Details: http://lapis2012.linkedservices.org/call-for-papers/
Motivated for this challenge?
-----------------------------
Great! Visit us at http://lapis2012.linkedservices.org/
Your deadline is March 4th, 2012.
LAPIS 2012 is organized by Craig Knoblock, Barry Norton, Ruben Verborgh, Sebastian Speiser, and Maria Maleshkova.
LAPIS 2012 is driven by you, its participants. Come and discuss with us!
http://lapis2012.linkedservices.org/
I created a question on Stack Overflow about this a while ago:

http://stackoverflow.com/questions/2364110/whats-the-justification-behind-disallowing-partial-put

I still don't really understand the benefit of not allowing PUT to be
partial, but Jan's answer got auto-accepted. Darrel, are you able to reopen
the question? :)

Cheers,
Mike

On Wed, Dec 14, 2011 at 10:40 PM, Steve Klabnik <steve@...> wrote:
> Hey everyone!
>
> I just wanted to draw attention to the discussion here:
> https://github.com/rails/rails/pull/505
> [...]
On Dec 15, 2011, at 1:42 AM, Mike Kelly wrote:

> I created a question on Stack Overflow about this a while ago:
>
> http://stackoverflow.com/questions/2364110/whats-the-justification-behind-disallowing-partial-put
>
> I still don't really understand the benefit of not allowing PUT to be
> partial,

So you are asking why PUT was defined as idempotent in the first place, yes?

I think the reason is sort of "because we can define it that way". There is
POST, which has no visibility (== POST is meaningless to an intermediary),
and everything could just be done with POST. But then, adding methods that
*have* visibility adds some serious capabilities to HTTP. E.g. GET's
semantics allow for caching, and it is also very helpful that we know that
GET is safe - we can call it any number of times.

You could do updates with POST, but full updates have the inherent property
of being idempotent, so it makes sense to define a method for that,
leveraging the idempotency to the protocol level. Same for DELETE. For
example, now a cache can mark copies it has of a response as stale upon a
corresponding response to a PUT. PUT's idempotency is also a win over just
using POST.

If the interactions you design match the semantics of PUT, use it (for its
added visibility); if they do not match, just use POST.

Redefining PUT to mean 'partial update without idempotency' is no win,
because 'partial update without idempotency' does not give an intermediary
any visibility. You could just use POST in the first place.

Does that help?

Jan

> but Jan's answer got auto-accepted. Darrel are you able to
> reopen the question? :)
>
> Cheers,
> Mike
> [...]
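Jan's distinction can be made concrete with a toy state model (my own sketch; the operation names are invented). A full replace is idempotent by construction, while a partial update whose meaning comes from the media type - here, an "increment" - need not be:

```python
def full_replace(state, new_state):
    """PUT semantics: the request entity completely replaces the stored state."""
    return dict(new_state)

def partial_increment(state, delta):
    """A partial update meaning 'add delta' - not idempotent."""
    updated = dict(state)
    updated["counter"] += delta
    return updated

resource = {"counter": 5, "label": "old"}

# Replaying a full replace leaves the state exactly as after the first try:
once = full_replace(resource, {"counter": 9, "label": "new"})
twice = full_replace(once, {"counter": 9, "label": "new"})
assert once == twice

# Replaying the increment-style partial update does not - an unaware
# intermediary retry silently changes the outcome:
first = partial_increment(resource, 1)
second = partial_increment(first, 1)
assert first != second
```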
On Thu, Dec 15, 2011 at 9:14 AM, Jan Algermissen <jan.algermissen@...> wrote:
> So you are asking, why PUT was defined as idempotent in the first place, yes?
>
> I think the reason is sort of "because we can define it that way". There is
> POST, which has no visibility (==POST is meaningless to an intermediary) and
> everything could just be done with POST. But then, adding methods that
> *have* visibility adds some serious capabilities to HTTP. E.g. GET's
> semantics allow for caching and it is also very helpful that we know that
> GET is safe - we can call it any number of times.

.. and PUT's 'complete replace' semantics allow for.. ?

> You could do updates with POST, but then, full updates have the inherent
> property of being idempotent so it makes sense to define a method for that,
> leveraging the idempotency to the protocol level. [...]
>
> If the interactions you design match the semantics of PUT, use it (for its
> added visibility), if they do not match, just use POST.

Again.. the point of the question is to try and fathom what the 'guaranteed
completeness' of PUT requests actually contributes in terms of visibility
on the web, i.e. what intermediary mechanisms rely on the completeness of a
PUT?

> Redefining PUT to mean 'partial update without idempotency' is no win
> because 'partial update without idempotency' does not give an intermediary
> any visibility. You could just use POST in the first place.
>
> Does that help?

No it doesn't. POST is a non-idempotent action; using it for an
_intentionally idempotent partial update_ complicates the interaction from
the client's perspective - what if the request fails?

Maybe a better way of approaching this might be for someone to demonstrate
what supposedly "non-compliant" implementations that allow partial PUT
(e.g. Rails) lose as a result of not following the supposed "worthwhile"
no-partials-allowed semantics of PUT?

Cheers,
Mike
On Dec 15, 2011, at 11:02 AM, Mike Kelly wrote:

> .. and PUT's 'complete replace' semantics allow for.. ?
>
> Again.. the point of the question is to try and fathom what the
> 'guaranteed completeness' of PUT requests actually contributes in
> terms of visibility on the web. i.e. what intermediary mechanisms rely
> on the completeness of a PUT?

Huh? No, it is the other way round: *partial* updates cannot be guaranteed
to be idempotent. It is the idempotency we want for the sake of greater
visibility (compared to POST). And we can only specify idempotent update
semantics if the updates are replaces.

(As to why partial updates cannot be guaranteed to be idempotent, see my
answer in the mentioned SO question.)

> No it doesn't. POST is a non-idempotent action; using it for an
> _intentionally idempotent partial update_

See - this is a contradiction. How would you prevent people from defining
media types that lead to non-idempotency in partial updates? Remember that
the method semantics must be orthogonal to the media type semantics.

Idempotent partial updates depend on the specific media type used, and
hence you cannot specify a method that alone has these idempotent partial
update semantics.

(This is also the reason why PATCH can never be idempotent.)

Jan

> complicates the interaction from the client's perspective - what if the
> request fails?
>
> Maybe a better way of approaching this might be for someone to demonstrate
> what supposed "non-compliant" implementations that allow partial PUT
> (e.g. Rails) lose as a result of not following the supposed "worthwhile"
> no-partials-allowed semantics of PUT?
>
> Cheers,
> Mike
What is it specifically, in practice, that non-compliant implementations
like Rails lose by 'doing it wrong' and allowing partial PUTs?

On Thu, Dec 15, 2011 at 10:36 AM, Jan Algermissen <jan.algermissen@...> wrote:
> Huh? No, it is the other way round: *partial* updates cannot be guaranteed
> to be idempotent. It is the idempotency we want for the sake of greater
> visibility (compared to POST). And we can only specify idempotent update
> semantics if the updates are replaces.
>
> See - this is a contradiction. How would you prevent people from defining
> media types that lead to non-idempotency in partial updates? Remember that
> the method semantics must be orthogonal to the media type semantics.
>
> Idempotent partial updates depend on the specific media type used and hence
> you cannot specify a method that alone has these idempotent partial update
> semantics.
>
> Jan
> [...]
On Dec 15, 2011, at 11:54 AM, Mike Kelly wrote:

> What is it specifically, in practice, that non-compliant
> implementations like Rails lose by 'doing it wrong' and allowing
> partial PUTs?

When a client (or intermediary, for that matter) re-does a PUT N times
(e.g. because it did not receive any response the first N-1 times due to
network problems), the result on the server might not be what the client
assumes it is, given the idempotent nature of PUT.

A client that is aware of the server's tunneling-partial-update-through-PUT
semantics might not redo the PUT, but any intermediary in between might
(because it would not be aware of the out-of-band knowledge - just like
Google Web Accelerator in the case of GET-to-delete-account).

What is the problem with just using POST for the partial update in the
first place? This is what POST is for.

Jan
The problem is, that using PUT for partials is a widely used anti pattern. this may sound weird, but maybe it is time to redefine (ease) the HTTP specs to match to the reality...? If major frameworks like e.g. rails use this anti pattern, how can any intermediary rely on the intended semantics of PUT nowadays? Jakob --- In rest-discuss@yahoogroups.com, Jan Algermissen <jan.algermissen@...> wrote: > > > On Dec 15, 2011, at 11:54 AM, Mike Kelly wrote: > > > What is it specifically, in practice, that non-compliant > > implementations like Rails lose by 'doing it wrong' and allowing > > partial PUTs? > > When a client (or intermediary, for that matter) re-does a PUT N-times (e.g. because it did not receive any response the first N-1 times due to network problems) the result on the server might not be what the client assumes it is, given the idempotent nature of PUT. > > A client that is aware of the server's tunneling-partial-update-through-PUT semantics might not redo the PUT but any intermediary in between might (because they would not be aware of the out-of-band knowledge / just like Google accelerator in the case of GET-to-delete-account). > > What is the problem of just using POST for the partial update in the first place? This is what POST is for. > > Jan > > > > > > > On Thu, Dec 15, 2011 at 10:36 AM, Jan Algermissen > > <jan.algermissen@...> wrote: > >> > >> On Dec 15, 2011, at 11:02 AM, Mike Kelly wrote: > >> > >>> On Thu, Dec 15, 2011 at 9:14 AM, Jan Algermissen > >>> <jan.algermissen@...> wrote: > >>>> > >>>> On Dec 15, 2011, at 1:42 AM, Mike Kelly wrote: > >>>> > >>>>> I created a question on Stack Overflow about this a while ago: > >>>>> > >>>>> http://stackoverflow.com/questions/2364110/whats-the-justification-behind-disallowing-partial-put > >>>>> > >>>>> I still don't really understand the benefit of not allowing PUT to be > >>>>> partial, > >>>> > >>>> So you are asking, why PUT was defined as idempotent in the first place, yes? 
> >>>> > >>>> I think the reason is sort of "because we can define it that way". There is POST, which has no visibility (==POST is meaningless to an intermediary) and everything could just be done with POST. But then, adding methods that *have* visibility adds some serious capabilities to HTTP. E.g. GET's semantics allow for caching and it is also very helpful that we know that GET is safe - we can call it any number of times. > >>>> > >>> > >>> .. and PUTs 'complete replace' semantics allow for.. ? > >>> > >>>> You could do updates with POST, but then, full updates have the inherent property of being idempotent so it makes sense to define a method for that, leveraging the idempotency to the protocol level. Same for DELETE. For example, no a cache can mark copies it has of a response as stale upon a corresponding response to a PUT. PUT's idempotency is also a win over just using POST. > >>>> > >>>> If the interactions you design match the semantics of PUT, use it (for its added visibility), if they do not match, just use POST. > >>> > >>> Again.. the point of the question is to try and fathom what the > >>> 'guaranteed completeness' of PUT requests actually contributes in > >>> terms of visibility on the web. i.e. what intermediary mechanisms rely > >>> on the completeness of a PUT? > >> > >> Huh? No, it is the other way round: *partial* updates cannot be guaranteed to be idempotent. It is the idempotency we want for the sake of greater visibility (compared to POST). And we can only specify idempotent update semantics if the updates are replaces. > >> > >> (As to why partial updates cannot be guaranteed to be idempotent, see my answer in the mentioned SO question). > >> > >> > >> > >> > >>> > >>>> Redefining PUT to mean 'partial update without idempotency' is no win because 'partial update without idempotency' does not give an intermediary any visibility. You could just use POST in the first place. > >>>> > >>>> Does that help? > >>>> > >>> > >>> No it doesn't. 
POST is a non-idempotent action; using it for an > >>> _intentionally idempotent partial update_ > >> > >> See - this is a contradiction. How would you prevent people to define media types that lead to non-idempotency in partial updates? Remember that the method semantics must be orthogonal to the media type semantics. > >> > >> Itempotent partial updates depend on the specific media type used and hence you cannot specify a method that alone has these idempotent partial update semantics. > >> > >> (This is also the reason why PATH cannot be idempotent, never). > >> > >> Jan > >> > >>> complicates the interaction > >>> from the client's perspective - what if the request fails? > >>> > >>> > >>> Maybe a better way of approaching this might be for someone to > >>> demonstrate what supposed "non-compliant" implementations that allow > >>> partial PUT (e.g. Rails) lose as a result of not following the suposed > >>> "worthwhile" no-partials-allowed semantics of PUT? > >>> > >>> Cheers, > >>> Mike > >> >
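Jan's retry argument above can be made concrete with a short Python sketch (the resource state, field names, and function names are illustrative, not from the thread) contrasting a replace-style PUT with an append-style partial update tunneled through PUT:

```python
def put_replace(resource, representation):
    # PUT per RFC 2616: the enclosed representation replaces the whole state.
    return dict(representation)

def put_append_tag(resource, tag):
    # Partial update tunneled through PUT: appends an item inside the state.
    updated = dict(resource)
    updated["tags"] = resource.get("tags", []) + [tag]
    return updated

state = {"name": "example", "tags": ["a"]}

# A client or intermediary may safely resend a replace-style PUT after a
# network error: N identical requests have the same effect as one.
once = put_replace(state, {"name": "example", "tags": ["a", "b"]})
twice = put_replace(once, {"name": "example", "tags": ["a", "b"]})
assert once == twice  # idempotent

# Resending the tunneled partial update corrupts the state:
once = put_append_tag(state, "b")
twice = put_append_tag(once, "b")
assert once != twice  # "b" appended twice -- the failure mode Jan describes
```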
Mostly because the whole web works that way. The fact that the guys that built Rails were implementing rest badly (and calling it restful routes or whatever) won't be enough leverage to change the whole web. Neither was it for people that did non-safe operations on GET. Eventually, people learn and use HTTP the way it's implemented out there and stop breaking things and everybody lives happily after that. As for "fixing" it, indeed there is nothing to be fixed, there's already POST and PATCH, the stuff that needs fixing is Rails. Seb ________________________________________ From: rest-discuss@yahoogroups.com [rest-discuss@yahoogroups.com] on behalf of Jakob Strauch [jakob.strauch@...] Sent: 15 December 2011 19:54 To: rest-discuss@yahoogroups.com Subject: [rest-discuss] Re: Rails 3.2 and PATCH The problem is that using PUT for partials is a widely used anti-pattern. This may sound weird, but maybe it is time to redefine (ease) the HTTP specs to match reality? If major frameworks like e.g. Rails use this anti-pattern, how can any intermediary rely on the intended semantics of PUT nowadays? Jakob --- In rest-discuss@yahoogroups.com, Jan Algermissen <jan.algermissen@...> wrote: > > > On Dec 15, 2011, at 11:54 AM, Mike Kelly wrote: > > > What is it specifically, in practice, that non-compliant > > implementations like Rails lose by 'doing it wrong' and allowing > > partial PUTs? > > When a client (or intermediary, for that matter) re-does a PUT N-times (e.g. because it did not receive any response the first N-1 times due to network problems) the result on the server might not be what the client assumes it is, given the idempotent nature of PUT. > > A client that is aware of the server's tunneling-partial-update-through-PUT semantics might not redo the PUT but any intermediary in between might (because they would not be aware of the out-of-band knowledge / just like Google accelerator in the case of GET-to-delete-account). 
> > What is the problem of just using POST for the partial update in the first place? This is what POST is for. > > Jan > > > > > > > On Thu, Dec 15, 2011 at 10:36 AM, Jan Algermissen > > <jan.algermissen@...> wrote: > >> > >> On Dec 15, 2011, at 11:02 AM, Mike Kelly wrote: > >> > >>> On Thu, Dec 15, 2011 at 9:14 AM, Jan Algermissen > >>> <jan.algermissen@...> wrote: > >>>> > >>>> On Dec 15, 2011, at 1:42 AM, Mike Kelly wrote: > >>>> > >>>>> I created a question on Stack Overflow about this a while ago: > >>>>> > >>>>> http://stackoverflow.com/questions/2364110/whats-the-justification-behind-disallowing-partial-put > >>>>> > >>>>> I still don't really understand the benefit of not allowing PUT to be > >>>>> partial, > >>>> > >>>> So you are asking, why PUT was defined as idempotent in the first place, yes? > >>>> > >>>> I think the reason is sort of "because we can define it that way". There is POST, which has no visibility (==POST is meaningless to an intermediary) and everything could just be done with POST. But then, adding methods that *have* visibility adds some serious capabilities to HTTP. E.g. GET's semantics allow for caching and it is also very helpful that we know that GET is safe - we can call it any number of times. > >>>> > >>> > >>> .. and PUTs 'complete replace' semantics allow for.. ? > >>> > >>>> You could do updates with POST, but then, full updates have the inherent property of being idempotent so it makes sense to define a method for that, leveraging the idempotency to the protocol level. Same for DELETE. For example, no a cache can mark copies it has of a response as stale upon a corresponding response to a PUT. PUT's idempotency is also a win over just using POST. > >>>> > >>>> If the interactions you design match the semantics of PUT, use it (for its added visibility), if they do not match, just use POST. > >>> > >>> Again.. 
the point of the question is to try and fathom what the > >>> 'guaranteed completeness' of PUT requests actually contributes in > >>> terms of visibility on the web. i.e. what intermediary mechanisms rely > >>> on the completeness of a PUT? > >> > >> Huh? No, it is the other way round: *partial* updates cannot be guaranteed to be idempotent. It is the idempotency we want for the sake of greater visibility (compared to POST). And we can only specify idempotent update semantics if the updates are replaces. > >> > >> (As to why partial updates cannot be guaranteed to be idempotent, see my answer in the mentioned SO question). > >> > >> > >> > >> > >>> > >>>> Redefining PUT to mean 'partial update without idempotency' is no win because 'partial update without idempotency' does not give an intermediary any visibility. You could just use POST in the first place. > >>>> > >>>> Does that help? > >>>> > >>> > >>> No it doesn't. POST is a non-idempotent action; using it for an > >>> _intentionally idempotent partial update_ > >> > >> See - this is a contradiction. How would you prevent people to define media types that lead to non-idempotency in partial updates? Remember that the method semantics must be orthogonal to the media type semantics. > >> > >> Itempotent partial updates depend on the specific media type used and hence you cannot specify a method that alone has these idempotent partial update semantics. > >> > >> (This is also the reason why PATH cannot be idempotent, never). > >> > >> Jan > >> > >>> complicates the interaction > >>> from the client's perspective - what if the request fails? > >>> > >>> > >>> Maybe a better way of approaching this might be for someone to > >>> demonstrate what supposed "non-compliant" implementations that allow > >>> partial PUT (e.g. Rails) lose as a result of not following the suposed > >>> "worthwhile" no-partials-allowed semantics of PUT? > >>> > >>> Cheers, > >>> Mike > >> > ------------------------------------ Yahoo! 
On 2011-12-14 23:40, Steve Klabnik wrote: > Hey everyone! > > I just wanted to draw attention to the discussion here: > https://github.com/rails/rails/pull/505 > > While the pull request is originally from May, there's some discussion > about adding support for PATCH in Rails 3.2. If you read the > discussion, there's a lot of controversy over the semantics of PUT and > partial updates, discussion about if and when PATCH will become more > than a draft standard, and all kinds of fun stuff. > ... PATCH will never become a "draft" standard, as the IETF just got rid of this standards level. PUT, defined in RFC 2616 and being revised in HTTPbis, will go back to "Proposed", soon, btw. (*) Best regards, Julian PS: because "Draft" is gone, and we can't go to "full standard" given the amount of changes we're making.
On Thu, Dec 15, 2011 at 16:16, Julian Reschke <julian.reschke@...> wrote: > On 2011-12-14 23:40, Steve Klabnik wrote: >> Hey everyone! >> >> I just wanted to draw attention to the discussion here: >> https://github.com/rails/rails/pull/505 >> >> While the pull request is originally from May, there's some discussion >> about adding support for PATCH in Rails 3.2. If you read the >> discussion, there's a lot of controversy over the semantics of PUT and >> partial updates, discussion about if and when PATCH will become more >> than a draft standard, and all kinds of fun stuff. >> ... > > PATCH will never become a "draft" standard, as the IETF just got rid of > this standards level. > > PUT, defined in RFC 2616 and being revised in HTTPbis, will go back to > "Proposed", soon, btw. (*) > > Best regards, Julian > > PS: because "Draft" is gone, and we can't go to "full standard" given > the amount of changes we're making. > RFOL
On Thu, Dec 15, 2011 at 8:36 PM, Sebastien Lambla <seb@...> wrote: > Mostly because the whole web works that way. Clearly, this is not true. Hence the topic of conversation. >The fact that the guys that built Rails were implementing rest badly (and calling it restful routes or whatever) With respect, that is complete nonsense. Rails' routing DSL is actually extremely flexible, and significantly more sophisticated than most alternatives. What exactly are you trying to do with HTTP that you can't do with Rails? > won't be enough of a leverage to change the whole web. neither was it for people that did non-safe operations on Get. That is not a sensible comparison: Using GET for non-safe operations is obviously wrong, as the benefit of GET being safe across the web is clear. On the other hand, the benefit to having guaranteed fullness of a PUT request across the web is _not at all clear_. > Eventually, people learn and use HTTP the way it's implemented out there and stop breaking things and everybody lives happily after that. So what is the _actual concern_ you have about partial PUT that's preventing you from living happily ever after? What is it that's actually being broken? > As for "fixing" it, indeed there is nothing to be fixed, there's already POST and PATCH, the stuff that needs fixing is rails. Neither of those fixes would actually work in practice - you can't just swap out partial PUT for POST or PATCH because neither of them are idempotent and therefore would require clients to remodel how they deal with failed requests.
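Mike's closing point about failure handling can be sketched: after a lost response, a client can blindly resend an idempotent request, whereas a non-idempotent one forces it to re-inspect server state first. The method classification follows RFC 2616; the recovery policy itself is an illustrative assumption, not code from the thread:

```python
# Methods RFC 2616 defines as idempotent (safe methods are also idempotent).
IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE"}

def recover_after_timeout(method):
    """Pick a recovery strategy for a request whose response was lost."""
    if method in IDEMPOTENT_METHODS:
        # N identical requests have the effect of one: just resend.
        return "retry"
    # POST/PATCH may have partially or fully taken effect; the client must
    # GET the resource and reconcile state before deciding to resend.
    return "inspect-then-reconcile"

assert recover_after_timeout("PUT") == "retry"
assert recover_after_timeout("PATCH") == "inspect-then-reconcile"
```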
On 2011-12-15 21:36, Sebastien Lambla wrote: > Mostly because the whole web works that way. The fact that the guys that > built Rails were implementing rest badly (and calling it restful routes > or whatever) won't be enough of a leverage to change the whole web. > neither was it for people that did non-safe operations on Get. > > Eventually, people learn and use HTTP the way it's implemented out there > and stop breaking things and everybody lives happily after that. > > As for "fixing" it, indeed there is nothing to be fixed, there's already > POST and PATCH, the stuff that needs fixing is rails. > ... That being said... 1) See <http://trac.tools.ietf.org/wg/httpbis/trac/ticket/267>. The new text in HTTPbis is: ".... Partial content updates are possible by targeting a separately identified resource with state that overlaps a portion of the larger resource, or by using a different method that has been specifically defined for partial updates (for example, the PATCH method defined in [RFC5789])." 2) Also: just replacing PUT with PATCH is not sufficient. The Content-Type of the PATCH requests describes the patch format, not the format being patched. Best regards, Julian
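The "separately identified resource" option in the HTTPbis text quoted above can be illustrated with a toy dispatcher (the URIs and state are hypothetical): a PUT aimed at a sub-resource such as /articles/42/title still replaces the *entire* state of that resource, so it updates part of the larger article while remaining idempotent.

```python
articles = {"42": {"title": "Old title", "body": "Lorem ipsum"}}

def put(uri, representation):
    # Toy routing: /articles/<id> replaces the whole article;
    # /articles/<id>/title replaces just the title sub-resource.
    parts = uri.strip("/").split("/")
    if len(parts) == 2:                            # e.g. /articles/42
        articles[parts[1]] = dict(representation)
    elif len(parts) == 3 and parts[2] == "title":  # e.g. /articles/42/title
        articles[parts[1]]["title"] = representation
    else:
        raise KeyError(uri)

put("/articles/42/title", "New title")
put("/articles/42/title", "New title")  # retrying is harmless: idempotent
assert articles["42"] == {"title": "New title", "body": "Lorem ipsum"}
```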
Separately identified resource being the keyword here. -- Sebastien Lambla On 15 Dec 2011, at 22:03, "Julian Reschke" <julian.reschke@...> wrote: > On 2011-12-15 21:36, Sebastien Lambla wrote: >> Mostly because the whole web works that way. The fact that the guys that >> built Rails were implementing rest badly (and calling it restful routes >> or whatever) won't be enough of a leverage to change the whole web. >> neither was it for people that did non-safe operations on Get. >> >> Eventually, people learn and use HTTP the way it's implemented out there >> and stop breaking things and everybody lives happily after that. >> >> As for "fixing" it, indeed there is nothing to be fixed, there's already >> POST and PATCH, the stuff that needs fixing is rails. >> ... > > > That being said... > > 1) See <http://trac.tools.ietf.org/wg/httpbis/trac/ticket/267>. The new text in HTTPbis is: > > ".... Partial content updates are possible by targeting a separately identified resource with state that overlaps a portion of the larger resource, or by using a different method that has been specifically defined for partial updates (for example, the PATCH method defined in [RFC5789])." > > 2) Also: just replacing PUT with PATCH is not sufficient. The Content-Type of the PATCH requests describes the patch format, not the format being patched. > > Best regards, Julian
I tend to not dig into conversations that involve the word nonsense. -- Sebastien Lambla On 15 Dec 2011, at 22:01, "Mike Kelly" <mike@....uk> wrote: > On Thu, Dec 15, 2011 at 8:36 PM, Sebastien Lambla <seb@serialseb.com> wrote: >> Mostly because the whole web works that way. > > Clearly, this is not true. Hence the topic of conversation. > >> The fact that the guys that built Rails were implementing rest badly (and calling it restful routes or whatever) > > With respect, that is complete nonsense. Rails' routing DSL is > actually extremely flexible, and significantly more sophisticated than > most alternatives. What exactly are you trying to do with HTTP that > you can't do with Rails? > >> won't be enough of a leverage to change the whole web. neither was it for people that did non-safe operations on Get. > > That is not a sensible comparison: > > Using GET for non-safe operations is obviously wrong, as the benefit > of GET being safe across the web is clear. On the other hand, the > benefit to having guaranteed fullness of a PUT request across the web > is _not at all clear_. > >> Eventually, people learn and use HTTP the way it's implemented out there and stop breaking things and everybody lives happily after that. > > So what is the _actual concern_ you have about partial PUT that's > preventing you from living happily ever after? What is it that's > actually being broken? > >> As for "fixing" it, indeed there is nothing to be fixed, there's already POST and PATCH, the stuff that needs fixing is rails. > > Neither of those fixes would actually work in practice - you can't > just swap out partial PUT for POST or PATCH because neither of them > are idempotent and therefore would require clients to remodel how they > deal with failed requests.
On Dec 16, 2011, at 12:37 AM, Sebastien Lambla wrote: > I tend to not dig into conversations that involve the word nonsense. Amen to that. The thing really is that there is no problem to be fixed. Jan > > -- > Sebastien Lambla > > On 15 Dec 2011, at 22:01, "Mike Kelly" <mike@....uk> wrote: > > > On Thu, Dec 15, 2011 at 8:36 PM, Sebastien Lambla <seb@serialseb.com> wrote: > >> Mostly because the whole web works that way. > > > > Clearly, this is not true. Hence the topic of conversation. > > > >> The fact that the guys that built Rails were implementing rest badly (and calling it restful routes or whatever) > > > > With respect, that is complete nonsense. Rails' routing DSL is > > actually extremely flexible, and significantly more sophisticated than > > most alternatives. What exactly are you trying to do with HTTP that > > you can't do with Rails? > > > >> won't be enough of a leverage to change the whole web. neither was it for people that did non-safe operations on Get. > > > > That is not a sensible comparison: > > > > Using GET for non-safe operations is obviously wrong, as the benefit > > of GET being safe across the web is clear. On the other hand, the > > benefit to having guaranteed fullness of a PUT request across the web > > is _not at all clear_. > > > >> Eventually, people learn and use HTTP the way it's implemented out there and stop breaking things and everybody lives happily after that. > > > > So what is the _actual concern_ you have about partial PUT that's > > preventing you from living happily ever after? What is it that's > > actually being broken? > > > >> As for "fixing" it, indeed there is nothing to be fixed, there's already POST and PATCH, the stuff that needs fixing is rails. > > > > Neither of those fixes would actually work in practice - you can't > > just swap out partial PUT for POST or PATCH because neither of them > > are idempotent and therefore would require clients to remodel how they > > deal with failed requests. >
I thought PATCH is already standardized!? See http://tools.ietf.org/html/rfc5789 The other thing that confuses me now is what advantages has the PATCH method compared to POST? Both are neither safe nor idempotent. In regard to POST the above RFC just says that "POST is already used but without broad interoperability (for one, there is no standard way to discover patch format support)." -- Markus Lanthaler @markuslanthaler
On 2011-12-15 19:21 , Markus Lanthaler wrote: > The other thing that confuses me now is what advantages has the PATCH method > compared to POST? Both are neither safe nor idempotent. In regard to POST > the above RFC just says that "POST is already used but without broad > interoperability (for one, there is no standard way to discover patch format > support). that's pretty much it, afaict. the PATCH media type is the diff format, and not the media type of the resources involved. so you could make explicit that you are, for example, using a specific XML diff format (if any of those ever bothered to register a media type...), and peers could negotiate which formats they expect and support. cheers, dret.
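Erik's distinction can be sketched: the Content-Type of a PATCH request names the *diff* format, not the resource's format. Below is a tiny applier for a JSON-Patch-style body (only the "replace" op, purely for illustration; the thread predates the JSON Patch RFC, so treat the format details as an assumption):

```python
import json

def apply_json_patch(document, patch_body):
    """Apply a JSON-Patch-style body: a JSON array of operations whose
    media type describes the diff, not the resource being patched."""
    doc = dict(document)
    for op in json.loads(patch_body):
        if op["op"] == "replace":
            doc[op["path"].lstrip("/")] = op["value"]
        else:
            raise NotImplementedError(op["op"])
    return doc

resource = {"title": "Old", "body": "unchanged"}
# The PATCH request would carry something like:
#   Content-Type: application/json-patch+json
patch = '[{"op": "replace", "path": "/title", "value": "New"}]'
assert apply_json_patch(resource, patch) == {"title": "New", "body": "unchanged"}
```

This is exactly the interoperability gap the RFC cites for POST: with PATCH, peers can name and negotiate the diff format; with POST, the meaning of the body is private to the application.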
On Dec 16, 2011, at 4:24 AM, Erik Wilde wrote: > On 2011-12-15 19:21 , Markus Lanthaler wrote: > > The other thing that confuses me now is what advantages has the PATCH method > > compared to POST? Both are neither safe nor idempotent. In regard to POST > > the above RFC just says that "POST is already used but without broad > > interoperability (for one, there is no standard way to discover patch format > > support). > > that's pretty much it, afaict. the PATCH media type is the diff format, > and not the media type of the resources involved. so you could make > explicit that you are, for example, using a specific XML diff format (if > any of those ever bothered to register a media type...), and peers could > negotiate which formats they expect and support. From the spec at http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.6 : "The fundamental difference between the POST and PUT requests is reflected in the different meaning of the Request-URI....". PATCH vs POST is similar, with e.g. the consequence that the additional visibility of PATCH allows caches to mark copies of a resource as stale if a PATCH to them succeeded. Jan > > cheers, > > dret. >
non·sense /ˈnänˌsens/ Noun:Words that make no sense. "The fact that the guys that built Rails were implementing rest badly" 1. That is not a fact. It is your opinion. 2. The phrase 'implementing rest' makes no sense. Please just let it go and get back to the point, I asked you several legitimate questions. On Thu, Dec 15, 2011 at 11:37 PM, Sebastien Lambla <seb@...> wrote: > I tend to not dig into conversations that involve the word nonsense. > > -- > Sebastien Lambla > > > On 15 Dec 2011, at 22:01, "Mike Kelly" <mike@...> wrote: > >> On Thu, Dec 15, 2011 at 8:36 PM, Sebastien Lambla <seb@...> wrote: >>> Mostly because the whole web works that way. >> >> Clearly, this is not true. Hence the topic of conversation. >> >>> The fact that the guys that built Rails were implementing rest badly (and calling it restful routes or whatever) >> >> With respect, that is complete nonsense. Rails' routing DSL is >> actually extremely flexible, and significantly more sophisticated than >> most alternatives. What exactly are you trying to do with HTTP that >> you can't do with Rails? >> >>> won't be enough of a leverage to change the whole web. neither was it for people that did non-safe operations on Get. >> >> That is not a sensible comparison: >> >> Using GET for non-safe operations is obviously wrong, as the benefit >> of GET being safe across the web is clear. On the other hand, the >> benefit to having guaranteed fullness of a PUT request across the web >> is _not at all clear_. >> >>> Eventually, people learn and use HTTP the way it's implemented out there and stop breaking things and everybody lives happily after that. >> >> So what is the _actual concern_ you have about partial PUT that's >> preventing you from living happily ever after? What is it that's >> actually being broken? >> >>> As for "fixing" it, indeed there is nothing to be fixed, there's already POST and PATCH, the stuff that needs fixing is rails. 
>> >> Neither of those fixes would actually work in practice - you can't >> just swap out partial PUT for POST or PATCH because neither of them >> are idempotent and therefore would require clients to remodel how they >> deal with failed requests.
On Fri, Dec 16, 2011 at 12:21 AM, Jan Algermissen <jan.algermissen@...> wrote: > > On Dec 16, 2011, at 12:37 AM, Sebastien Lambla wrote: > >> I tend to not dig into conversations that involve the word nonsense. > > Amen to that. ... [redacted]. > The thing really is that there is no problem to be fixed. Precisely. So let's just update the HTTP spec to reflect the reality that partial PUTs are used in practice and have caused no problems. Thanks, Mike
Mike Kelly <mike@...> wrote on 16 December 2011 at 10:02: > > > So let's just update the HTTP spec to reflect the reality that partial > PUTs are used in practice and have caused no problems. Again: There is no need to change any spec because the problem you are trying to solve simply does not exist. Just use POST (or PATCH if you want the cache invalidation benefit) for partial updates and PUT for complete updates. And just to repeat: Idempotent partial updates are impossible semantics for HTTP methods because the idempotency of partial updates depends on the media type used. Jan > > Thanks, > Mike
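Jan's claim that idempotency of a partial update hinges on the patch media type can be illustrated: a merge-style patch ("set these members to these values") is idempotent, while a delta-style patch ("add this amount") is not, so no method definition alone can promise idempotent partial updates. Both formats below are illustrative stand-ins:

```python
def apply_merge_patch(resource, patch):
    # Merge-style: each member is *set* to a value; reapplying changes nothing.
    updated = dict(resource)
    updated.update(patch)
    return updated

def apply_delta_patch(resource, deltas):
    # Delta-style: each member is *incremented*; reapplying compounds.
    updated = dict(resource)
    for key, amount in deltas.items():
        updated[key] = updated.get(key, 0) + amount
    return updated

account = {"balance": 100}

once = apply_merge_patch(account, {"balance": 150})
assert apply_merge_patch(once, {"balance": 150}) == once   # idempotent

once = apply_delta_patch(account, {"balance": 50})
assert apply_delta_patch(once, {"balance": 50}) != once    # not idempotent
```

Since both bodies could travel under the same request method, an intermediary cannot tell the idempotent patch from the non-idempotent one without knowing the media type.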
On 2011-12-16 00:35, Sebastien Lambla wrote: > Separately identified resource being the keyword here. "using a different method that has been specifically defined for partial updates" being the other. > -- > Sebastien Lambla > > On 15 Dec 2011, at 22:03, "Julian Reschke" <julian.reschke@... > <mailto:julian.reschke%40gmx.de>> wrote: > > > On 2011-12-15 21:36, Sebastien Lambla wrote: > >> Mostly because the whole web works that way. The fact that the guys that > >> built Rails were implementing rest badly (and calling it restful routes > >> or whatever) won't be enough of a leverage to change the whole web. > >> neither was it for people that did non-safe operations on Get. > >> > >> Eventually, people learn and use HTTP the way it's implemented out there > >> and stop breaking things and everybody lives happily after that. > >> > >> As for "fixing" it, indeed there is nothing to be fixed, there's already > >> POST and PATCH, the stuff that needs fixing is rails. > >> ... > > > > > > That being said... > > > > 1) See <http://trac.tools.ietf.org/wg/httpbis/trac/ticket/267>. The > new text in HTTPbis is: > > > > ".... Partial content updates are possible by targeting a separately > identified resource with state that overlaps a portion of the larger > resource, or by using a different method that has been specifically > defined for partial updates (for example, the PATCH method defined in > [RFC5789])." > > > > 2) Also: just replacing PUT with PATCH is not sufficient. The > Content-Type of the PATCH requests describes the patch format, not the > format being patched. > > > > Best regards, Julian > >
On 2011-12-16 04:21, Markus Lanthaler wrote: > I thought PATCH is already standardized!? > See http://tools.ietf.org/html/rfc5789 Yes. But the IETF has different standardization levels. > ... Best regards, Julian
Mike, With all due respect I find most of my conversations with you to usually lead to no positive outcome, so I'd rather not enter yet another of those debates. Plus Jan is already on it and I have little to add for now. -- Sebastien Lambla On 16 Dec 2011, at 08:40, "Mike Kelly" <mike@mykanjo.co.uk> wrote: > non·sense /ˈnänˌsens/ > Noun:Words that make no sense. > > "The fact that the guys that built Rails were implementing rest badly" > > 1. That is not a fact. It is your opinion. > 2. The phrase 'implementing rest' makes no sense. > > Please just let it go and get back to the point, I asked you several > legitimate questions. > > On Thu, Dec 15, 2011 at 11:37 PM, Sebastien Lambla <seb@serialseb.com> wrote: >> I tend to not dig into conversations that involve the word nonsense. >> >> -- >> Sebastien Lambla >> >> >> On 15 Dec 2011, at 22:01, "Mike Kelly" <mike@mykanjo.co.uk> wrote: >> >>> On Thu, Dec 15, 2011 at 8:36 PM, Sebastien Lambla <seb@serialseb.com> wrote: >>>> Mostly because the whole web works that way. >>> >>> Clearly, this is not true. Hence the topic of conversation. >>> >>>> The fact that the guys that built Rails were implementing rest badly (and calling it restful routes or whatever) >>> >>> With respect, that is complete nonsense. Rails' routing DSL is >>> actually extremely flexible, and significantly more sophisticated than >>> most alternatives. What exactly are you trying to do with HTTP that >>> you can't do with Rails? >>> >>>> won't be enough of a leverage to change the whole web. neither was it for people that did non-safe operations on Get. >>> >>> That is not a sensible comparison: >>> >>> Using GET for non-safe operations is obviously wrong, as the benefit >>> of GET being safe across the web is clear. On the other hand, the >>> benefit to having guaranteed fullness of a PUT request across the web >>> is _not at all clear_. 
>>> >>>> Eventually, people learn and use HTTP the way it's implemented out there and stop breaking things and everybody lives happily after that. >>> >>> So what is the _actual concern_ you have about partial PUT that's >>> preventing you from living happily ever after? What is it that's >>> actually being broken? >>> >>>> As for "fixing" it, indeed there is nothing to be fixed, there's already POST and PATCH, the stuff that needs fixing is rails. >>> >>> Neither of those fixes would actually work in practice - you can't >>> just swap out partial PUT for POST or PATCH because neither of them >>> are idempotent and therefore would require clients to remodel how they >>> deal with failed requests.
On Fri, Dec 16, 2011 at 9:30 AM, Sebastien Lambla <seb@...> wrote: > Mike, > > With all due respect I find most of my conversations with you to usually lead to no positive outcome, so I'd rather not enter yet another of those debates. Plus Jan is already on it and I have little to add for now. > That being the case, why did you bother announcing you were reluctant based on my use of "nonsense"? .. yet more nonsense. If you want 'positive outcomes', it's probably a good idea that you don't present derogatory opinion about other people's hard work as "fact". If you must do that and you get called on it then don't act surprised, and at least have the decency to either back it up or retract it. Jan was already 'on it' before, and that didn't seem to bother you too much - it looks a bit like you're just back-peddling. Either answer the questions addressed to you, or stop replying.
I have stopped replying, not due to my lack of answers (I was willing to provide more feedback on real-world implementation of PUT), but because of the general tone those conversations always end up taking, and this one is no different from the various ones that have been had before, all of them involving discussions around changing existing standards or webarch and none of them leading, to my knowledge, to any concrete proposed changes to those standards through the appropriate mediums. Either way, I'd suggest bringing your request for changes in the semantics of PUT on the HTTP-bis mailing list, which is the correct forum to discuss changes to the existing standard. For those not aware of it, see http://wiki.tools.ietf.org/wg/httpbis/trac/wiki#Participate Now I'll stop replying and go back to building things. Cheerios. Seb ________________________________________ From: Mike Kelly [mike@...] Sent: 16 December 2011 12:01 To: Sebastien Lambla Cc: Jakob Strauch; rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Re: Rails 3.2 and PATCH On Fri, Dec 16, 2011 at 9:30 AM, Sebastien Lambla <seb@...> wrote: > Mike, > > With all due respect I find most of my conversations with you to usually lead to no positive outcome, so I'd rather not enter yet another of those debates. Plus Jan is already on it and I have little to add for now. > That being the case, why did you bother announcing you were reluctant based on my use of "nonsense"? .. yet more nonsense. If you want 'positive outcomes', it's probably a good idea that you don't present derogatory opinion about other people's hard work as "fact". If you must do that and you get called on it then don't act surprised, and at least have the decency to either back it up or retract it. Jan was already 'on it' before, and that didn't seem to bother you too much - it looks a bit like you're just back-peddling. Either answer the questions addressed to you, or stop replying.
Jan Algermissen wrote:
>
> The thing really is that there is no problem to be fixed.
>

+1

Dumbing down PUT would leave no method in the protocol with discrete creation/replacement semantics. Meaning we'd have to invent another method with semantics identical to what PUT *used* to mean. From the thesis:

"REST constrains messages between components to be self-descriptive in order to support intermediate processing of interactions."

"HTTP is designed to extend the generic connector interface across a network connection."

Hmmm, generic interface. Meaning we have universal semantics like 'create', 'update', and 'patch' which need mapping to protocol methods. Create and update may be considered to be the same by virtue of their idempotency, but patch in the generic sense is not idempotent so it must need another mapping; thus, PUT and PATCH have always been "in HTTP" (according to its creators/editors).

"What makes HTTP significantly different from RPC is that the requests are directed to resources using a generic interface with standard semantics that can be interpreted by intermediaries almost as well as by the machines that originate services."

Surely that can't happen when methods are overloaded with multiple discrete semantics, i.e. update != patch. How is it self-descriptive, meaning how is it visible to intermediaries, when PUT is being used as update vs. when PUT is being used as patch? Seems the very idea defeats the entire purpose of REST as a style, and HTTP as a protocol allowing implementation of systems following that style (vs. being guided by the mob rule of ignorant implementations).

When the semantics of an interaction aren't standard and can't be modeled to be standard, POST is used. That isn't the case here, so to be RESTful, Rails should model the interaction correctly (by which I mean universally) using PATCH and constrain PUT to creation/replacement (as per HTTP), for the benefit of generic intermediaries.
> > What is the problem of just using POST for the partial update in the > first place? This is what POST is for. > Or, if for some reason PATCH isn't used, what's the problem with minting a URI for the sub-resource being updated and using PUT? Since this is easily done, I see no basis for eliminating self-descriptiveness from HTTP by borking PUT. Media types can't change the semantics of methods, but they're free to define their patching semantics when used as a delta for an appropriate method like PATCH. That way, intermediaries know when that media type is being used as replace vs. when that media type is being used as a delta, by definition of the protocol (visibility). Since this is easily done, I see no basis for allowing media types to invisibly change the semantics of PUT. Each intermediary participating in such conversations would need to be application-aware, precluding the generic intermediaries REST and HTTP explicitly cater to. Which is the problem with pointing to Rails as the basis for changing HTTP; in the name of REST (!) it solves a problem which has no basis in reality, by violating one of the four (core) uniform interface constraints -- self-descriptive messaging. -Eric
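[Editor's illustration, not part of the original thread.] Eric's alternative above -- mint a URI for the sub-resource being updated and PUT a complete replacement of *that* resource -- can be sketched as a tiny in-memory store. The store layout, URI scheme, and routing below are all invented for illustration; the point is that PUT keeps its replace semantics at every URI.

```python
# Editor's sketch: PUT always *replaces* the state of the resource the URI
# identifies; a "partial update" of the parent becomes a full replacement
# of a separately identified sub-resource. Names here are hypothetical.

store = {"/customer/23": {"name": "Jane Roe", "tier": "gold"}}

def put(uri, entity):
    """Replace the full state of the resource identified by uri."""
    parent, _, leaf = uri.rpartition("/")
    if parent in store:
        # uri names a sub-resource of a known parent: replace just that slot
        store[parent][leaf] = entity
    else:
        # uri names a top-level resource: replace (or create) it whole
        store[uri] = entity

put("/customer/23/name", "John Doe")  # replaces only the name sub-resource
```

Because each request is a full replacement of whatever its URI identifies, an intermediary retrying it is always safe, and no media-type knowledge is needed to know that.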
On Fri, Dec 16, 2011 at 1:15 PM, Eric J. Bowman <eric@...> wrote: > Jan Algermissen wrote: >> >> The thing really is that there is no problem to be fixed. >> > > +1 > > Dumbing down PUT would leave no method in the protocol with discrete > creation/replacement semantics. Meaning we'd have to invent another > method with semantics identical to what PUT *used* to mean. From the > thesis: > > "REST constrains messages between components to be self-descriptive in > order to support intermediate processing of interactions." You've suggested that we would have to invent another method, so the conclusion I draw here is that you believe discrete creation/replacement semantics play an important self-descriptive role. Why do you believe that? If the objective of self-descriptiveness is the facilitation of intermediate processing, and you are claiming a web-wide requirement for creation/replace semantics, then by inference we should have examples of standard web infrastructure (i.e. intermediary mechanisms/protocols) that depend on this exact semantic.. what are they? Cheers, Mike
I've seen a couple of assertions that "patching [or partial update] isn't
idempotent" in this discussion, but I think it would be helpful to provide
an explicit example. (I think I believe the assertion, but an example
would help me, too). Why isn't it idempotent?
If a partial update is *not* idempotent, then there's something here,
because an intermediary cannot automatically retry such a partial update.
The Apache HttpClient (which, while not technically an intermediary, is
nonetheless application-agnostic HTTP plumbing) can be configured to
automatically retry idempotent requests on network errors, and this
currently includes PUTs, by definition of RFC2616. So there *are*
implementations that depend on these semantics, and I doubt HttpClient is
the only one. I wonder what Squid, Varnish, or Apache with mod_proxy do
with failed PUTs, if anything.
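[Editor's illustration, not part of the original thread.] The retry behavior Jon describes -- application-agnostic plumbing that consults only the request method and retries on network errors only when that method is defined as idempotent -- can be sketched as follows. This is not Apache HttpClient's actual API; all names are invented.

```python
# Editor's sketch of an idempotency-aware retry policy. The intermediary
# knows nothing about the application; it can only look at the method.

# Methods RFC2616 defines as idempotent (POST and PATCH are absent).
IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}

def may_retry(method: str) -> bool:
    """Decide retry eligibility from the method alone."""
    return method.upper() in IDEMPOTENT_METHODS

def send_with_retry(method, attempt, max_retries=3):
    """Call attempt(); on ConnectionError, retry only idempotent methods."""
    retries = max_retries if may_retry(method) else 0
    for i in range(retries + 1):
        try:
            return attempt()
        except ConnectionError:
            if i == retries:
                raise
```

If PUT may silently mean a non-idempotent partial update, this kind of generic plumbing is exactly what breaks.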
In short, if partial updates are not idempotent, you can't/shouldn't use
PUT to do them: the RFC explicitly defines PUT as an idempotent operation,
and you would be breaking any RFC2616-compliant current or future
implementation that relies on this fact -- and I've just documented at
least one such implementation.
So can someone explain, with an example, why partial updates aren't
idempotent? I.e., show a partial update that produces a different result
when applied once vs. twice. Or, perhaps more pertinently, why the type of
partial update Rails wants to do with PUT is not idempotent (I can think
of a few different meanings for "partial update").
Jon
........
Jon Moore
Comcast Interactive Media
Jonathan Moore wrote: > I've seen a couple of assertions that "patching [or partial update] > isn't idempotent" in this discussion, but I think it would be helpful > to provide an explicit example. (I think I believe the assertion, > but an example would help me, too). Why isn't it idempotent? +1. In the SQL world, the standard has a concept of "transaction isolation", but that concept is built on top of a series of "phenomena" (e.g. Dirty Read) which demonstrate the utility of the isolation levels. PUT and PATCH and POST need something similar (preferably in the HTTP spec) if this debate is to ever reach a conclusion. Bob
On Dec 16, 2011, at 3:08 PM, Moore, Jonathan (CIM) wrote: > So can someone explain why partial updates aren't idempotent with an > example? The issue is not that they can't be idempotent. Sure they can be. The issue is that the question of whether they are idempotent depends entirely on the semantics of the media type used. And this means that we cannot standardize a method for partial updates with idempotent semantics, because the nature of a method cannot depend on the nature of the media type used. Idempotent partial update: PATCH /customer/23 Content-Type: application/customer-update <customer> <name>John Doe</name> <!-- assuming this means 'set the name to John Doe' --> </customer> This will have the same effect regardless of how often the request arrives and in what order. Non-idempotent example: PATCH /engine/3 Content-Type: application/engine-control <increment-speed value="10" unit="mph"/> If you had said (with the same partial-update intent) PUT /engine/3 Content-Type: application/engine-control <increment-speed value="10" unit="mph"/> and had an intermediary retrying this request 100 times .... good luck :-) There is no way for the intermediary to know whether the update is idempotent or not, and hence PUT or PATCH cannot be defined to be idempotent. ... that is, unless you say that PUT *replaces* resource state. Jan
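[Editor's illustration, not part of the original thread.] Jan's two hypothetical PATCH payloads can be modelled as functions over resource state. The media type names and field layouts below are assumptions drawn from his examples; the point they demonstrate is that idempotency lives in the payload semantics, not in the method.

```python
# Editor's sketch of Jan's two hypothetical media types.

def apply_customer_update(state, patch):
    """application/customer-update: 'set these fields' -- idempotent."""
    return {**state, **patch}

def apply_engine_control(state, increment_mph):
    """application/engine-control: 'increment speed' -- NOT idempotent."""
    return {**state, "speed_mph": state["speed_mph"] + increment_mph}

customer = {"id": 23, "name": "Jane Roe"}
once = apply_customer_update(customer, {"name": "John Doe"})
twice = apply_customer_update(once, {"name": "John Doe"})
# once == twice: a retrying intermediary does no harm here.

engine = {"id": 3, "speed_mph": 50}
e1 = apply_engine_control(engine, 10)
e2 = apply_engine_control(e1, 10)
# e1 != e2: a blind retry of this request changes the outcome.
```

An intermediary sees only the method, not these semantics, which is why a method that may carry either payload cannot be declared idempotent.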
On Dec 16, 2011, at 4:08 PM, Robert Brewer wrote: > In the SQL world, the standard has a concept of "transaction > isolation", but that concept is built on top of a series of "phenomena" > (e.g. Dirty Read) which demonstrate the utility of the isolation levels. > PUT and PATCH and POST need something similar (preferably in the HTTP > spec) if this debate is to ever reach a conclusion. <tongue-in-cheek>Hey, the HTTPbis effort is nearing completion[1], so if you want to sneak in a Transaction header[2] at the last minute, now's the time</tongue-in-cheek> :-) Jan [1] Kudos to that hard-working group, BTW. It is admirable how much they have clarified and improved; it makes HTTPbis <http://tools.ietf.org/wg/httpbis/> a recommended read for the holidays! [2] http://tech.groups.yahoo.com/group/rest-discuss/message/4150
On Fri, 16 Dec 2011 19:03:38, Jan Algermissen <jan.algermissen@...>
wrote:
> The issue is not that they can't be idempotent. Sure they can be. The
> issue is that the question whether they are idempotent depends entirely
> on the semantics of the media type used.
Ok, agreed. But if, as an application, I only used PUT for partial updates
that *were* idempotent, where's the problem? For example, if the
*particular* partial updates Rails would like to do with PUT are the
idempotent type, like your first example ("set the name to John Doe"),
what breaks?
> And this means that we cannot standardize a method for partial updates
> with idempotent semantics. Because the nature of a method cannot depend
> on the nature of the media type used.
I'll agree that we don't want the method definition to depend on a media
type. PUT is defined as an idempotent update or creation. There's no
requirement on what the server does with it, other than to "store" it. I
would argue that this definition includes an *idempotent* partial update.
> Idempotent partial update:
> PATCH /customer/23
> Content-Type: application/customer-update
>
> <customer>
> <name>John Doe</name> <!-- assuming this means 'set the name to John Doe' -->
> </customer>
I'd argue *this* request could be a PUT because it is idempotent.
> Non-Idempotent example:
>
> PATCH /engine/3
> Content-Type: application/engine-control
>
> <increment-speed value="10" unit="mph"/>
I agree this one should *not* be a PUT, because it is not idempotent.
> There is no way for the intermediary to know whether the update is
> idempotent or not and hence PUT or PATCH cannot be defined to be
> idempotent.
> ... that is, unless you say that PUT *replaces* resource state.
Right, so I'm saying if your partial update is idempotent, you can use a
PUT, because that means "idempotent" to an intermediary. If your partial
update is not idempotent, then use PATCH or POST (thereby signaling an
intermediary *not* to retry it).
So which type of partial update is Rails trying to provide? If the answer
is "both", then I agree PUT is not the right choice. If the answer is
"only idempotent ones", then I think it's not so clear.
Jon
........
Jon Moore
Comcast Interactive Media
On Dec 16, 2011, at 7:51 PM, Moore, Jonathan (CIM) wrote: > But if, as an application, I only used PUT for partial updates > that > *were* idempotent, where's the problem? The problem is that Gandalf's intermediary does not know anything 'bout *your* application. Jan
On Dec 16, 2011, at 7:51 PM, Moore, Jonathan (CIM) wrote: > I > would argue that this definition includes an *idempotent* partial update. Could you point to the part of the spec that leads you to this assumption? Jan
The only significant property of the request from its point of view is that it is idempotent, which it is. What's the problem? Cheers, Mike On Fri, Dec 16, 2011 at 7:44 PM, Jan Algermissen <jan.algermissen@...> wrote: > > On Dec 16, 2011, at 7:51 PM, Moore, Jonathan (CIM) wrote: > >> But if, as an application, I only used PUT for partial updates >> that >> *were* idempotent, where's the problem? > > The problem is that Gandalf's intermediary does not know anything 'bout *your* application. > > Jan >
On Dec 16, 2011, at 8:59 PM, Mike Kelly wrote: > The only significant property of the request from its point of view is > that it is idempotent, which it is. What's the problem? Now you are reversing the whole thing. The problem with this is that the server will treat it as a complete update because you use PUT. Jan > > Cheers, > Mike > > > On Fri, Dec 16, 2011 at 7:44 PM, Jan Algermissen > <jan.algermissen@...> wrote: >> >> On Dec 16, 2011, at 7:51 PM, Moore, Jonathan (CIM) wrote: >> >>> But if, as an application, I only used PUT for partial updates >>> that >>> *were* idempotent, where's the problem? >> >> The problem is that Gandalf's intermediary does not know anything 'bout *your* application. >> >> Jan >>
On 12/16/11 2:59 PM, "Jan Algermissen" <jan.algermissen@...> wrote: > >On Dec 16, 2011, at 7:51 PM, Moore, Jonathan (CIM) wrote: > >> I >> would argue that this definition includes an *idempotent* partial >>update. > >Could you point to the part of the spec that leads you to this assumption? > >Jan > Not exactly, because the spec does not specifically rule it out. I could ask the inverse question: where does the spec require that the update be non-partial? 9.6. PUT ...If the Request-URI refers to an already existing resource, the enclosed entity SHOULD be considered as a modified version of the one residing on the origin server. Why is "whatever is already there, with the following (idempotent) modification" not an acceptable description of a modified version? Jon ........ Jon Moore Comcast Interactive Media >
Aren't we talking about my own hypothetical application where the server is expecting partials? On Fri, Dec 16, 2011 at 8:03 PM, Jan Algermissen <jan.algermissen@...> wrote: > > On Dec 16, 2011, at 8:59 PM, Mike Kelly wrote: > >> The only significant property of the request from its point of view is >> that it is idempotent, which it is. What's the problem? > > Now you are reversing the whole thing. > > The problem with this is that the server will treat it as a complete update because you use PUT. > > Jan > > > >> >> Cheers, >> Mike >> >> >> On Fri, Dec 16, 2011 at 7:44 PM, Jan Algermissen >> <jan.algermissen@...> wrote: >>> >>> On Dec 16, 2011, at 7:51 PM, Moore, Jonathan (CIM) wrote: >>> >>>> But if, as an application, I only used PUT for partial updates >>>> that >>>> *were* idempotent, where's the problem? >>> >>> The problem is that Gandalf's intermediary does not know anything 'bout *your* application. >>> >>> Jan >>> >
On 12/16/11 3:03 PM, "Jan Algermissen" <jan.algermissen@...> wrote: >On Dec 16, 2011, at 8:59 PM, Mike Kelly wrote: > >> The only significant property of the request from its point of view is >> that it is idempotent, which it is. What's the problem? > >Now you are reversing the whole thing. > >The problem with this is that the server will treat it as a complete >update because you use PUT. Why can you assume this is how a server will treat it? From 9.6: "HTTP/1.1 does not define how a PUT method affects the state of an origin server." Jon ........ Jon Moore Comcast Interactive Media
On Dec 16, 2011, at 9:11 PM, Moore, Jonathan (CIM) wrote: > On 12/16/11 2:59 PM, "Jan Algermissen" <jan.algermissen@...> wrote: > >> >> On Dec 16, 2011, at 7:51 PM, Moore, Jonathan (CIM) wrote: >> >>> I >>> would argue that this definition includes an *idempotent* partial >>> update. >> >> Could you point to the part of the spec that leads you to this assumption? >> >> Jan >> > > Not exactly, because the spec does not specifically rule it out. The first sentence in 9.6. which you for whatever reason did not quote: "The PUT method requests that the enclosed entity be stored under the supplied Request-URI" These days, it is always helpful to check the forthcoming version of the spec: http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-17#section-6.6 (see last two paragraphs of 6.6) Jan > I could > ask the inverse question: where does the spec require that the update be > non-partial? > > 9.6. PUT > ...If the Request-URI refers to an already existing resource, the enclosed > entity SHOULD be considered as a modified version of the one residing on > the origin server. > > Why is "whatever is already there, with the following (idempotent) > modification" not an acceptable description of a modified version? > > Jon > ........ > Jon Moore > Comcast Interactive Media > > > >> >
On Dec 16, 2011, at 9:12 PM, Mike Kelly wrote: > Aren't we talking about my own hypothetical application where the > server is expecting partials? Now, you blew my head off, Mike. What you are now talking about is leveraging out of band knowledge and I am pretty sure *you* know that we are talking about RPC then, not REST. Jan > > On Fri, Dec 16, 2011 at 8:03 PM, Jan Algermissen > <jan.algermissen@...> wrote: > > > > On Dec 16, 2011, at 8:59 PM, Mike Kelly wrote: > > > >> The only significant property of the request from its point of view is > >> that it is idempotent, which it is. What's the problem? > > > > Now you are reversing the whole thing. > > > > The problem with this is that the server will treat it as a complete update because you use PUT. > > > > Jan > > > > > > > >> > >> Cheers, > >> Mike > >> > >> > >> On Fri, Dec 16, 2011 at 7:44 PM, Jan Algermissen > >> <jan.algermissen@...> wrote: > >>> > >>> On Dec 16, 2011, at 7:51 PM, Moore, Jonathan (CIM) wrote: > >>> > >>>> But if, as an application, I only used PUT for partial updates > >>>> that > >>>> *were* idempotent, where's the problem? > >>> > >>> The problem is that Gandalf's intermediary does not know anything 'bout *your* application. > >>> > >>> Jan > >>> > > >
On Dec 16, 2011, at 9:13 PM, Moore, Jonathan (CIM) wrote: > On 12/16/11 3:03 PM, "Jan Algermissen" <jan.algermissen@...> wrote: > >> On Dec 16, 2011, at 8:59 PM, Mike Kelly wrote: >> >>> The only significant property of the request from its point of view is >>> that it is idempotent, which it is. What's the problem? >> >> Now you are reversing the whole thing. >> >> The problem with this is that the server will treat it as a complete >> update because you use PUT. > > Why can you assume this is how a server will treat it? From 9.6: > "HTTP/1.1 does not define how a PUT method affects the state of an origin > server." This refers to the fact that the server can deal with the request any way it likes (e.g. enhance the stored representation). That is an implementation issue and deliberately hidden by the interface. (Separating interface and implementation) The meaning of PUT from the protocol POV (and this is what HTTP standardizes) is *replace* and not *partial update* (as HTTPbis happens to explicitly clarify). jan > > > Jon > ........ > Jon Moore > Comcast Interactive Media >
On 2011-12-16 21:11, Moore, Jonathan (CIM) wrote: > On 12/16/11 2:59 PM, "Jan Algermissen" <jan.algermissen@... > <mailto:jan.algermissen%40nordsc.com>> wrote: > > > > >On Dec 16, 2011, at 7:51 PM, Moore, Jonathan (CIM) wrote: > > > >> I > >> would argue that this definition includes an *idempotent* partial > >>update. > > > >Could you point to the part of the spec that leads you to this assumption? > > > >Jan > > > > Not exactly, because the spec does not specifically rule it out. I could > ask the inverse question: where does the spec require that the update be > non-partial? > > 9.6. PUT > ...If the Request-URI refers to an already existing resource, the enclosed > entity SHOULD be considered as a modified version of the one residing on > the origin server. > > Why is "whatever is already there, with the following (idempotent) > modification" not an acceptable description of a modified version? > ... Because the current state is not part of the "enclosed entity". Anyway. It's obvious that people have been confused by this; for instance, we had endless debates about this when discussing AtomPub. Back ~ 1 year ago, I opened the following HTTPbis issue: "There's a permathread about PUT-for-partial-update. We should clarify that PUT can be more than "store this payload verbatim", but that the final state should not depend on the previous state of the resource (except when by design; such when doing versioning...)" <http://trac.tools.ietf.org/wg/httpbis/trac/ticket/267>. Roy has addressed this 9 months ago by adding: "Partial content updates are possible by targeting a separately identified resource with state that overlaps a portion of the larger resource, or by using a different method that has been specifically defined for partial updates (for example, the PATCH method defined in [RFC5789])." (<http://trac.tools.ietf.org/wg/httpbis/trac/changeset/1158>) Hope this helps, Julian
It seems like the only sensible assertion anyone (i.e. intermediaries) on the web can make about a PUT request is that it is idempotent. Would it not be easier to just condense the definition to 'an idempotent update' and leave it at that? Why does HTTP need all this application-ish baggage - what does it achieve? Thanks, Mike On Fri, Dec 16, 2011 at 8:26 PM, Julian Reschke <julian.reschke@...> wrote: > On 2011-12-16 21:11, Moore, Jonathan (CIM) wrote: >> >> On 12/16/11 2:59 PM, "Jan Algermissen" <jan.algermissen@nordsc.com >> <mailto:jan.algermissen%40nordsc.com>> wrote: >> >> > >> >On Dec 16, 2011, at 7:51 PM, Moore, Jonathan (CIM) wrote: >> > >> >> I >> >> would argue that this definition includes an *idempotent* partial >> >>update. >> > >> >Could you point to the part of the spec that leads you to this >> assumption? >> > >> >Jan >> > >> >> Not exactly, because the spec does not specifically rule it out. I could >> ask the inverse question: where does the spec require that the update be >> non-partial? >> >> 9.6. PUT >> ...If the Request-URI refers to an already existing resource, the enclosed >> entity SHOULD be considered as a modified version of the one residing on >> the origin server. >> >> Why is "whatever is already there, with the following (idempotent) >> modification" not an acceptable description of a modified version? >> ... > > > Because the current state is not part of the "enclosed entity". > > Anyway. > > It's obvious that people have been confused by this; for instance, we had > endless debates about this when discussing AtomPub. > > Back ~ 1 year ago, I opened the following HTTPbis issue: > > "There's a permathread about PUT-for-partial-update. > > We should clarify that PUT can be more than "store this payload verbatim", > but that the final state should not depend on the previous state of the > resource (except when by design; such when doing versioning...)" > > <http://trac.tools.ietf.org/wg/httpbis/trac/ticket/267>.
On Dec 16, 2011, at 9:34 PM, Mike Kelly wrote: > Why does HTTP need all this application-ish baggage Since you ask - what 'baggage' specifically do you mean? Jan
On 2011-12-16 21:34, Mike Kelly wrote: > It seems like the only sensible assertion anyone (i.e intermediaries) > on the web can make about a PUT request is that it is idempotent. > > Would it not be easier to just condense the definition to 'an > idempotent update' and leave it at that? > Why does HTTP need all this application-ish baggage - what does it achieve? > ... "The Hypertext Transfer Protocol (HTTP) is an application-level protocol for distributed, collaborative, hypermedia information systems."
No intermediary mechanism (real-world or theoretical) relies on PUT requests being full-replace across the whole web. Making that part of the protocol, instead of allowing applications to determine this according to their requirements, is not only redundant semantic bloat; it's unnecessarily restrictive. On Fri, Dec 16, 2011 at 8:37 PM, Jan Algermissen <jan.algermissen@...> wrote: > > On Dec 16, 2011, at 9:34 PM, Mike Kelly wrote: > >> Why does HTTP need all this application-ish baggage > > Since you ask - what 'baggage' specifically do you mean? > > Jan >
Sorry, by application-ish I meant like AtomPub or whatever, not layer 7. I'll rephrase the question: What is the systemic benefit to the web of having PUT defined further than just 'a non-safe idempotent update'? Cheers, Mike On Fri, Dec 16, 2011 at 8:43 PM, Julian Reschke <julian.reschke@...> wrote: > On 2011-12-16 21:34, Mike Kelly wrote: >> >> It seems like the only sensible assertion anyone (i.e intermediaries) >> on the web can make about a PUT request is that it is idempotent. >> >> Would it not be easier to just condense the definition to 'an >> idempotent update' and leave it at that? >> Why does HTTP need all this application-ish baggage - what does it >> achieve? >> ... > > > "The Hypertext Transfer Protocol (HTTP) is an application-level protocol for > distributed, collaborative, hypermedia information systems." >
Jan Algermissen wrote: > The first sentence in 9.6, which you for whatever reason did not quote: > > "The PUT method requests that the enclosed entity be stored under the > supplied Request-URI" I didn't quote it, because I don't think "stored under the supplied URI" is well-defined enough to rule out my interpretation. And, in fact, as you point out, httpbis spends a lot more text than RFC2616 does in describing what PUT means, from which I will draw the conclusion that PUT is either underspecified or underexplained in RFC2616, hence the confusion. > These days, it is always helpful to check the forthcoming version of the > spec: > > http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-17#section-6.6 > > (see last two paragraphs of 6.6) Fair enough, considering httpbis has as a charter "saying what we REALLY meant to say when we wrote RFC2616". ;) I agree partial updates are ruled out by the httpbis description of PUT. Jon
"Moore, Jonathan (CIM)" wrote: > > Right, so I'm saying if your partial update is idempotent, you can > use a PUT, because that means "idempotent" to an intermediary. If > your partial update is not idempotent, then use PATCH or POST > (thereby signaling an intermediary *not* to retry it). > PUT is idempotent because it's been mapped to replace, which is always expected to be idempotent. So it doesn't mean 'idempotent' to intermediaries per se, it means 'replace' which just so happens to be reliably idempotent. HTTP is based on transaction semantics, not transaction characteristics, so it is an error to choose a method based on characteristic matching rather than generic transaction semantics. It's still overloading PUT to make it also mean 'idempotent partial update' because it maps to a semantic, replace, not idempotency itself. Changing PUT to mean update for some resources and partial update for others results in a resource-specific interface, not a generic one. PATCH is mapped to 'partial update' semantics. Because this interaction cannot be interpreted to always, reliably be idempotent, even if some individual PATCH transactions are, the method is not treated as idempotent. REST is a generic interface, not an object-specific one, so it isn't concerned with optimizing for the 0.01% of traffic which amounts to idempotent partial-updates. REST is concerned with making the semantics of the transaction unambiguous, i.e. self-descriptive. -Eric
It just struck me that this isn't a REST question per se--it's an HTTP question, right? The system is no more or less RESTful based on whether PUT is idempotent or not or whether its semantics include partial updates or not. We're really discussing what HTTP (not REST) defines the semantics of PUT to mean. Would folks agree? Just wondering if we should move the discussion to HTTP-WG instead... Jon ........ Jon Moore On Dec 16, 2011, at 5:34 PM, "Eric J. Bowman" <eric@...> wrote: > "Moore, Jonathan (CIM)" wrote: >> >> Right, so I'm saying if your partial update is idempotent, you can >> use a PUT, because that means "idempotent" to an intermediary. If >> your partial update is not idempotent, then use PATCH or POST >> (thereby signaling an intermediary *not* to retry it). >> > > PUT is idempotent because it's been mapped to replace, which is always > expected to be idempotent. So it doesn't mean 'idempotent' to > intermediaries per se, it means 'replace' which just so happens to be > reliably idempotent. HTTP is based on transaction semantics, not > transaction characteristics, so it is an error to choose a method based > on characteristic matching rather than generic transaction semantics. > > It's still overloading PUT to make it also mean 'idempotent partial > update' because it maps to a semantic, replace, not idempotency itself. > Changing PUT to mean update for some resources and partial update for > others results in a resource-specific interface, not a generic one. > > PATCH is mapped to 'partial update' semantics. Because this interaction > cannot be interpreted to always, reliably be idempotent, even if some > individual PATCH transactions are, the method is not treated as > idempotent. REST is a generic interface, not an object-specific one, > so it isn't concerned with optimizing for the 0.01% of traffic which > amounts to idempotent partial-updates. REST is concerned with making > the semantics of the transaction unambiguous, i.e. self-descriptive. 
Mike Kelly wrote: > > You've suggested that we would have to invent another method, so the > conclusion I draw here is that you believe discrete > creation/replacement semantics play an important self-descriptive > role. > Within 23 minutes of my message being sent, you've already replied. Since you obviously barely took the time to read it, let alone contemplate it, I wouldn't trust your conclusions about what I wrote. You seem to be more interested in flaming people who disagree with you through the use of strawman arguments (rather than reference to Roy's thesis or other subject material to support your arguments), or redefining everything to suit your ideas, than you are in learning REST. > > Why do you believe that? > Because I believe REST is based on the principle of generality. There exist generic client-server interactions, which may be found to overlap in any number of Internet protocols: create, update, replace, delete, retrieve, patch, copy, move, lock, unlock and more. Self-descriptive messaging is based on unambiguously defining the semantics of the transaction via standard methods, not mapping methods to characteristics shared between subsets of these generic transaction semantics -- what's standardized about that?!? If I just know a method maps to 'idempotency', how do I know if it's a partial or a full update? On the principle of generality, on what existing protocols are you basing the definition of methods as transaction characteristics? HTTP re-uses generic methods common (if named differently) across myriad protocols -- hence the ability to interface with other protocols, which is lost if the definition of PUT is based on the characteristic of idempotency and no longer resembles the creation/update semantics assigned to PUT (or similar) by other protocols.
> > If the objective of self-descriptiveness is the facilitation of > intermediate processing, and you are claiming a web-wide requirement > for creation/replace semantics, then by inference we should have > examples of standard web infrastructure (i.e. intermediary > mechanisms/protocols) that depend on this exact semantic.. what are > they? > I could never have predicted half of the intermediary optimizations applied to GET; why would I attempt to do so for PUT, which accounts for an infinitesimal amount of traffic by comparison? The lack of motivation to optimize PUT is no reason to change its meaning. Instead of strawman arguments about objectives, I suggest you re-read Roy's thesis for the rationale behind self-descriptiveness. You conveniently leave out compatibility with other protocols when suggesting the entire paradigm of HTTP be changed such that methods are no longer mapped to transaction semantics, but to shared characteristics of these transactions -- so I believe the burden is on you to falsify Roy's thesis, not on me to account for intermediary optimizations of PUT which nobody has thought of, or more likely, that I'm simply not aware of (like with GET). -Eric
"Moore, Jonathan (CIM)" wrote: > > We're really discussing what HTTP (not REST) defines the semantics of > PUT to mean. > Strongly disagree due to principal of generality. You're proposing a change to HTTP which has no basis in other protocols; REST tells us why HTTP's methods so strongly resemble the methods found in protocols HTTP is based on and interoperates with. GET and PUT in HTTP identify the same generic transaction semantics identified by GET and PUT in FTP. -Eric
"Eric J. Bowman" wrote: > > Strongly disagree due to principal of generality. > Principle, that is... -Eric
"Moore, Jonathan (CIM)" wrote: > > The system is no more or less RESTful based on whether PUT is > idempotent or not or whether its semantics include partial updates or > not. > It's much less RESTful if messaging is no longer self-descriptive of sender intent. The sender's intent is not "idempotency", it's either full or partial update -- not both, so a method can't mean both. -Eric
On Fri, Dec 16, 2011 at 11:22 PM, Eric J. Bowman <eric@...> wrote: > "Moore, Jonathan (CIM)" wrote: >> >> The system is no more or less RESTful based on whether PUT is >> idempotent or not or whether its semantics include partial updates or >> not. >> > > It's much less RESTful if messaging is no longer self-descriptive of > sender intent. The sender's intent is not "idempotency", it's either > full or partial update -- not both, so a method can't mean both. > The method itself can't imply both, but it can imply neither. If partials are permitted, a full update is equivalent to a partial with no omission. If preventing partials is a requirement for a given application, then that application can define its own semantics specifying the transition rules. The application in question provides the necessary shared understanding for the client and server to understand the sender intent. Granted, this leaves out intermediaries. Is it necessary for intermediaries to be able to distinguish between whether or not a PUT is full or partial? Cheers, Mike
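[Editorial note: a minimal sketch, not from the thread, of the equivalence Mike asserts above. Resource state is modeled as a plain dict and all names are invented for illustration.]

```python
# Under merge ("partials permitted") semantics, a PUT that omits nothing
# is indistinguishable from a full replacement.

def put_merge(state, body):
    """Apply a PUT under partial-update (merge) semantics."""
    new_state = dict(state)
    new_state.update(body)
    return new_state

def put_replace(state, body):
    """Apply a PUT under strict full-replacement semantics."""
    return dict(body)

dog = {"status": "happy", "favoriteToy": "A red muppet"}

# A partial PUT only touches the fields it carries:
assert put_merge(dog, {"status": "sleepy"}) == {
    "status": "sleepy", "favoriteToy": "A red muppet"}

# A "partial" PUT that carries every field equals a full replacement:
full_body = {"status": "sleepy", "favoriteToy": "A blue ball"}
assert put_merge(dog, full_body) == put_replace(dog, full_body)
```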
Eric J. Bowman wrote: > It's much less RESTful if messaging is no longer self-descriptive of > sender intent. I agree with you here. PUT needs to have clear, standard semantics or the message is no longer self-descriptive. > The sender's intent is not "idempotency", it's either > full or partial update -- not both, so a method can't mean both. I disagree. POST can mean these, for example, and lots of other things too. Roy has written that GET and POST are sufficient methods for a RESTful system (cf. the browser-based HTML web)[1]. What's wrong with a hypothetical method whose semantics are "an idempotent partial update" (where a partial update that happens to describe everything can be used for a full update)? We've established that PUT (as defined in httpbis) is not that method. REST just requires that the interface be uniform--it doesn't actually say which methods need to be in there. If I defined a protocol that was identical to HTTP but with another method called IDEMPOTENTFULLORPARTIALUPDATE with those semantics, it wouldn't be less RESTful than HTTP. I think the crux of the argument here is more like (with apologies to the Princess Bride): "I do not think PUT means what you think it means." In fact, to quote Roy's blog (citation below): "Specific method definitions (aside from the retrieval:resource duality of GET) simply don’t matter to the REST architectural style, so it is difficult to have a style discussion about them. The only thing REST requires of methods is that they be uniformly defined for all resources (i.e., so that intermediaries don’t have to know the resource type in order to understand the meaning of the request). As long as the method is being used according to its own definition, REST doesn’t have much to say about it." Jon [1] http://roy.gbiv.com/untangled/2009/it-is-okay-to-use-post
Eric J. Bowman wrote: > "Moore, Jonathan (CIM)" wrote: > > > > We're really discussing what HTTP (not REST) defines the semantics of > > PUT to mean. > > Strongly disagree due to principal of generality. You're proposing a > change to HTTP which has no basis in other protocols; REST tells us why > HTTP's methods so strongly resemble the methods found in protocols HTTP > is based on and interoperates with. GET and PUT in HTTP identify the > same generic transaction semantics identified by GET and PUT in FTP. Just to be clear: I'm not proposing this change. I don't even really know Ruby. :) I'm not advocating changing the HTTP spec, just trying to understand what it means, and whether the proposed Rails usage is compatible with that or not. I've already agreed that PUT (as defined by httpbis) does not include partial updates, although RFC2616 doesn't make this sufficiently clear (as evidenced by the need to elaborate it much more in httpbis), and it's the only published standard. So I am just someone interested in understanding the issue and what's going on. I come down on the side of Rails probably ought to be HTTP-compliant, and that we shouldn't change the HTTP spec just for Rails. I just didn't know if partial updates were HTTP-compliant or not (until this thread). That said, REST just requires that HTTP have a uniform definition for its methods, not that it has any particular methods (other than GET). So I maintain that this is an HTTP discussion about what PUT means, and that the outcome of that discussion doesn't change the fact that the web architecture of HTTP is RESTful. See Roy on this (as I mentioned just now on another branch of this thread): http://roy.gbiv.com/untangled/2009/it-is-okay-to-use-post Jon
On Fri, Dec 16, 2011 at 11:22 PM, Eric J. Bowman <eric@...> wrote: > > The sender's intent is not "idempotency" > afaict, that seems to fit in ok: GET: "apply this safe idempotent request" DELETE: "apply this non-safe idempotent delete request" PUT: "apply this non-safe idempotent update request" POST: "apply this non-safe non-idempotent request" Cheers, Mike
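[Editorial note: the method properties Mike lists can be tabulated as a generic intermediary might record them. The table and helper below are an invented sketch, not part of any HTTP implementation.]

```python
# Method visibility as safe/idempotent flags. An intermediary that knows
# only these properties can, e.g., re-issue idempotent requests on failure.

PROPERTIES = {
    "GET":    {"safe": True,  "idempotent": True},
    "DELETE": {"safe": False, "idempotent": True},
    "PUT":    {"safe": False, "idempotent": True},
    "POST":   {"safe": False, "idempotent": False},
}

def may_retry(method):
    """Idempotent requests can be repeated without changing the outcome."""
    return PROPERTIES.get(method, {}).get("idempotent", False)

assert may_retry("PUT")
assert not may_retry("POST")
```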
I'm not sure I understand what you mean. Could you please elaborate a bit on this? Thanks, Markus > -----Original Message----- > From: Julian Reschke [mailto:julian.reschke@...] > Sent: Friday, December 16, 2011 5:16 PM > To: Markus Lanthaler > Cc: rest-discuss@yahoogroups.com > Subject: Re: [rest-discuss] Rails 3.2 and PATCH > > On 2011-12-16 04:21, Markus Lanthaler wrote: > > I thought PATCH is already standardized!? > > See http://tools.ietf.org/html/rfc5789 > > Yes. But the IETF has different standardization levels. > > > ... > > Best regards, Julian
On 2011-12-17 12:55, Markus Lanthaler wrote: > I'm not sure I understand what you mean. Could you please elaborate a bit on > this? > ... Not every RFC is on the IETF Standards Track, and RFCs which are on the Standards Track can be at different maturity levels (for a long time 3, now 2). Some people argued that PATCH is somehow "less" standardized than PUT because it's currently a "Proposed" standard only. Of course that's BS; most of the internet runs on proposed standards. Best regards, Julian
On Dec 16, 2011, at 9:54 PM, Mike Kelly wrote: > What is the systemic benefit to the web of having PUT defined further > than just 'a non-safe idempotent update'? To me, HTTP defines PUT just like that. The rest of the spec for PUT is explanatory, e.g. regarding cache behavior or clarifying that PUT on a non-existing resource is a create (and hence should yield a 201 response). So again, what specifically feels like 'baggage' to you in HTTP1.1 section 9.6? Jan
On Sat, Dec 17, 2011 at 2:50 PM, Jan Algermissen <jan.algermissen@...> wrote: > > On Dec 16, 2011, at 9:54 PM, Mike Kelly wrote: > >> What is the systemic benefit to the web of having PUT defined further >> than just 'a non-safe idempotent update'? > > To me, HTTP defines PUT just like that. The rest of the spec for PUT is explanatory, e.g. regarding cache behavior or clarifying that PUT on an non-existing resource is a create (and hence should yield a 201 response). > > So again, what specifically does feel like 'baggage' to you in HTTP1.1 section 9.6? The part that goes further than that and, apparently, specifies the request should always be considered a full replacement. Cheers, Mike
"Moore, Jonathan (CIM)" wrote: > > I just didn't know if partial updates were HTTP-compliant or not > (until this thread). > They are with PATCH, or by making them sub-resources and using PUT, I hope is your takeaway... > > > The sender's intent is not "idempotency", it's either > > full or partial update -- not both, so a method can't mean both. > > I disagree. POST can mean these, for example, and lots of other > things too. Roy has written that GET and POST are sufficient methods > for a RESTful system (cf. the browser-based HTML web)[1]. > Sure, REST applications may just use GET, and for interactivity don't even need to use POST let alone HTTP -- a form's target may be a mailto: URI. But that doesn't mean a RESTful protocol, like HTTP, should have as few methods as possible, or not be based on interaction semantics. REST doesn't care what those methods are, but does care that protocols have methods. HTTP's method choice is (quite RESTfully) derived from the principle of generality; those standardized methods result in a generic interface because they are so derived. That GET and POST are sufficient for a RESTful system doesn't mean a system which uses POST instead of a more appropriate method isn't less RESTful, see below. > > What's wrong with a hypothetical method whose semantics are "an > idempotent partial update" (where a partial update that happens to > describe everything can be used for a full update)? We've established > that PUT (as defined in httpbis) is not that method. > Because now you'd have two methods describing the same sender intent of partial update. That some partial-update interactions may be idempotent is interesting, but it's an edge case, so I don't understand the need to optimize for it. I believe that sender intent stops at partial update, without considering idempotency, and I see no advantage in adding complexity to the protocol to take that into account -- especially not if there's already a RESTful solution to the problem...
More important is the question of why not, if idempotency is important in a partial update scenario, do what Roy and most everyone else always says by making it a subresource with its own URL and using PUT? Since this design pattern is such a standard part of REST development, I'm not seeing any problem which needs solving with a new method, any more than I saw a problem which needed solving by redefining an existing method. > > I think the crux of the argument here is more like (with apologies to > the Princess Bride): "I do not think PUT means what you think it > means." > I don't think Roy's blog means what you think it means. ;-) > > "Specific method definitions (aside from the retrieval:resource > duality of GET) simply don’t matter to the REST architectural style, > so it is difficult to have a style discussion about them. The only > thing REST requires of methods is that they be uniformly defined for > all resources (i.e., so that intermediaries don’t have to know the > resource type in order to understand the meaning of the request). As > long as the method is being used according to its own definition, > REST doesn’t have much to say about it." > I've never been guilty of the paper tiger of "don't use POST" Roy admonishes against; what I have said is don't use POST to do things we already have discrete methods for (refactor to use as many methods as we do have to choose from, i.e. solve the partial-update problem by making it a replacement of a subresource if PATCH isn't your bag). From the same post: "POST only becomes an issue when it is used in a situation for which some other method is ideally suited: e.g., retrieval of information that should be a representation of some resource (GET), complete replacement of a representation (PUT), or any of the other standardized methods that tell intermediaries something more valuable than 'this may change something.' 
The other methods are more valuable to intermediaries because they say something about how failures can be automatically handled and how intermediate caches can optimize their behavior." Roy's saying it's also an error to use POST to PATCH. It is less valuable to intermediaries to use POST when a standardized method exists which describes the semantics of the interaction, e.g. PATCH. I don't know how it is that I'm confused about PUT when I say it has replacement semantics; is Roy also confused? If it's important to intermediaries that your partial update be considered idempotent, then mint a URI for a subresource and use PUT. I should think that approach needs to be falsified before lobbying for a new method. -Eric
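[Editorial note: the subresource pattern Eric recommends can be sketched as follows. This is an invented illustration with made-up URIs and field names, not code from the thread: the mutable part gets its own URI, and a PUT to that URI is a full replacement of just that part.]

```python
# Instead of a partial PUT to /mydog, mint /mydog/status and PUT to it.
# A simple in-memory dispatch: a PUT to a subresource replaces one field;
# a PUT to a top-level resource replaces the whole state.

resources = {"/mydog": {"status": "happy", "favoriteToy": "A red muppet"}}

def put(uri, body):
    parent, _, field = uri.rpartition("/")
    if parent and parent in resources and field in resources[parent]:
        # PUT to a subresource: full (and idempotent) replacement of that
        # field's value -- which is the desired "partial" effect on the parent.
        resources[parent][field] = body
    else:
        # PUT to the resource itself: full replacement of the whole state.
        resources[uri] = body

put("/mydog/status", "sleepy")
assert resources["/mydog"] == {"status": "sleepy",
                               "favoriteToy": "A red muppet"}
```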
On Sat, Dec 17, 2011 at 4:31 PM, Eric J. Bowman <eric@...> wrote: > "Moore, Jonathan (CIM)" wrote: >> >> I just didn't know if partial updates were HTTP-compliant or not >> (until this thread). >> > > They are with PATCH, or by making them sub-resources and using PUT, I > hope is your takeaway... If creating sub-resources was a sufficient solution, why was PATCH created? > >> >> What's wrong with a hypothetical method whose semantics are "an >> idempotent partial update" (where a partial update that happens to >> describe everything can be used for a full update)? We've established >> that PUT (as defined in httpbis) is not that method. >> > > Because now you'd have two methods describing the same sender intent of > partial update. That some partial-update interactions may be idempotent > is interesting, but it's an edge case, so I don't understand the need to > optimize for it. I don't think it's as edge as you think it is. Mobile clients are becoming more and more prevalent; they operate on a relatively inefficient network where requests need to be as lean as possible (partial) and easily re-issued (idempotent). All that is required to make this possible is for PUT semantics to be more general - i.e. drop the full replace semantic, and make no assertion about the partial/fullness one way or the other. > I believe that sender intent stops at partial update, > without considering idempotency, and I see no advantage in adding > complexity to the protocol to take that into account -- especially not > if there's already a RESTful solution to the problem... Removing the constraint which specifies 'PUT requests must be full replace' is less complex, not more. > More important is the question of why not, if idempotency is important > in a partial update scenario, do what Roy and most everyone else always > says by making it a subresource with its own URL and using PUT? 
Since > this design pattern is such a standard part of REST development, I'm > not seeing any problem which needs solving with a new method, any more > than I saw a problem which needed solving by redefining an existing > method. *snip* > > If it's important to intermediaries that your partial update be > considered idempotent, then mint a URI for a subresource and use PUT. > I should think that approach needs to be falsified before lobbying for > a new method. The problem with this advice is that it's not a practical solution: the requirements for what constitutes efficient granularity can differ between clients, can change over time, and are very difficult to get right up front. Also, doing so reduces the visibility of the interaction by smearing shared state across several resources; this makes mechanisms like cache invalidation far more difficult to leverage - it's redundant and costly. Cheers, Mike
--- In rest-discuss@yahoogroups.com, Mike Kelly <mike@...> wrote:
>
> I created a question on Stack Overflow about this a while ago:
>
> http://stackoverflow.com/questions/2364110/whats-the-justification-behind-disallowing-partial-put
>
> I still don't really understand the benefit of not allowing PUT to be
> partial
If you allow that, how would a server be able to determine the nature of the update (partial or complete) and perform its job?
For example, suppose that PUT was allowed to be partial:
----------------
GET /mydog
200 OK
<dog>
<status>happy</status>
<favoriteToy>A red muppet</favoriteToy>
</dog>
----------------
Then:
----------------
PUT /mydog
<dog>
<status>sleepy</status>
</dog>
----------------
Am I doing a partial update (just changing my dog's status), or a complete one (changing his status and removing an optional "favoriteToy" information) ?
Philippe
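[Editorial note: Philippe's ambiguity can be made concrete with a small sketch, not part of the thread. Dicts stand in for the XML representations above; the point is that the same PUT body yields different end states depending on which semantics the server assumes.]

```python
# The <dog> PUT from the example, under the two rival interpretations.

dog = {"status": "happy", "favoriteToy": "A red muppet"}
body = {"status": "sleepy"}      # the PUT carrying only <status>sleepy</status>

as_replace = dict(body)          # complete update: favoriteToy is removed
as_merge = {**dog, **body}       # partial update: favoriteToy survives

assert as_replace == {"status": "sleepy"}
assert as_merge == {"status": "sleepy", "favoriteToy": "A red muppet"}
# Without a rule fixed by the method definition, the server cannot tell
# which of these two outcomes the sender intended.
```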
The other diff with PUT is the client names the resource. With POST the server assigns the uri and returns a location header telling the client where it was created. So in the 201 case the client has chosen the uri where the resource will live. Sent from my Windows Phone ------------------------------ From: Jan Algermissen Sent: 12/17/2011 9:50 AM To: Mike Kelly Cc: Julian Reschke; Moore, Jonathan (CIM); Eric J. Bowman; Sebastien Lambla; Jakob Strauch; rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Re: Rails 3.2 and PATCH On Dec 16, 2011, at 9:54 PM, Mike Kelly wrote: > What is the systemic benefit to the web of having PUT defined further > than just 'a non-safe idempotent update'? To me, HTTP defines PUT just like that. The rest of the spec for PUT is explanatory, e.g. regarding cache behavior or clarifying that PUT on an non-existing resource is a create (and hence should yield a 201 response). So again, what specifically does feel like 'baggage' to you in HTTP1.1 section 9.6? Jan
Your application can specify whether or not the request is intended as a complete update or not. If the application does specify it as complete then omission is removal, if it doesn't then the deletable sections would have to be given their own URI to receive a DELETE. I find it's much easier to establish deletable sections than updatable partials up front, as deletes are generally determined by the application needs whereas partial updates are determined by client needs (which are often unknowable when designing the application). Cheers, Mike On Sat, Dec 17, 2011 at 5:15 PM, Philippe Mougin <pmougin@...> wrote: > > > --- In rest-discuss@yahoogroups.com, Mike Kelly <mike@...> wrote: >> >> I created a question on Stack Overflow about this a while ago: >> >> http://stackoverflow.com/questions/2364110/whats-the-justification-behind-disallowing-partial-put >> >> I still don't really understand the benefit of not allowing PUT to be >> partial > > If you allow that, how would a server be able to determine the nature of the update (partial or complete) and perform its job? > > For example, suppose that PUT was allowed to be partial: > > ---------------- > GET /mydog > > 200 OK > > <dog> > <status>happy</status> > <favoriteToy>A red muppet</favoriteToy> > </dog> > ---------------- > > Then: > > ---------------- > PUT /mydog > > <dog> > <status>sleepy</status> > </dog> > ---------------- > > Am I doing a partial update (just changing my dog's status), or a complete one (changing his status and removing an optional "favoriteToy" information) ? > > Philippe > > > > > ------------------------------------ > > Yahoo! Groups Links > > >
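[Editorial note: a sketch of the design Mike describes here, with invented URIs and field names. PUT merges, so omission is not removal; removal is an explicit DELETE on a subresource URI.]

```python
# Omitted fields survive a PUT; deletable sections get their own URI.

resources = {"/mydog": {"status": "happy", "favoriteToy": "A red muppet"}}

def put(uri, body):
    # Merge semantics: fields absent from the body are left alone.
    resources[uri] = {**resources.get(uri, {}), **body}

def delete(uri):
    # DELETE on a subresource removes just that section of the parent.
    parent, _, field = uri.rpartition("/")
    resources[parent].pop(field, None)

put("/mydog", {"status": "sleepy"})    # favoriteToy is untouched
delete("/mydog/favoriteToy")           # removal is explicit, not implied
assert resources["/mydog"] == {"status": "sleepy"}
```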
On Dec 17, 2011, at 6:59 PM, Mike Kelly wrote: > Your application can specify whether or not the request is intended as > a complete update or not. What happened to the uniform interface constraint? Jan
Nothing, the uniform interface of HTTP simply becomes less constrained with regard to PUT. PUT's semantics are simplified, and the change adds a useful capability that is lacking in HTTP's interface (i.e. partial idempotent updates). Reducing the semantics of PUT to be agnostic about fullness retains visibility of the request in terms of being idempotent, updating/instating, and non-safe. It drops visibility in terms of being able to determine partial/fullness of the request - and offloads this to shared understanding between client and server via standard media types, link relations or application documentation. The key here is that the drop in visibility does not appear to disrupt any existing or proposed web infrastructure, but it does enable a specific client/server interaction which would otherwise be impossible (idempotent partial updates). It would also reduce the complexity of PUT's semantics because it would remove the need to specify any of the additional semantics in place with regard to 'full replacement'. Cheers, Mike On Sat, Dec 17, 2011 at 6:30 PM, Jan Algermissen <jan.algermissen@...> wrote: > > On Dec 17, 2011, at 6:59 PM, Mike Kelly wrote: > >> Your application can specify whether or not the request is intended as >> a complete update or not. > > What happened to the uniform interface constraint? > > Jan >
On Sat, Dec 17, 2011 at 8:39 PM, Mike Kelly <mike@...> wrote: > > Reducing the semantics of PUT to be agnostic about fullness, retains > visibility of request in terms of being idempotent, > updating/instating, and non-safe. It drops visibility in terms of > being able to determine partial/fullness of the request - and offloads > this to shared understanding between client and server via standard > media types, link relations or application documentation. > > Let's, for the sake of argument, assume that idempotent updates are OK, and that PUT has been overloaded with this capability. How would an origin server know to use *replace* or *update* semantics? Would it be determined based on the Content-Type? e.g. text/plain, application/atom+xml are "full update" media types whereas, say, a hypothetical text/idempotent-patch is not? I would find it really odd if the following were true: "If you PUT a text/plain to me, I will store it" "if you PUT a text/idempotentpatch to me, I will apply the patch" It would also fall apart if I e.g. wanted to create a resource which had only one representation, that of said text/idempotentpatch media type. What would a PUT method mean? *Replace* the representation of the text/idempotentpatch, or *apply* it? What does a PUT with a partial update of a resource that does not exist mean? Store the "partial update" document at the URI? Or return 404 (since there's nothing to apply)? -- -mogsie-
We all know that many APIs and some frameworks* labelled with the term "REST" do not support hypermedia (or other so-called RESTful) mechanisms. The term is (and likely will be) very overloaded in the IT industry. I think it is time to push forward terms like "hypermedia service/API" or "hypermedia-aware client". Instead of saying "hey, I have a RESTful API", tell the people "I have a hypermedia API!". The term "hypermedia service" stands for**: - support of addressable resources - standard-conform usage of the uniform interface - (hopefully) a cleaner interface/interaction design - etc... IMHO, "hypermedia" is still a stepchild. It should be some kind of quality characteristic. What do you think? Jakob * I'm glad the Microsoft guys label their latest framework Web API (as opposed to the former WCF "REST" Starter Kit...) ** at least more than "REST"
On Sat, Dec 17, 2011 at 9:01 PM, Erik Mogensen <erik@...> wrote: > On Sat, Dec 17, 2011 at 8:39 PM, Mike Kelly <mike@...> wrote: >> >> Reducing the semantics of PUT to be agnostic about fullness, retains >> visibility of request in terms of being idempotent, >> updating/instating, and non-safe. It drops visibility in terms of >> being able to determine partial/fullness of the request - and offloads >> this to shared understanding between client and server via standard >> media types, link relations or application documentation. >> > Let's, for the sake of argument assume that idempotent updates are OK, and > that PUT has been overloaded with this capability. > > How would an origin server know to use replace or update semantics? > > Would it be determined based on the Content-Type? e.g. text/plain, > application/atom+xml are "full update" media types whereas, say, a > hypothetical text/idempotent-patch is not? > > I would find it really odd if the following were true: > "If you PUT a text/plain to me, I will store it" > "if you PUT a text/idempotentpatch to me, I will apply the patch" > > It would also fall apart if I e.g. wanted to create a resource which had > only one representation, that of said text/idempotentpatch media type. What > would a PUT method mean? Replace the representation of the > text/idempotentpatch, or apply it? > > What does a PUT with a partial update of a resource that does not exist > mean? Store the "partial update" document at the URI? Or return 404 (since > there's nothing to apply)? Those are all questions for whoever is designing and specifying the application/API in question, given resources will behave in the way that the application that governs them specifies. Why does HTTP need to get involved at that level? Cheers, Mike
On Dec 17, 2011, at 10:17 PM, Jakob Strauch wrote: > We all know, many APIs and some frameworks* labelled with the term "REST", do not support hypermedia (or less so called RESTful) mechanisms. The term is (and likely will be) very overloaded in the IT industrie. > > I think, it is time to push forward terms like "hypermedia service/API" or "hypermedia-aware client". Instead of saying "hey i have a RESTful API", tell the people "i have a hypermedia API!". > > The term "hypermedia service" stands for** : > > - support of addressable resources > - standard-conform usage of the uniform interface > - (hopefully) a cleaner interface/interaction design > - etc... > > IMHO, "hypermedia" is still a stepchild. It should be some kind of quality characteristic. What do you think? My thinking has been and still is that part of the problem is that there are no names for non-REST but HTTP-based APIs. So we end up having to name a non-REST API using the word REST and explaining what is *not* in there. But this causes REST to be burned into the readers mind...even if we talk about non REST APIs. I tried to fix that with this http://www.nordsc.com/ext/classification_of_http_based_apis.html a while ago. Based on that I can now say that some API is 'HTTP Type I' so the beast gets a proper name :-) JAn > > Jakob > > * I´m glad, the microsoft guys label their latest framework Web API (as opposed to the former WCF "REST" Starter Kit...) > ** at least more than "REST" > >
I have seen this and found it to be quite helpful. I was wondering whether you consider XHTML a generic or specific media type for the purposes of your categorization. On Dec 18, 2011, at 1:47 AM, Jan Algermissen <jan.algermissen@...> wrote: > > My thinking has been and still is that part of the problem is that there are no names for non-REST but HTTP-based APIs. So we end up having to name a non-REST API using the word REST and explaining what is *not* in there. But this causes REST to be burned into the readers mind...even if we talk about non REST APIs. > > I tried to fix that with this http://www.nordsc.com/ext/classification_of_http_based_apis.html a while ago. Based on that I can now say that some API is 'HTTP Type I' so the beast gets a proper name :-) > > JAn > > > >> >> Jakob >> >> * I´m glad, the microsoft guys label their latest framework Web API (as opposed to the former WCF "REST" Starter Kit...) >> ** at least more than "REST" >> >> > > > > ------------------------------------ > > Yahoo! Groups Links > > >
On Dec 18, 2011, at 6:49 PM, Jason Erickson wrote: > I have seen this and found it to be quite helpful. Thanks. > I was wondering whether you consider XHTML a generic or specific media type for the purposes of your categorization. XHTML is a specific media type. (It provides all the semantics to implement such very specific products known as browsers :-) However, beware that this does not mean that adding out-of-band constraints to XHTML (e.g. presence of certain div with certain classes) is 'ok' from a REST POV. Such out-of-band agreements still violate the message self descriptiveness constraint. What I think is a nice way to leverage XHTML is to mint your own media type that specifically documents your refinements. That way, you can serve one and the same entity as application/xhtml and application/vnd.my.new.mediatype, depending on the Accept header of the client. Jan > > On Dec 18, 2011, at 1:47 AM, Jan Algermissen <jan.algermissen@...> wrote: > > > > > My thinking has been and still is that part of the problem is that there are no names for non-REST but HTTP-based APIs. So we end up having to name a non-REST API using the word REST and explaining what is *not* in there. But this causes REST to be burned into the readers mind...even if we talk about non REST APIs. > > > > I tried to fix that with this http://www.nordsc.com/ext/classification_of_http_based_apis.html a while ago. Based on that I can now say that some API is 'HTTP Type I' so the beast gets a proper name :-) > > > > JAn > > > > > > > >> > >> Jakob > >> > >> * I´m glad, the microsoft guys label their latest framework Web API (as opposed to the former WCF "REST" Starter Kit...) > >> ** at least more than "REST" > >> > >> > > > > > > > > ------------------------------------ > > > > Yahoo! Groups Links > > > > > > >
<snip> > However, beware that this does not mean that adding out-of-band constraints to XHTML (e.g. presence of certain div with certain classes) is 'ok' from a REST POV. Such out-of-band agreements still violate the message self descriptiveness constraint. </snip> I often use XHTML as a media-type "base" and express problem domain specifics using @id, @name, @class, @rel. I document this usage in a manner similar to documenting custom media types[1] and publish this data as a "profile"[2] for [X]HTML. This allows developers (client and server) to "follow the media type" and produce messages that are "self-descriptive" regarding possible options within the context of each response representation. In this way, the "information becomes the affordance"[3]. From my POV, this approach follows Fielding's suggestions on crafting hypertext APIs[4] (see bullet #3). I've written a handful of 'bots (M2M solutions) using this approach, too. [1] http://amundsen.com/hypermedia/profiles/ [2] http://gmpg.org/xmdp/ [3] http://www.w3.org/wiki/AffordanceReferences [4] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Sun, Dec 18, 2011 at 13:04, Jan Algermissen <jan.algermissen@...> wrote: > > On Dec 18, 2011, at 6:49 PM, Jason Erickson wrote: > >> I have seen this and found it to be quite helpful. > > Thanks. > >> I was wondering whether you consider XHTML a generic or specific media type for the purposes of your categorization. > > XHTML is a specific media type. (It provides all the semantics to implement such very specific products known as browsers :-) > > However, beware that this does not mean that adding out-of-band constraints to XHTML (e.g. presence of certain div with certain classes) is 'ok' from a REST POV. Such out-of-band agreements still violate the message self descriptiveness constraint. 
> > What I think is a nice way to leverage XHTML is to mint your own media type that specifically documents your refinements. That way, you can serve one and the same entity as application/xhtml and application/vnd.my.new.mediatype, depending on the Accept header of the client. > > Jan > >> >> On Dec 18, 2011, at 1:47 AM, Jan Algermissen <jan.algermissen@...> wrote: >> >> > >> > My thinking has been and still is that part of the problem is that there are no names for non-REST but HTTP-based APIs. So we end up having to name a non-REST API using the word REST and explaining what is *not* in there. But this causes REST to be burned into the readers mind...even if we talk about non REST APIs. >> > >> > I tried to fix that with this http://www.nordsc.com/ext/classification_of_http_based_apis.html a while ago. Based on that I can now say that some API is 'HTTP Type I' so the beast gets a proper name :-) >> > >> > JAn >> > >> > >> > >> >> >> >> Jakob >> >> >> >> * I´m glad, the microsoft guys label their latest framework Web API (as opposed to the former WCF "REST" Starter Kit...) >> >> ** at least more than "REST" >> >> >> >> >> > >> > >> > >> > ------------------------------------ >> > >> > Yahoo! Groups Links >> > >> > >> > >> > > > > ------------------------------------ > > Yahoo! Groups Links > > >
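[Editorial note: mca's approach of carrying application-level semantics in @class and @rel inside otherwise plain XHTML can be sketched with a small client-side parser. The markup, rel values, and URIs below are invented for illustration.]

```python
# A client "follows the media type" by collecting @rel-annotated links
# from an XHTML fragment, rather than relying on out-of-band URI rules.

from html.parser import HTMLParser

DOC = """<div class="order">
  <a rel="payment" href="/orders/42/payment">pay</a>
  <a rel="cancel" href="/orders/42/cancel">cancel</a>
</div>"""

class RelCollector(HTMLParser):
    """Map each link's @rel value to its @href."""
    def __init__(self):
        super().__init__()
        self.links = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and "rel" in a:
            self.links[a["rel"]] = a.get("href")

p = RelCollector()
p.feed(DOC)
assert p.links == {"payment": "/orders/42/payment",
                   "cancel": "/orders/42/cancel"}
```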
On Dec 15, 2011, at 2:02 AM, Mike Kelly wrote: > On Thu, Dec 15, 2011 at 9:14 AM, Jan Algermissen > <jan.algermissen@...> wrote: >> >> On Dec 15, 2011, at 1:42 AM, Mike Kelly wrote: >> >>> I created a question on Stack Overflow about this a while ago: >>> >>> http://stackoverflow.com/questions/2364110/whats-the-justification-behind-disallowing-partial-put >>> >>> I still don't really understand the benefit of not allowing PUT to be >>> partial, >> >> So you are asking, why PUT was defined as idempotent in the first place, yes? >> >> I think the reason is sort of "because we can define it that way". There is POST, which has no visibility (==POST is meaningless to an intermediary) and everything could just be done with POST. But then, adding methods that *have* visibility adds some serious capabilities to HTTP. E.g. GET's semantics allow for caching and it is also very helpful that we know that GET is safe - we can call it any number of times. >> > > .. and PUTs 'complete replace' semantics allow for.. ? The ability to write patch representations onto a server so that those patches can later be retrieved by others. ....Roy
On Sun, Dec 18, 2011 at 7:04 PM, Jan Algermissen <jan.algermissen@... > wrote: > What I think is a nice way to leverage XHTML is to mint your own media > type that specifically documents your refinements. That way, you can serve > one and the same entity as application/xhtml and > application/vnd.my.new.mediatype, depending on the Accept header of the > client. > +1 :-) http://stackoverflow.com/questions/3403654/is-it-ok-to-tag-the-same-document-using-different-content-types-based-on-accept -- -mogsie-
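Jan's one-entity-two-types approach is mechanically simple on the server side. Here is a minimal Python sketch of Accept-driven selection, deliberately simplified (no q-value handling, exact-match only); the vendor type name follows the thread, and "application/xhtml+xml" is the registered XHTML type:

```python
# Sketch of Accept-driven selection between a generic and a vendor media
# type for one and the same entity. Simplified: no q-value handling.

SUPPORTED = ["application/vnd.my.new.mediatype", "application/xhtml+xml"]

def negotiate(accept_header):
    """Return the first supported type the client accepts, most specific first."""
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    for offered in SUPPORTED:
        if offered in accepted or "*/*" in accepted:
            return offered
    return None  # caller would answer 406 Not Acceptable

print(negotiate("application/xhtml+xml"))             # application/xhtml+xml
print(negotiate("application/vnd.my.new.mediatype"))  # application/vnd.my.new.mediatype
```

A real implementation would honor q-values and wildcard ranges like `application/*`, but the shape is the same: the entity is one resource, and only the representation varies with Accept.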
On Mon, Dec 19, 2011 at 12:16 AM, Roy T. Fielding <fielding@...> wrote: > On Dec 15, 2011, at 2:02 AM, Mike Kelly wrote: > >> On Thu, Dec 15, 2011 at 9:14 AM, Jan Algermissen >> <jan.algermissen@...> wrote: >>> >>> On Dec 15, 2011, at 1:42 AM, Mike Kelly wrote: >>> >>>> I created a question on Stack Overflow about this a while ago: >>>> >>>> http://stackoverflow.com/questions/2364110/whats-the-justification-behind-disallowing-partial-put >>>> >>>> I still don't really understand the benefit of not allowing PUT to be >>>> partial, >>> >>> So you are asking, why PUT was defined as idempotent in the first place, yes? >>> >>> I think the reason is sort of "because we can define it that way". There is POST, which has no visibility (==POST is meaningless to an intermediary) and everything could just be done with POST. But then, adding methods that *have* visibility adds some serious capabilities to HTTP. E.g. GET's semantics allow for caching and it is also very helpful that we know that GET is safe - we can call it any number of times. >>> >> >> .. and PUTs 'complete replace' semantics allow for.. ? > > The ability to write patch representations onto a server so that those > patches can later be retrieved by others. > No, that's already made possible by virtue of PUT being an 'update', right? I was asking specifically about httpbis' overspecified: 'PUT is an idempotent update of state that is a complete replacement' vs the more generally applicable and succinct: 'PUT is an idempotent update of state' Both of those allow for the ability to write patch representations onto a server so that those patches can later be retrieved by others. What does the former benefit the web that the latter does not? AFAICT, the former actually prevents a set of uses for PUT (and therefore w/ HTTP as a whole) that would be possible under the latter. So it's not only less succinct, it's also more restrictive. Cheers, Mike
On Sat, Dec 17, 2011 at 10:17 PM, Jakob Strauch <jakob.strauch@...>wrote: > ** > > > We all know, many APIs and some frameworks* labelled with the term "REST", > do not support hypermedia (or less so called RESTful) mechanisms. The term > is (and likely will be) very overloaded in the IT industrie. > > I think, it is time to push forward terms like "hypermedia service/API" or > "hypermedia-aware client". Instead of saying "hey i have a RESTful API", > tell the people "i have a hypermedia API!". > +27.000 :) I thought I was the only one saying "hypermedia-aware clients". That's what I'm already doing since some months: since the REST term has been raped during the last decade I am now talking about HTTP (doh) and Hypermedia API. It's quite disappointing, but you'll notice that a few people will ask you "why hypermedia" or "why not REST", so, at least, they're gonna dig a bit about that. > > The term "hypermedia service" stands for** : > > - support of addressable resources > - standard-conform usage of the uniform interface > - (hopefully) a cleaner interface/interaction design > - etc... > > IMHO, "hypermedia" is still a stepchild. It should be some kind of quality > characteristic. What do you think? > > Jakob > > * I´m glad, the microsoft guys label their latest framework Web API (as > opposed to the former WCF "REST" Starter Kit...) > ** at least more than "REST" > > > -- Nadalin Alessandro www.odino.org www.twitter.com/_odino_
> 'PUT is an idempotent update of state' No, PUT is 'a request that an entity be stored at a location. This entity is considered the most current version.' > What does the former benefit the web that the latter does not? Dealing with whole updates has many, many fewer edge cases, for one... for a certain definition of 'simple,' it's much simpler.
Mike Kelly wrote: > > If preventing partials is a requirement for a given application, then > that application can define its own semantics specifying the > transition rules. The application in question provides the necessary > shared understanding for the client and server to understand the > sender intent. > IOW, the shared understanding is library-based instead of network-based. If those shared understandings are standardized, shouldn't they be network-based, if we're talking about REST? If sender intent isn't self-descriptive such that it's visible to the network, shouldn't that clue you in that you're talking about some other architectural style? > > Granted, this leaves out intermediaries. Is it necessary for > intermediaries to be able to distinguish between whether or not a PUT > is full or partial? > I don't understand how you can say "granted, this leaves out intermediaries" while still insisting that you're talking about REST. The answer to your question is yes, if access control is method-based. I may have a wiki that only allows authors to PUT edits to the page, allows anyone to POST a message discussing the page, and allows members to PATCH the page with tags or ratings. I can't do that in any logical or maintainable fashion if my application protocol's methods are based on idempotency rather than sender intent. Basing the protocol on sender intent doesn't have these problems, probably why Internet protocols are designed around it. -Eric
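Eric's wiki example shows why method-based access control needs methods that carry distinct sender intent. A toy Python sketch of such a policy; the roles and rules are illustrative only:

```python
# Toy sketch of the method-based access control in the wiki example:
# each method's distinct sender intent is what makes a per-method
# policy expressible at an intermediary. Role names are invented.

RULES = {
    "GET":   {"anonymous", "member", "author"},  # anyone may read
    "POST":  {"anonymous", "member", "author"},  # anyone may discuss the page
    "PATCH": {"member", "author"},               # members may tag/rate the page
    "PUT":   {"author"},                         # only authors may replace it
}

def allowed(method, role):
    return role in RULES.get(method, set())

print(allowed("PATCH", "member"))  # True
print(allowed("PUT", "member"))    # False
```

If PUT could mean either full replacement or partial update, the PUT and PATCH rows would no longer describe distinct intents, and a table like this could not enforce the policy described above.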
Mike Kelly wrote: > > afaict, that seems to fit in ok: > > GET: "apply this safe idempotent request" > DELETE: "apply this non-safe idempotent delete request" > PUT: "apply this non-safe idempotent update request" > POST: "apply this non-safe non-idempotent request" > Fits with your view, obviously. You're suggesting some other architecture, not REST, where self-descriptive messaging does not apply. You reject out-of-hand the notion that my intent when uploading a new file is replacement, and tell me my intent is 'non-safe idempotent update' which may or may not mean replacement. But I assure you, my intent is to replace, even if it's only an update. You also reject out-of-hand any sender intent to patch, in favor of nondescriptive POST. While arguing that HTTP should only have four methods... I'm not one to suggest moving with the herd, but could you at least not reject the fundamentals while trying to convince us they're not rooted in REST? Seems like... trolling. -Eric
On Dec 19, 2011, at 3:00 AM, Mike Kelly wrote: > On Mon, Dec 19, 2011 at 12:16 AM, Roy T. Fielding <fielding@...> wrote: >> On Dec 15, 2011, at 2:02 AM, Mike Kelly wrote: >>> .. and PUTs 'complete replace' semantics allow for.. ? >> >> The ability to write patch representations onto a server so that those >> patches can later be retrieved by others. >> > > No, that's already made able by virtue of PUT being an 'update', > right? No, if the update looks like a patch and PUT is allowed to perform patch semantics then the server will attempt to perform those semantics and fail to store the patch. Likewise, if the client expects a partial update to work with PUT and sends that message to an HTTP server that doesn't implement partial PUTs (i.e., all of them), then the current representation will be entirely replaced with the partial content regardless of your opinion on how PUT might be specified otherwise. > I was asking specifically about httpbis' overspecified: > > 'PUT is an idempotent update of state that is a complete replacement' > > vs the more generally applicable and succinct: > > 'PUT is an idempotent update of state' And you have been answered, many times. PUT means PUT. There are no partial updates in PUT. There was a half-assed attempt to add those semantics by committee in the midst of standardizing HTTP, but that attempt failed because PUT's existing semantics had already been deployed and we can't graft partial updates on top of existing replace semantics. Period. End of story. Hence, PATCH was defined in 1995 (and finally standardized much later because the WebDAV group was lazy). This answer is final. If anyone implements it differently in Rails, then Rails will be neither compliant with HTTP nor compliant with REST. Whether that matters to anyone developing Rails is besides the point. ....Roy
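Roy's distinction can be made concrete with a toy in-memory origin server: PUT stores the enclosed entity verbatim, so a patch document PUT to its own URI simply becomes a retrievable resource, while PATCH interprets the entity as instructions to modify the target. The dict-merge "patch format" here is an assumption for illustration only:

```python
# Toy store: PUT always replaces; PATCH applies the entity as a diff.

store = {"/doc": {"title": "Draft", "status": "open"}}

def put(uri, entity):
    store[uri] = entity                  # complete replacement, always

def patch(uri, diff):
    store[uri] = {**store[uri], **diff}  # partial update, applied server-side

put("/patches/1", {"status": "closed"})  # the patch itself is now a resource
patch("/doc", {"status": "closed"})      # the same entity, applied to /doc

print(store["/patches/1"])  # {'status': 'closed'}
print(store["/doc"])        # {'title': 'Draft', 'status': 'closed'}
```

If `put` were allowed to guess that a patch-shaped entity should be applied rather than stored, the first request above could no longer store the patch for later retrieval, which is the failure mode Roy describes.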
Hi, On Mon, Dec 19, 2011 at 12:22 PM, Alessandro Nadalin <alessandro.nadalin@...> wrote: > On Sat, Dec 17, 2011 at 10:17 PM, Jakob Strauch <jakob.strauch@...> wrote: >> We all know, many APIs and some frameworks* labelled with the term "REST", do not support hypermedia (or less so called RESTful) mechanisms. The term is (and likely will be) very overloaded in the IT industrie. >> >> I think, it is time to push forward terms like "hypermedia service/API" or "hypermedia-aware client". Instead of saying "hey i have a RESTful API", tell the people "i have a hypermedia API!". > +27.000 :) Yes, that's nice. > I thought I was the only one saying "hypermedia-aware clients". > That's what I'm already doing since some months: since the REST term has been raped during the last decade I am now talking about HTTP (doh) and Hypermedia API. I've thought about this naming issue too, and agree that a lot of the discussions regarding REST-fulness of APIs are rather ... tiring. Here's my 5c on the name game. The HT-part of HTTP is actually a bit of a misnomer. Hypertext has already been superseded by hypermedia. Maybe the time has come to talk of Hyperresources? I do think "hypermedia service/API" is nice but what about: Hyperlinked Resource Transfer Service? I do realize that HyRTS is not a good acronym. Maybe we could just call it a Web Service? Oh wait, that name has already been hijacked by a technology that's not really Web-oriented ...hmm ... I think I'll go back to coding. /Paul -- Paul Cohen www.seibostudios.se mobile: +46 730 787 035 e-mail: paul.cohen@...
Mike Kelly wrote: > > > > > They are with PATCH, or by making them sub-resources and using PUT, > > I hope is your takeaway... > > If creating sub-resources was a sufficient solution, why was PATCH > created? > For when it isn't a sufficient solution, i.e. the majority case where a patch isn't expected to be idempotent. > > > > > Because now you'd have two methods describing the same sender > > intent of partial update. That some partial-update interactions > > may be idempotent is interesting, but it's an edge case, so I don't > > understand the need to optimize for it. > > I don't think it's as edge as you think it is. > Good grief, we're talking about optimizing upstream traffic, which is always an edge case. The reason you keep being told things you don't like is that REST optimizes the hell out of GET. It just isn't a big deal to create a subresource, because then it's individually cacheable, which would seem to be a benefit if you want idempotent partial updates. > > Mobile clients are becoming more and more prevalent; they operate on a > relatively inefficient network where requests need to be as lean as > possible (partial) and easily re-issued(idempotent). > I believe Roy took that into account in his thesis; REST is designed for just such a problem domain. > > All that is required to make this possible if for PUT semantics to be > more general - i.e. drop the full replace semantic, and make no > assertion about the partial/fullness one way or the other. > Or make them subresources; at least you've given a reason (fwiw) you can't use PATCH. > > > I believe that sender intent stops at partial update, > > without considering idempotency, and I see no advantage in adding > > complexity to the protocol to take that into account -- especially > > not if there's already a RESTful solution to the problem... > > Removing the constraint which specifies 'PUT requests must be full > replace' is less complex, not more. 
> Your response only makes sense when you take me out of context. I said adding another method for this edge case would increase complexity. > > > > > If it's important to intermediaries that your partial update be > > considered idempotent, then mint a URI for a subresource and use > > PUT. I should think that approach needs to be falsified before > > lobbying for a new method. > > The problem with this advice is that it's not a practical solution: > the requirements for what constitutes efficient granularity can differ > between clients, can change over time, and are very difficult to get > right up front. > Disagree. It's a common design pattern in REST. Your argument amounts to, "it's a tradeoff". Well, yes, sometimes RESTful design decisions are, but they do have benefits. In this case, making the update idempotent, and if it's the only thing of interest to change, then it's a benefit to make it its own resource for cacheability. Still not seeing the problem, here. > > Also, doing so reduces the visibility of interaction in terms of > smearing shared state across several resources, this makes mechanisms > like cache invalidation far more difficult to leverage - it's > redundant and costly. > Only if by visibility, you mean by your definition of that term vs. Roy's. What needs to be visible is the interaction itself; if it's a subresource using PUT, that's very visible on the wire, and how it fits into the application is completely irrelevant because implementation specifics are hidden behind the uniform interface. -Eric
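The subresource pattern Eric advocates can be sketched in a few lines of Python; the URIs and field names are hypothetical:

```python
# Sketch of the "mint a subresource" pattern: rather than a partial PUT
# to the parent, the mutable part gets its own URI and receives an
# ordinary full-replace PUT. URIs and fields are invented for the sketch.

resources = {
    "/articles/42": {"body": "...", "tags": ["rest"]},
}

def put(uri, entity):
    """Full replacement - idempotent by construction."""
    resources[uri] = entity
    if uri == "/articles/42/tags":          # subresource view of the parent
        resources["/articles/42"]["tags"] = entity

put("/articles/42/tags", ["rest", "http"])
put("/articles/42/tags", ["rest", "http"])  # re-issuing changes nothing

print(resources["/articles/42"]["tags"])  # ['rest', 'http']
```

Because the whole tags list is replaced each time, the request is idempotent, and /articles/42/tags can carry its own cache metadata independently of the parent article.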
On Mon, Dec 19, 2011 at 12:32 PM, Roy T. Fielding <fielding@...> wrote: > On Dec 19, 2011, at 3:00 AM, Mike Kelly wrote: > >> On Mon, Dec 19, 2011 at 12:16 AM, Roy T. Fielding <fielding@...> wrote: >>> On Dec 15, 2011, at 2:02 AM, Mike Kelly wrote: >>>> .. and PUTs 'complete replace' semantics allow for.. ? >>> >>> The ability to write patch representations onto a server so that those >>> patches can later be retrieved by others. >>> >> >> No, that's already made able by virtue of PUT being an 'update', >> right? > > No, if the update looks like a patch and PUT is allowed to perform > patch semantics then the server will attempt to perform those semantics > and fail to store the patch. Likewise, if the client expects a partial > update to work with PUT and sends that message to an HTTP server that > doesn't implement partial PUTs (i.e., all of them) It's not all of them though, is it? 2616 is nowhere near clear about this... if it was, why would there have been a requirement for you to try and "clarify" it? > then the current > representation will be entirely replaced with the partial content > regardless of your opinion on how PUT might be specified otherwise. So your argument is based on the premise that clients go around PUTting representations to servers willy-nilly with no application semantics understood up front whatsoever? I have not seen anything in practice which supports that premise. >> I was asking specifically about httpbis' overspecified: >> >> 'PUT is an idempotent update of state that is a complete replacement' >> >> vs the more generally applicable and succinct: >> >> 'PUT is an idempotent update of state' > > And you have been answered, many times. PUT means PUT. There are no > partial updates in PUT. Sorry, that's not much of an answer. 
> There was a half-assed attempt to add those > semantics by committee in the midst of standardizing HTTP, but that > attempt failed because PUT's existing semantics had already been deployed > and we can't graft partial updates on top of existing replace semantics. > Period. End of story. Again: you must be aware that 2616 is not clear enough on this front, otherwise why did you work on clarifying it? Furthermore, there is no evidence - as in zero - of intermediary mechanisms/plumbing that relies on PUT having replace semantics. I have asked for an example several times in this thread. So there is absolutely no evidence that changing this semantic would *actually* have a detrimental effect on any web infrastructure. Specific applications and their use of PUT would not be negatively affected by generalising PUT's definition; the only requirement would be that those applications take on the responsibility of specifying their additional replace-only semantics for PUT in their context. Most applications do this anyway, for example: http://wiki.basho.com/Keys-and-Objects.html "When performing any fetch or update operation in Riak, the entire Riak Object must be retrieved or modified; there are no partial fetches or updates." > Hence, PATCH was defined in 1995 (and finally > standardized much later because the WebDAV group was lazy). > > This answer is final. If anyone implements it differently in Rails, > then Rails will be neither compliant with HTTP nor compliant with REST. > Whether that matters to anyone developing Rails is besides the point. Well, as it stands I don't think 2616 is anywhere near clear enough for that to be an objective truth. Granted, this will become a truth if HTTPbis pushes out its current over-specified interpretation of PUT - I would contend that this is an unnecessary change for the worse, but given your advantageous position, if you want to insist "that's just the way it's going to be", there's not a lot I can do about it, is there? :) Cheers, Mike
On Dec 19, 2011, at 2:08 PM, Mike Kelly wrote: > It's not all of them though, is it? 2616 is nowhere near clear about > this.. if it was, why would there have been a requirement you to try > and "clarify" it? Ermm... , don't you think that Roy is sort of an authoritative source regarding the question of what the authors of 2616 had in mind when they wrote it? Jan
On 2011-12-19 13:32, Roy T. Fielding wrote: > ... > Period. End of story. Hence, PATCH was defined in 1995 (and finally > standardized much later because the WebDAV group was lazy). > ... What exactly does this have to do with WebDAV? Do you think WebDAV should have defined PATCH? Best regards, Julian
On Mon, Dec 19, 2011 at 2:05 PM, Jan Algermissen <jan.algermissen@...> wrote: > > On Dec 19, 2011, at 2:08 PM, Mike Kelly wrote: > >> It's not all of them though, is it? 2616 is nowhere near clear about >> this.. if it was, why would there have been a requirement you to try >> and "clarify" it? > > Ermm... , don't you think that Roy is sort of an authoritative source regarding the question of what the authors of 2616 had in mind when they wrote it? > Probably. What does that have to do with whether the resulting specification was actually clear or not?
I am at the beginning of a project where we need a web/http-based API for reading and creating complex objects (more on that later). Right now I need to decide whether to do this with a SOAP service or a REST API. SOAP has the benefits of being well understood and having lots of tooling, thereby making it easy to get up and running. REST has some more long-term benefits but requires a bit more work up front. I would rather do it with REST, but doing REST right puts some burden on the client that I struggle with. It is extremely easy to create a SOAP service in .NET using WCF and create a proxy client that works with it. Creating a REST client requires a bit more work if it must know how to follow links via link relations, read (x)forms definitions on the fly, and so on. The API works with case files in a case management system (*). A case file contains dossiers, dossiers contain documents, and dossiers can be associated with various persons and/or organizations (parties) - like for instance a responsible party. So I need to read case files, dossiers, binary documents and parties. I also need to be able to create those entities and create the hierarchical relationship between them. I expect to have web resources for the entities: case file, dossier, document, party ... and, no, I am not going to expose my internal entity types as web objects; there will be some kind of transformation to a public-facing representation. My first design question is: what content type should I use for representing these entities? By using HTML I can use <form> and <a> as hypermedia controls - but then there is no well-known machine-readable representation of the entities except RDFa, which I have found rather difficult to use due to lack of tooling in .NET. I could also use some proprietary XML variation. That would solve the entity representation but leave me without well-known hypermedia controls. That could, however, be fixed by allowing elements like Atom links and such in the XML. 
Initial scenario: the API must support creation of a case file, adding a dossier, and associating various parties with it. That can be a two-step operation - 1) post case file data, 2) post dossier data with reference to (1) and included party references. The result would probably be a "201 Created" with a link to the new case file resource. Second scenario: add additional binary documents to the dossier. That should be easy with the dossier representation having a link to its document collection such that the client can post new documents to the collection. It should be possible to represent a document upload using only standard HTTP headers, posted content type, and binary data in the body. My biggest concern is what content types to use in order to make it as easy to use from .NET as possible while still being a "real" REST API? What I would like to be able to do, is to write code like this when creating for instance a new case file: // Assume "CaseFile" corresponds to the public facing content type CaseFile f = new CaseFile(); f.Title = "A new case file"; f.OtherProperties = ...; Uri createCaseFileUri = ... a way to fetch the URI - how? ... Uri newCaseFileUri = createCaseFileUri.SerializeAndPostSomeData(f); Where would you guys start with such a project? What tools would you use for a C# .NET client (the server is built on Open Rasta)? Thanks, Jørn (*) I have mentioned this some time ago on this mailing list, but the project has been sleeping for some time, so now time has come to re-think bits of it.
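On "a way to fetch the URI - how?": in a hypermedia API the client typically discovers the URI at run time by following a link relation from a well-known entry point, rather than hard-coding it. A hypothetical Python sketch (the rel name "create-casefile" and the document shape are invented for illustration; the C# version would have the same structure):

```python
# Hypothetical entry-point representation; in practice this would be
# the parsed body of a GET on the service root.
entry_point = {"links": [{"rel": "create-casefile", "href": "/casefiles"}]}

def find_link(representation, rel):
    """Return the href of the first link carrying the given relation."""
    for link in representation.get("links", []):
        if link["rel"] == rel:
            return link["href"]
    raise LookupError(f"no link with rel={rel!r}")

create_uri = find_link(entry_point, "create-casefile")
print(create_uri)  # /casefiles
```

The client would then POST the serialized CaseFile to that URI and read the Location header of the 201 Created response to obtain the new case file's URI - only the entry point and the link relations need to be known up front.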
On Dec 19, 2011, at 6:05 AM, Julian Reschke wrote: > On 2011-12-19 13:32, Roy T. Fielding wrote: >> ... >> Period. End of story. Hence, PATCH was defined in 1995 (and finally >> standardized much later because the WebDAV group was lazy). >> ... > > What exactly does this have to do with WebDAV? Do you think WebDAV should have defined PATCH? Yes, it was part of the original authoring task when the DAV WG was initiated. That's why I removed PATCH from RFC2068 (because it was supposedly going to be developed further by DAV and we did not want a conflicting definition). ....Roy
One option would be to use XML variant of Hal. There is a .net based parser available here http://hal.codeplex.com. Hal is a generic media type but you can convey specific semantics using extended link relations. It is easy to specify hierarchies of resources and has a standard way of representing links. I also built a path syntax for querying into the parser but that is not yet part of the spec. Darrel On Mon, Dec 19, 2011 at 3:08 PM, Jørn Wildt <jw@...> wrote: > ** > > > I am at the beginning of a project where we need a web/http-based API for > reading and creating complex objects (more on that later). Right now I > need > to decide wether to do this with a SOAP service or a REST API. > > SOAP has the benefits of being well understood and having lots of tooling > thereby making it easy to get up and running. REST has some more long term > benefits but requires a bit more work up front. > > I would rather do it with REST but doing REST right puts some burden on > the > client that I struggle with. It is extremely easy to create a SOAP service > in .NET using WCF and create a proxy client that works with it. Creating a > REST client requires a bit more work if it must know how to follow links > via > link-relations, read (x)forms definitions on the fly and so on. > > The API works with case files in a case management system (*). A case file > contains dossiers, dossiers contain documents, and dossiers can be > associated with various persons and/or organizations (parties) - like for > instance a responsible party. > > So I need to read case files, dossiers, binary documents and parties. I > also > need to be able to create those entities and create the hierarchical > relationship between them. I expect to have web ressources for the > entities: > case file, dossier, document, party ... and, no, I am not going to expose > my > internal entity types as web objects, there will be some kind of > transformation to a public facing representation. 
> > My first design question is; what content type should I use for > representing > these entities? By using HTML I can use <form> and <a> as hypermedia > controls - but then there is no well known machine readable representation > of the entities except RDFa which I have found rather difficult to use due > to lack of tooling in .NET. > > I could also use some proprietary XML variation. That would solve the > entity > representation but leave me without well known hypermedia controls. That > could although be fixed by allowing elements like ATOM links and such in > the > XML. > > Initial scenario: the API must support creation of a case file, adding a > dossier, and associate various parties to it. That can be a two step > operation - 1) post case file data, 2) post dossier data with reference to > (1) and included party references. The result would probably be a "403 > created" with a link to the new case file resource. > > Second scenario: add additional binary documents to the dossier. That > should > be easy with the dossier representation having a link to its document > collection such that the client can post new documents to the collection. > It > should be possible to represent a document upload using only standard HTTP > headers, posted content type, and binary data in the body. > > My biggest concern is what content types to use in order to make it as > easy > to use from .NET as possible while still being a "real" REST API? > > What I would like to be able to do, is to write code like this when > creating > for instance a new case file: > > // Assume "CaseFile" corresponds to the public facing content type > CaseFile f = new CaseFile(); > f.Title = "A new case file"; > f.OtherProperties = ...; > > Uri createCaseFileUri = ... a way to fetch the URI - how? ... > Uri newCaseFileUri = createCaseFileUri.SerializeAndPostSomeData(f); > > Where would you guys start with such a project? 
What tools would you use > for > a C# .NET client (the server is built on Open Rasta)? > > Thanks, Jørn > > (*) I have mentioned this some time ago on this mailing list, but the > project has been sleeping for some time, so now time has come to re-think > bits of it. > > >
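To make Darrel's HAL suggestion concrete, here is a rough sketch of how the dossier from the scenarios above might look in hal+xml. This is based on my reading of the HAL draft at the time; treat the element names, the hrefs, and the link relations ("document", "party") as illustrative rather than normative:

```xml
<resource rel="self" href="/dossiers/1">
  <link rel="document" href="/documents/doc1" title="Doc1"/>
  <link rel="document" href="/documents/doc2" title="Doc2"/>
  <link rel="party" href="/parties/12" title="Party12"/>
</resource>
```

The point is that a HAL client selects links by their relation rather than by position or container element, so application semantics live entirely in the (extended) link relations.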
Did you try the WCF Web API [1]? It is still a preview release, but it already works very nicely. Btw, I'm also using hal+xml, but I worked on my own serializer (and formatter for the Web API) with a much more convenient way to work with HAL. I will release a first version soon... Cheers, Jakob [1] http://wcf.codeplex.com/wikipage?title=WCF%20HTTP --- In rest-discuss@yahoogroups.com, Darrel Miller <darrel.miller@...> wrote: > > One option would be to use XML variant of Hal. There is a .net based > parser available here http://hal.codeplex.com. Hal is a generic media > type but you can convey specific semantics using extended link relations. > It is easy to specify hierarchies of resources and has a standard way of > representing links. I also built a path syntax for querying into the > parser but that is not yet part of the spec. > > Darrel > > > On Mon, Dec 19, 2011 at 3:08 PM, Jørn Wildt <jw@...> wrote: > > > ** > > > > > > I am at the beginning of a project where we need a web/http-based API for > > reading and creating complex objects (more on that later). Right now I > > need > > to decide wether to do this with a SOAP service or a REST API. > > > > SOAP has the benefits of being well understood and having lots of tooling > > thereby making it easy to get up and running. REST has some more long term > > benefits but requires a bit more work up front. > > > > I would rather do it with REST but doing REST right puts some burden on > > the > > client that I struggle with. It is extremely easy to create a SOAP service > > in .NET using WCF and create a proxy client that works with it. Creating a > > REST client requires a bit more work if it must know how to follow links > > via > > link-relations, read (x)forms definitions on the fly and so on. > > > > The API works with case files in a case management system (*). 
A case file > > contains dossiers, dossiers contain documents, and dossiers can be > > associated with various persons and/or organizations (parties) - like for > > instance a responsible party. > > > > So I need to read case files, dossiers, binary documents and parties. I > > also > > need to be able to create those entities and create the hierarchical > > relationship between them. I expect to have web ressources for the > > entities: > > case file, dossier, document, party ... and, no, I am not going to expose > > my > > internal entity types as web objects, there will be some kind of > > transformation to a public facing representation. > > > > My first design question is; what content type should I use for > > representing > > these entities? By using HTML I can use <form> and <a> as hypermedia > > controls - but then there is no well known machine readable representation > > of the entities except RDFa which I have found rather difficult to use due > > to lack of tooling in .NET. > > > > I could also use some proprietary XML variation. That would solve the > > entity > > representation but leave me without well known hypermedia controls. That > > could although be fixed by allowing elements like ATOM links and such in > > the > > XML. > > > > Initial scenario: the API must support creation of a case file, adding a > > dossier, and associate various parties to it. That can be a two step > > operation - 1) post case file data, 2) post dossier data with reference to > > (1) and included party references. The result would probably be a "403 > > created" with a link to the new case file resource. > > > > Second scenario: add additional binary documents to the dossier. That > > should > > be easy with the dossier representation having a link to its document > > collection such that the client can post new documents to the collection. 
> > It > > should be possible to represent a document upload using only standard HTTP > > headers, posted content type, and binary data in the body. > > > > My biggest concern is what content types to use in order to make it as > > easy > > to use from .NET as possible while still being a "real" REST API? > > > > What I would like to be able to do, is to write code like this when > > creating > > for instance a new case file: > > > > // Assume "CaseFile" corresponds to the public facing content type > > CaseFile f = new CaseFile(); > > f.Title = "A new case file"; > > f.OtherProperties = ...; > > > > Uri createCaseFileUri = ... a way to fetch the URI - how? ... > > Uri newCaseFileUri = createCaseFileUri.SerializeAndPostSomeData(f); > > > > Where would you guys start with such a project? What tools would you use > > for > > a C# .NET client (the server is built on Open Rasta)? > > > > Thanks, Jørn > > > > (*) I have mentioned this some time ago on this mailing list, but the > > project has been sleeping for some time, so now time has come to re-think > > bits of it. > > > > > > >
Thanks for the suggestions. I am investigating the various tools (including RestFulie for .NET). /Jørn
Let's assume I have a proprietary XML format for a "dossier". Now I want to add links to documents and involved parties.
I can do:

<Dossier>
  <Link rel="document" title="Doc1" href="..."/>
  <Link rel="party" title="Party12" href="..."/>
  <Link rel="document" title="Doc2" href="..."/>
  <Link rel="party" title="Party2" href="..."/>
</Dossier>

or:

<Dossier>
  <Links>
    <Link rel="document" title="Doc1" href="..."/>
    <Link rel="party" title="Party12" href="..."/>
    <Link rel="document" title="Doc2" href="..."/>
    <Link rel="party" title="Party2" href="..."/>
  </Links>
</Dossier>

or I can do:

<Dossier>
  <Documents>
    <Link rel="document" title="Doc1" href="..."/>
    <Link rel="document" title="Doc2" href="..."/>
  </Documents>
  <Parties>
    <Link rel="party" title="Party12" href="..."/>
    <Link rel="party" title="Party2" href="..."/>
  </Parties>
</Dossier>
Which do you consider "best practice"?
Thanks, Jørn
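For what it's worth, a client that selects links by @rel is indifferent to which of the three layouts above is chosen. A small Python sketch (hrefs invented for illustration) showing that all three yield the same view to a rel-driven client:

```python
import xml.etree.ElementTree as ET

# The three candidate layouts from the message above, with invented hrefs.
flat = """<Dossier>
  <Link rel="document" title="Doc1" href="/docs/1"/>
  <Link rel="party" title="Party12" href="/parties/12"/>
</Dossier>"""

wrapped = """<Dossier>
  <Links>
    <Link rel="document" title="Doc1" href="/docs/1"/>
    <Link rel="party" title="Party12" href="/parties/12"/>
  </Links>
</Dossier>"""

grouped = """<Dossier>
  <Documents><Link rel="document" title="Doc1" href="/docs/1"/></Documents>
  <Parties><Link rel="party" title="Party12" href="/parties/12"/></Parties>
</Dossier>"""

def links_by_rel(xml_text):
    """Collect hrefs keyed by @rel, regardless of nesting depth."""
    index = {}
    for link in ET.fromstring(xml_text).iter("Link"):
        index.setdefault(link.get("rel"), []).append(link.get("href"))
    return index

# All three layouts look identical to a rel-driven client:
assert links_by_rel(flat) == links_by_rel(wrapped) == links_by_rel(grouped)
```

The grouping question then becomes one of readability and of whether the wrapper elements carry meaning of their own (e.g. a `<Documents>` collection that is itself a postable resource), not one the link-consuming client has to care about.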
Sebastien Lambla and I were discussing this at QCon London. The advantage of hypermedia service/API as a term is that it is not something the pop-culture REST crowd cares about. The downside of attaching everything to that term is that hypermedia is just one of several constraints, and just because you have a hypermedia-based API, it does not mean you are RESTful, though the reverse is true. I do see value in some new differentiated term based on the pop-culture usage / misconceptions around REST. It would be so much easier if HTTP API or web API were the term used for those services, but REST is too sexy. Sent from my Windows Phone ------------------------------ From: Paul Cohen Sent: 12/19/2011 4:36 AM To: Alessandro Nadalin Cc: Jakob Strauch; rest-discuss@yahoogroups.com Subject: Re: [rest-discuss] Less REST, more Hypermedia! Hi, On Mon, Dec 19, 2011 at 12:22 PM, Alessandro Nadalin <alessandro.nadalin@...> wrote: > On Sat, Dec 17, 2011 at 10:17 PM, Jakob Strauch <jakob.strauch@...> wrote: >> We all know that many APIs and some frameworks* labelled with the term "REST" do not support hypermedia (or so-called RESTful) mechanisms. The term is (and likely will remain) very overloaded in the IT industry. >> >> I think it is time to push forward terms like "hypermedia service/API" or "hypermedia-aware client". Instead of saying "hey, I have a RESTful API", tell people "I have a hypermedia API!". > +27.000 :) Yes, that's nice. > I thought I was the only one saying "hypermedia-aware clients". > That's what I'm already doing since some months: since the REST term has been abused during the last decade, I am now talking about HTTP (doh) and Hypermedia APIs. I've thought about this naming issue too, and agree that a lot of the discussions regarding the REST-fulness of APIs are rather ... tiring. Here's my 5c on the name game. The HT part of HTTP is actually a bit of a misnomer. Hypertext has already been superseded by hypermedia.
Maybe the time has come to talk of hyperresources? I do think "hypermedia service/API" is nice, but what about: Hyperlinked Resource Transfer Service? I do realize that HyRTS is not a good acronym. Maybe we could just call it a Web Service? Oh wait, that name has already been hijacked by a technology that's not really Web-oriented ... hmm ... I think I'll go back to coding. /Paul -- Paul Cohen www.seibostudios.se mobile: +46 730 787 035 e-mail: paul.cohen@...
Hi Jakob, Funny you should mention it: this was a punchline at the end of my RESTFest keynote in August. You can jump to 3:30 of this video of it for this exact discussion: http://vimeo.com/27942428 I used the phrase "REST has jumped the shark" to describe the fact that the common understanding of REST is "use HTTP as your application protocol" and that's it. No amount of "you're doing REST wrong" or "you're doing REST-minus-minus" will actually make headway against this now. On the other hand, "hypermedia APIs" are something new to most people--it's actually easier to get people to look at something new they're not doing than to try to tell them they're not doing all of what they thought they were doing. I think this is a better approach to get people to look at "the rest of REST" they're missing. Jon ........ Jon Moore Comcast Interactive Media From: Jakob Strauch <jakob.strauch@...> Date: Sat, 17 Dec 2011 21:17:42 +0000 To: <rest-discuss@yahoogroups.com> Subject: [rest-discuss] Less REST, more Hypermedia! We all know that many APIs and some frameworks* labelled with the term "REST" do not support hypermedia (or so-called RESTful) mechanisms. The term is (and likely will remain) very overloaded in the IT industry. I think it is time to push forward terms like "hypermedia service/API" or "hypermedia-aware client". Instead of saying "hey, I have a RESTful API", tell people "I have a hypermedia API!". The term "hypermedia service" stands for**: - support of addressable resources - standards-conforming usage of the uniform interface - (hopefully) a cleaner interface/interaction design - etc... IMHO, "hypermedia" is still a stepchild. It should be some kind of quality characteristic. What do you think? Jakob * I'm glad the Microsoft guys label their latest framework Web API (as opposed to the former WCF "REST" Starter Kit...) ** at least more than "REST"
Jakob, is there something in OpenRasta you believe would be better solved using WebAPI? Sebastien ________________________________________ From: rest-discuss@yahoogroups.com [rest-discuss@yahoogroups.com] on behalf of Jakob Strauch [jakob.strauch@...] Sent: 20 December 2011 07:19 To: rest-discuss@yahoogroups.com Subject: [rest-discuss] Re: Best approach for a complex API? Did you try the WCF Web API [1]? It is still a preview release, but it already works very nicely. Btw, I'm also using hal+xml, but I worked on my own serializer (and formatter for the Web API) with a much more convenient way to work with HAL. I will release a first version soon... Cheers, Jakob [1] http://wcf.codeplex.com/wikipage?title=WCF%20HTTP --- In rest-discuss@yahoogroups.com, Darrel Miller <darrel.miller@...> wrote: > > One option would be to use the XML variant of HAL. There is a .NET-based parser available here: http://hal.codeplex.com. HAL is a generic media type, but you can convey specific semantics using extended link relations. It is easy to specify hierarchies of resources, and it has a standard way of representing links. I also built a path syntax for querying into the parser, but that is not yet part of the spec. > > Darrel > > On Mon, Dec 19, 2011 at 3:08 PM, Jørn Wildt <jw@...> wrote: > > [snip - original message quoted earlier in the thread] ------------------------------------ Yahoo! Groups Links
AFAIK, OpenRasta is more stable (remember, WCF Web API is still a developer preview). While I didn't spend much time with OpenRasta, I can say that the Web API framework fits in nicely with previous "frameworks" I've worked with. It covers basic HTTP scenarios and also RESTful design. More important: it is community-driven, like OpenRasta. Maybe I could tell you more if I had played a little more with OpenRasta. I've seen some pretty nice stuff in your talk on InfoQ. But I'm not the right guy to compare both frameworks... Jakob --- In rest-discuss@yahoogroups.com, Sebastien Lambla <seb@...> wrote: > > Jakob, > > is there something in OpenRasta you believe would be better solved using WebAPI? > > Sebastien > > [snip - remainder quoted earlier in the thread]
Just to throw another candidate approach into the mix, there's restfulobjects.org. Like HAL, this serves up generic media types, though I think they are rather more fine-grained. It can also deal with pretty much *any* domain model (certainly the one that you sketch out). RO is currently being implemented on ASP.NET MVC (over the Web API), and will be open-sourced in the new year. Dan
I've looked briefly over the spec. At first sight, it looks a little bit like OData [1] or something... Btw, the overview picture in the spec doesn't help much; it's a little bit overloaded and colorful... Do you know Moody's paper "The Physics of Notations" [2]? Jakob [1] http://www.odata.org/ [2] http://dl.acm.org/citation.cfm?id=1810442 --- In rest-discuss@yahoogroups.com, Dan Haywood <dan@...> wrote: > > [snip - quoted above]
I really disagree with this approach. REST is not about domain models; it is about resources and HTTP. This is leaking your implementation over HTTP and creating unnecessary coupling that REST is there to prevent. Sent from my Windows Phone ------------------------------ From: Dan Haywood Sent: 12/20/2011 2:50 PM To: rest-discuss@yahoogroups.com Subject: [rest-discuss] Re: Best approach for a complex API? > [snip - quoted above]
+1 fwiw, this is not an objective of HAL at all Cheers, Mike On Wed, Dec 21, 2011 at 9:34 AM, Glenn Block <glenn.block@...> wrote: > > [snip - quoted above]
Hi, one may wonder why so many developers don't "get" REST and do countless RPC-over-HTTP APIs, and what could be done to advance REST. I have one idea. First, my assumption: people don't get REST because one of the main ideas is that you need common, standard media types for REST. But people don't know about media types. I've done web sites for 6 years, but only now, while doing research for my bachelor thesis, did I discover how many great media types are out there that could be reused. And there is no advertisement for good media types. Where can I go to browse / search / filter a list of media types and see whether there is already something registered for my need? Regards, Thomas Koch, http://www.koch.ro
You can use IANA's website [1], but it isn't very "handy". I think there should be a more modern website (or service) where you can search, view by category, and so on... [1] http://www.iana.org/assignments/media-types/index.html --- In rest-discuss@yahoogroups.com, Thomas Koch <thomas@...> wrote: > > [snip - quoted above]
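Until such a service exists, filtering the registry data is at least easy to script. A Python sketch over a small local excerpt; the rows below are shaped like IANA's registry CSV (Name, Template columns), and the types listed are real registered types:

```python
import csv, io

# A few rows in the shape of IANA's media-type registry CSV export.
registry_csv = """Name,Template
atom+xml,application/atom+xml
atomsvc+xml,application/atomsvc+xml
opensearchdescription+xml,application/opensearchdescription+xml
xhtml+xml,application/xhtml+xml
"""

def search(csv_text, needle):
    """Case-insensitive substring search over registry templates."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [r["Template"] for r in rows
            if needle.lower() in r["Template"].lower()]

assert search(registry_csv, "atom") == ["application/atom+xml",
                                        "application/atomsvc+xml"]
```

The same function run against the full downloaded registry file would give the browse/filter experience Thomas is asking for, minus the categorization.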
I think one thing that presents a challenge in understanding Fielding's model is that his approach is very data-centric [1]. The importance of media types (data!) is evidence of this vital aspect of implementing distributed systems. Many developers I encounter continue to focus primarily on the components of the system rather than the data passed between them. It is understandable, since most education, software tooling, and valuation within the developer community continues to place components above the data these components process. [1] http://www.ics.uci.edu/~fielding/pubs/dissertation/software_arch.htm#sec_1_2_3 mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Wed, Dec 21, 2011 at 08:13, Thomas Koch <thomas@...> wrote: > [snip - quoted above]
On Dec 21, 2011, at 2:13 PM, Thomas Koch wrote: > People don't get REST, because one of the main ideas is that you need common standard media types for REST. But people don't know about media types. Yeah - you nailed it. The whole idea that media types are **the only** contract (besides URIs, HTTP, ...) in RESTful systems has completely, absolutely not made it into the current REST-hype bandwagon. Neither has the fact that designing media types is consequently *the* design activity to engage in when doing RESTful systems design. > I've done web sites for 6 years but only now, while doing research for my bachelor thesis, I discovered how many great media types are there and could be reused. Yes, I know that feeling. HTTP & friends are a bit like SQL: you *think* pretty soon that you know what you are doing, only to discover years down the line what a complete fool you have been (at least that was my experience a decade ago). > And there is no advertisement for good media types. Where can I go and browse / search / filter a list of media types and look whether there is already something registered for my need? The best ones to look at are the ones around OpenSearch and AtomPub, because they give you an idea how to incorporate machine-to-machine interaction aspects into media types. The HTML spec is also well worth a look, since there are extremely interesting details around image maps (you know, where the browser is told to append x and y params) and forms processing. NewsML 2 (I think media type standardization has just begun) has some aspects regarding control of consumer systems (e.g. a news provider can tell a news consumer to withdraw a news item - you could leverage such stuff between systems such as inventory and online shop, for example). HTH, Jan > Regards, > Thomas Koch, http://www.koch.ro
On Wed, Dec 21, 2011 at 6:11 PM, Jan Algermissen <jan.algermissen@...> wrote: > On Dec 21, 2011, at 2:13 PM, Thomas Koch wrote: >> [snip] > Yeah - you nailed it. The whole idea that media types are **the only** contract (besides URIs, HTTP, ...) in RESTful systems has completely, absolutely not made it into the current REST-hype bandwagon. Neither has the fact that designing media types is consequently *the* design activity to engage in when doing RESTful systems design. Media types are not the only 'contract'. You're forgetting link relations. Another, arguably better, alternative to simply minting a bespoke media type for every application is to stick to a generic media type which provides the key hypermedia properties you need (usually outward links and embedded resources) and define your application in terms of link relations. This is effectively how HTML apps work, just with text/images wrapped in anchor tags instead of rels. Cheers, Mike
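Mike's suggestion, a generic media type plus application-defined link relations, is in effect how a rel-driven HTML client already works. A minimal Python sketch; the rel name and URI are hypothetical:

```python
from html.parser import HTMLParser

# Generic media type (HTML) carrying an application-specific link
# relation: the client knows rel names, never URI structure.
page = ('<html><body>'
        '<a rel="next-dossier" href="/dossiers/43">next dossier</a>'
        '</body></html>')

class RelIndex(HTMLParser):
    """Index anchor hrefs by their @rel attribute."""
    def __init__(self):
        super().__init__()
        self.rels = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and "rel" in a:
            self.rels[a["rel"]] = a.get("href")

parser = RelIndex()
parser.feed(page)
assert parser.rels["next-dossier"] == "/dossiers/43"
```

The server is free to change `/dossiers/43` to anything it likes; only the `next-dossier` relation is part of the contract.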
On Wed, Dec 21, 2011 at 3:54 PM, mike amundsen <mamund@...> wrote: > I think one thing that presents a challenge in understanding > Fielding's model is that his approach is very data-centric[1]. [snip] I agree. It also echoes one of the rules stated in the basics of the Unix philosophy as formulated by Eric S. Raymond: Rule of Representation: Fold knowledge into data, so program logic can be stupid and robust (http://www.faqs.org/docs/artu/ch01s06.html#id2878263). I also think that media types are very central to REST. But one problem today is that, apart from XHTML and Atom, there are few *general* and data-oriented media types with good hyperlink support. There are some interesting efforts that have been mentioned on this mailing list, but it seems many are looking for a JSON-based or JSON-style alternative to XHTML and Atom. Today a colleague of mine and I discussed an approach where JSON could be syntactically extended with a new value type, "hyperlink", with support for the basic hyperlink attributes (href, type, method and rel). At the same time, the last thing we need is an overproliferation of overspecified and custom media types. This would introduce tighter coupling and could lead to evolvability problems, which is precisely one of the problems REST is trying to address and solve. Overspecified and custom media types mean going in the direction of typed interfaces, which I think is one of *the* major problems with SOAP. The web is not recompiled every morning - yet it works! /Paul -- Paul Cohen www.seibostudios.se mobile: +46 730 787 035 e-mail: paul.cohen@...
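Paul's "hyperlink value" idea could look something like this in practice, without any syntactic extension at all: a plain JSON convention carrying the four attributes he names (href, type, method, rel). A sketch; this is a hypothetical convention, not a registered media type, and all names are invented:

```python
import json

# A dossier representation whose links carry href, type, method and rel.
dossier = {
    "title": "Dossier 42",
    "links": [
        {"rel": "document", "href": "/dossiers/42/documents",
         "type": "application/xml", "method": "GET"},
        {"rel": "add-document", "href": "/dossiers/42/documents",
         "type": "application/octet-stream", "method": "POST"},
    ],
}

def find_link(representation, rel):
    """Return the first link with the given relation, or None."""
    return next((link for link in representation.get("links", [])
                 if link["rel"] == rel), None)

# Round-trips through plain JSON; no parser changes needed.
doc = json.loads(json.dumps(dossier))
assert find_link(doc, "add-document")["method"] == "POST"
```

The trade-off Paul raises still applies: the link *structure* is generic, but the rel vocabulary is where the application-specific contract lives.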
<snip> > Today a colleague of mine and I discussed an approach where JSON could > be syntactically extended with a new value "hyperlink" with support > for the basic hyperlink attributes (href, type, method and rel). </snip> for possible ideas on this, check out my Collection+JSON design [1]. Also see the JSON version of HAL [2]. <snip> > At the same time, the last thing we need is an overproliferation of > overspecified and custom media types. This would introduce tighter > coupling and could lead to evolvability problems, which is precisely > one of the problems REST is trying to address and solve. Overspecified > and custom media types mean going in the direction of typed > interfaces, which I think is one of *the* major problems with SOAP. > The web is not recompiled every morning - yet it works! </snip> the danger of "overproliferation and/or overspecification" of media type designs has been with us for quite some time; each time a dev spits out custom XML or JSON serializations of internal objects, a "new media type" is born (and I think someone kicks a cat, too). I doubt the danger can get any "greater" than it already is today. However, by engaging in conscious design of a media type - including the work of documenting and registering that design - the "danger" is appreciably diminished. As a result, more and better designs can emerge and circulate. I think that will do quite a bit toward promoting the quality and use of media types in HTTP implementations. [1] http://www.amundsen.com/media-types/collection/ [2] http://stateless.co/hal_specification.html mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Wed, Dec 21, 2011 at 14:21, Paul Cohen <paco@...> wrote: > [snip - quoted above]
mike amundsen wrote: > > the danger of "overproliferation and/or overspecification" of media > type designs has been with us for quite some time, each time a dev > spits out custom XML or JSON serializations of internal objects, a > "new media type" is born (and i think someone kicks a cat, too). > Yeah, that would be me... -Eric
Likely more an HTTP question, but: If a client sends an entity to be replaced (PUT), may the server interfere with the entity and e.g. add hypermedia links or set/complete additional content? It could answer the request with the modified version... I cannot find an answer to that issue in the specs, except: "HTTP/1.1 does not define how a PUT method affects the state of an origin server. "
hello jakob. On Dec 21, 2011, at 12:29, "Jakob Strauch" <jakob.strauch@web.de> wrote: > If a client sends an entity to be replaced (PUT), may the server interfere with the entity and e.g. add hypermedia links or set/complete additional content? It could answer the request with the modified version... absolutely! any non-trivial server will do that for many server-controlled things such as modification date or other kinds of book-keeping. cheers, dret.
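A rough sketch of what such a server might do on PUT, per dret's answer: store the client's representation enriched with server-controlled fields and return the modified version. The field names ("modified", "links") and the storage shape are assumptions for the example, not anything from a spec.

```python
import datetime

# Hedged sketch of a PUT handler that "interferes" with the entity:
# server-controlled bookkeeping is added before storing, and the
# enriched entity is returned so the client sees the final state.
def apply_put(stored, incoming, self_uri):
    entity = dict(incoming)                      # client-supplied state
    entity["modified"] = datetime.datetime.now(  # server-controlled field
        datetime.timezone.utc).isoformat()
    entity["links"] = [{"rel": "self", "href": self_uri}]
    stored[self_uri] = entity
    return entity                                # echo the modified version

store = {}
result = apply_put(store, {"name": "contact-42"}, "/contacts/42")
```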
Hello! I just released a RESTful API I have been working on for some time and I would love to get your feedback on it. The main goal for me was to make this API accessible to developers without requiring them to read a lot of documentation before getting started. Specifically, this meant: (a) ensure that all resources can be reached via links (b) no need to learn how to construct URIs (c) explorable with an ordinary web browser (d) HTML representations with built-in documentation (e) ability for developer to see 'link information' on demand (f) all resources and relationships are discoverable You can have a look for yourself if you go here: https://beta.vcider.com Login as user "testuser" with password "testuser". What you see is our UI for the management and control of software defined (virtual) networks. There are a few nodes and networks all set up, so you can see what this looks like in the UI. But since you are there for the API, you can either go to the API documentation: https://beta.vcider.com/api/ Or you can go straight to the API itself: https://beta.vcider.com/api/root/ If you follow the last link (after you have logged in), you can see the HTML representation of the API root resource, which contains links to more information. A few things to try: * Click on 'show as JSON' to see the same in plain JSON. * Click on 'show linkinfo' to see what you can do with the links. * Click on the nodes list and then click on 'show related'. The ability to show related information (the last point) is a concession to the realities of client programming: Let's say the client needs to display an overview table of all the nodes in the system. For that, the client normally gets the list of all nodes (each entry in the list being a URI) and then accesses each URI to get information about the particular resource. This means the client possibly needs to issue lots of requests. 
When you click on 'show related' (which results in the addition of a query string parameter to the URI), you actually get a 'preview' of the referenced resources right in your list, so you will typically get all the information you need with just a single request. There are also some areas where I know I still have work to do: * Caching and the relevant headers are not handled at this point. * The content type is just plain "application/json". A note about authentication in case you are interested: If you log in via the web site, you are authenticated with the web-site's session (those sessions are part of the underlying MVC anyway). But for programmatic clients, you use a different approach, not based on sessions. You use special API credentials, which an account holder can create (look under 'Settings'). The client then calculates a message signature for each request. You can read more about how this is done here: https://beta.vcider.com/api3/ We also wrote a low-level and high-level client in Python, in case you are interested in seeing an example client implementation. You can find it on GitHub here: https://github.com/vCider/API Anyway, you guys know REST and I would love to hear your feedback on my attempt to design an API, which makes it easy for developers to get started by being 'human readable'. Thank you very much for any feedback... Juergen
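The 'show related' behaviour described above can be sketched roughly like this; the resource shapes and the `show_related` flag are invented for the example, and vCider's actual implementation may well differ.

```python
# Hedged sketch of inlining 'related' previews in a list resource:
# without the flag the client gets URIs only (N follow-up GETs);
# with it, each entry carries a preview of the referenced resource.
RESOURCES = {
    "/api/nodes/1": {"name": "node-1", "status": "up"},
    "/api/nodes/2": {"name": "node-2", "status": "down"},
}

def node_list(show_related=False):
    entries = []
    for uri in sorted(RESOURCES):
        entry = {"uri": uri}
        if show_related:
            entry["related"] = RESOURCES[uri]   # inline preview
        entries.append(entry)
    return {"nodes": entries}

plain = node_list()                    # URIs only: client must dereference each
expanded = node_list(show_related=True)  # one request, all previews
```

In the API itself the switch is a query string parameter on the collection URI, as the post describes.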
hi erik! well, i don´t know why i asked, because i´m doing it all the time and it makes sense if i think about it :) maybe i should go to bed right now ;) Thanks, Jakob --- In rest-discuss@yahoogroups.com, Erik Wilde <dret@...> wrote: > > hello jakob. > > On Dec 21, 2011, at 12:29, "Jakob Strauch" <jakob.strauch@...> wrote: > > If a client sends an entity to be replaced (PUT), may the server interfere with the entity and e.g. add hypermedia links or set/complete additional content? It could answer the request with the modified version... > > absolutely! any non-trivial server will do that for many server-controlled things such as modification date or other kinds of book-keeping. > > cheers, > > dret. >
That conversation has been had many times. There is a good list of what standard media types are available, through the IETF and the W3C. There is documentation available around. Adding yet another list to that will solve absolutely nothing, as it's been tried and has failed, for various reasons. That said, if after doing your research you have found the content, I'd argue indeed that the content was quite readily available. Changing people's views on what constitutes good web architecture is a marketing issue, and some members on this list are very well known for spending countless hours writing articles, presenting at conferences and broadcasting the gospel. I think that is where something ought to happen to reach broader adoption, but I have no idea what the avenue ought to be. I'm listening. ________________________________________ From: rest-discuss@yahoogroups.com [rest-discuss@yahoogroups.com] on behalf of Thomas Koch [thomas@...] Sent: 21 December 2011 13:13 To: rest-discuss@yahoogroups.com Subject: [rest-discuss] advertise media types Hi, one may wonder why so many developers don't "get" REST and do countless RPC-over-HTTP APIs and what could be done to advance REST. I've one idea. First my assumption: People don't get REST, because one of the main ideas is that you need common standard media types for REST. But people don't know about media types. I've done web sites for 6 years but only now, while doing research for my bachelor thesis, I discovered how many great media types there are that could be reused. And there is no advertisement for good media types. Where can I go and browse / search / filter a list of media types and look whether there is already something registered for my need? Regards, Thomas Koch, http://www.koch.ro
Jakob Strauch wrote: > If a client sends an entity to be replaced (PUT), may the server > interfere with the entity and e.g. add hypermedia links or set/complete > additional content? It could answer the request with the modified > version... > > i cannot find an answer to that issue in the specs, except: > "HTTP/1.1 does not define how a PUT method affects the state of an > origin server. " There has been some language added in httpbis that addresses this: "An origin server SHOULD verify that the PUT representation is consistent with any constraints which the server has for the target resource that cannot or will not be changed by the PUT. This is particularly important when the origin server uses internal configuration information related to the URI in order to set the values for representation metadata on GET responses. When a PUT representation is inconsistent with the target resource, the origin server SHOULD either make them consistent, by transforming the representation or changing the resource configuration, or respond with an appropriate error message containing sufficient information to explain why the representation is unsuitable. The 409 (Conflict) or 415 (Unsupported Media Type) status codes are suggested, with the latter being specific to constraints on Content-Type values." http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-17#section-6.6 Robert Brewer fumanchu@...
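The httpbis guidance can be paraphrased in code. Only the 409/415 status choices come from the quoted draft; the constraint being checked (an accepted content type plus a required field) is an invented example.

```python
# Hedged sketch of the httpbis PUT rule: verify the representation
# against server-side constraints, then accept it or reject with an
# appropriate status. ACCEPTED_TYPES and required_keys are assumptions.
ACCEPTED_TYPES = {"application/json"}

def check_put(content_type, representation, required_keys=("name",)):
    if content_type not in ACCEPTED_TYPES:
        # constraint on Content-Type values -> 415 per the draft
        return 415, "Unsupported Media Type"
    missing = [k for k in required_keys if k not in representation]
    if missing:
        # inconsistent with the target resource -> 409 per the draft
        return 409, "Conflict: missing %s" % ", ".join(missing)
    return 200, "OK"
```

Note the draft's other option: instead of rejecting, the server may "make them consistent, by transforming the representation", which is exactly the enriching behaviour discussed earlier in the thread.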
hello. On 2011-12-21 10:48 , Mike Kelly wrote: > On Wed, Dec 21, 2011 at 6:11 PM, Jan Algermissen <jan.algermissen@...> wrote: >> On Dec 21, 2011, at 2:13 PM, Thomas Koch wrote: >>> People don't get REST, because one of the main ideas is that you need common >>> standard media types for REST. But people don't know about media types. >> Yeah - you nailed it. The whole idea that media types are **the only** contract (besides URI, HTTP,..) in RESTful systems has completely, absolutely not made it into the current REST-hype bandwagon. Neither has the fact that designing media types is consequently *the* design activity to engage in when doing RESTful systems design. > Media types are not the only 'contract'. You're forgetting link relations. have to agree with mike here. representations are something that people at least are dimly aware of. link relationship types and the fact that without them, nothing works at all, is even more obscure than representations. so advertise link types as well! don't bury link relationship types in media types (or tightly couple them to a schema), instead design and document and publish link relationship types. not controversial at all, i just wanted to voice my support for this. cheers, dret.
If "REST is not about domain models" (as Glenn Block states it here: http://tech.groups.yahoo.com/group/rest-discuss/message/18117) how are we then going to solve machine-to-machine scenarious where it is all about specific domains? By not using REST? I know this is beating an old horse, but I still have issues with the idea of ignoring the domain of a system. I do se the benefits of not serializing internal object representations directly. That would expose the inner workings of the system. So adding some kind of public facing representation on top of it makes sense. Then you can freely change the inner workings without worrying about the clients (as long as you keep the transformation to the public representation in sync). What I do not understand is the fear of coupling the clients to the domain. I mean: if I am creating an M2M system working with e-commerce then it must surely understand what a quote, a sales order, a bill, and a payment is? Never mind what format these entities are represented in - if it's HTML than the client must have decoders for RDFa -> Quote/Order/Bill/whatever, and similar for XML, HAL, CSV and whatever other format one might choose. In the end the client must be able to decode from the wire format (public representation) to some kind of internal object representing the domain on the client side, such that it can do actual operations on it. So, as I see it, we cannot avoid some kind of decoding/deserialization on the client, where it will take the public representation and convert it to some internal domain specific representation - thereby coupling the client to the domain. What is it that I am failing to understand here? But maybe I am too single minded ... the systems I work with are all about M2M integrations, where one sub-system is reading data from another sub-system and then making decisions based on that data (meaning they must have some sort of domain understanding). 
In these cases we need to expose domain data, and the formats for that are usually vendor-specific derivations of XML or JSON. It could of course be possible to use PDF as a public format, but that would make it rather difficult to automatically extract the required domain specific data. Regards, Jørn
On Dec 21, 2011, at 7:48 PM, Mike Kelly wrote: > > Media types are not the only 'contract'. You're forgetting link relations. Yup - I keep lumping them together. Thanks for pointing that out. Jan > > Another, arguably better, alternative to simply minting a bespoke > media type for every application is to stick to a generic media type > which provides the key hypermedia properties you need (usually outward > links and embedding resources) and define your application in terms > of link relations. This is effectively how HTML apps work just with > text/images wrapped in anchor tags, instead of rels. > > Cheers, > Mike
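Mike's alternative, one generic format plus link relations carrying the application semantics, might look like this HAL-flavoured sketch. It is hand-rolled for the example; the rel names ("payment" as an app-level relation, "next" as a generic one) are assumptions.

```python
# Hedged sketch: a generic hypermedia shape (HAL-like "_links") where
# application semantics live in link relations, not in a bespoke
# media type. The client navigates by rel, never by hard-coded URI.
order = {
    "_links": {
        "self": {"href": "/orders/17"},
        "payment": {"href": "/orders/17/payment"},  # app-level relation
        "next": {"href": "/orders/18"},             # generic relation
    },
    "total": 42.5,
}

def follow(representation, rel):
    """Resolve a link by relation name; None if the rel is absent."""
    link = representation["_links"].get(rel)
    return link["href"] if link else None
```

This is the HTML model restated: anchors wrap generic content, and the rel (or, in HTML, often the link text) tells the client what following the link means.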
On Dec 22, 2011, at 1:12 AM, Erik Wilde wrote: > hello. > > On 2011-12-21 10:48 , Mike Kelly wrote: >> On Wed, Dec 21, 2011 at 6:11 PM, Jan Algermissen <jan.algermissen@...> wrote: >>> On Dec 21, 2011, at 2:13 PM, Thomas Koch wrote: >>>> People don't get REST, because one of the main ideas is that you need common >>>> standard media types for REST. But people don't know about media types. >>> Yeah - you nailed it. The whole idea that media types are **the only** contract (besides URI, HTTP,..) in RESTful systems has completely, absolutely not made it into the current REST-hype bandwagon. Neither has the fact that designing media types is consequently *the* design activity to engage in when doing RESTful systems design. >> Media types are not the only 'contract'. You're forgetting link relations. > > have to agree with mike here. representations are something that people at least are dimly aware of. link relationship types and the fact that without them, nothing works at all, is even more obscure than representations. so advertise link types as well! don't bury link relationship types in media types (or tightly couple them to a schema), instead design and document and publish link relationship types. +1 on documenting link relations in separate documents. Improves orthogonality. Jan > not controversial at all, i just wanted to voice my support for this. > > cheers, > > dret.
I just finished reading the "advertise media types" thread. And Eric and Mike hit exactly where it hurts :-) See http://tech.groups.yahoo.com/group/rest-discuss/message/18127: Mike> each time a dev > spits out custom XML or JSON serializations of internal objects, a > "new media type" is born (and i think someone kicks a cat, too). > Eric> Yeah, that would be me... Which leaves me with that feeling of "yes, I am not ignorant, I hear what you are saying, but apparently I am blind to the alternatives". There are endless numbers of different domains out there: we can fit it all into HTML for humans to read (the web has proven that already), but I cannot see how we can make clients for M2M interaction without hardwiring domain knowledge into them? As Paul states it "The web is not recompiled every morning - yet it works!". Sure! But we do not recompile it because humans are so good at adapting to whatever they see on a screen. But for a M2M system to work it surely must be recompiled in order to understand new or changed semantics of the domain it works with. Yes, it can be built such that new features are ignored by old clients and that is certainly worth striving for - but new clients are needed for new semantics. /Jørn
On Dec 22, 2011, at 7:50 AM, Jørn Wildt wrote:
> but I cannot see how
> we can make clients for M2M interaction without hardwiring domain knowledge
> into them?
Spot on :-) You must hard-wire domain knowledge into clients or communication won't happen. No matter how generic your media types are, you always end up putting knowledge about the domain into your code.
What REST changes is who *owns* the contract. In RPC systems, the contract is owned entirely by the server, allowing it to change the contract at will as long as it considers the effect on existing clients.
REST moves the contract away from the server to a 'global ownership' (e.g. IANA)[1] effectively leading to two things:
1. The server cannot change the contract at will; not without engaging in a
form of global communication about the change. Therefore server owners cannot
(by nature of the system) break clients - the contract cannot change all of
a sudden.
2. In order for servers to evolve without changing the global contract every time
the contract (i.e. media types) must allow for a certain amount of change.
This 'allowing for a certain amount of change' leads to greater client-side
development efforts - clients must be programmed to cope with the possible
changes enabled by the media types the client implements.
[1] 'Global' need not be world-wide. It can very well just mean 'global to your
enterprise' (for integration scenarios) - what matters is that no individual
server owns the contract but that an independent (for any definition of independent)
party is responsible.
>
> As Paul states it "The web is not recompiled every morning - yet it works!".
> Sure! But we do not recompile it because humans are so good at adapting to
> whatever they see on a screen. But for a M2M system to work it surely must
> be recompiled in order to understand new or changed semantics of the domain
> it works with.
No, this is not true thanks to content negotiation and well designed
extension rules for payload formats.
The cases where a server needs to switch from application/procurement-v1 to
application/procurement-v2 instantly, without a transition period of supporting
those application/procurement-v1 enabled clients is extremely rare. (Feel free to
construct an example).
The cool thing about REST is that any form of evolution can take place without
coordinating the change between clients, servers and intermediaries. It can even
happen without terminating *current* business transactions between clients and
servers. E.g. I can buy a book at Amazon at the same time they change their
system - or when was the last time you had to stop a purchase because of an
ongoing update?
This is so natural to Web users and I continue to find it astonishing that
business systems owners are not jumping like mad on that feature regarding their
internal IT systems.
Some start to get it, often in combination with agility goals, but the majority is
still light years 'behind'.
> Yes, it can be built such that new features are ignored by
> old clients and that is certainly worth striving for - but new clients are
> needed for new semantics.
Yes, sure. But that is a property of networked systems regardless of architectural
style.
Jan
>
> /Jørn
>
>
> What REST changes is who *owns* the contract. > [1] 'Global' need not be world-wide. It can very well just mean 'global to your enterprise' Okay. That (and the rest you wrote) makes good sense. But I still feel sorry for the cats that get kicked when someone mints a new media type. And, just to make it clear, by this: > > Yes, it can be built such that new features are ignored by > > old clients I was referring to the idea of conneg and versioning you mention here: > No, this is not true thanks to content negotiation and well designed > extension rules for payload formats. Unfortunately I wasn't clear about it. /Jørn
On Dec 22, 2011, at 9:05 AM, Jorn Wildt wrote: > > What REST changes is who *owns* the contract. > > > [1] 'Global' need not be world-wide. It can very well just mean 'global to your enterprise' > > Okay. That (and the rest you wrote) makes good sense. But I still feel sorry for the cats that get kicked when someone mints a new media type. Why? In an intra-enterprise context, the activity is not much different from designing some service's XML[1]. You only need to go that extra mile and make it global (and media type-ish instead of POX-ish of course). And sure, in m2m contexts, you have to take care not to suffer media type bloat. OTOH, the latter will be much better than XML schema and RPC-API bloat. REST-based thinking helps a lot to look outside the fence of your own current server implementation task. I'd prefer a hundred media types over a hundred SOAP APIs every time...besides, it is much more fun to design the former :-) (You won't be able to do e.g. enterprise integration with HTML semantics only) Jan [1] I found that usually, some perceived service (and perceived 'canonical' application) is the starting point for designing a media type. > > And, just to make it clear, by this: > > > > Yes, it can be built such that new features are ignored by > > > old clients > > I was referring to the idea of conneg and versioning you mention here: > > > No, this is not true thanks to content negotiation and well designed > > extension rules for payload formats. > > Unfortunately I wasn't clear about it. > > /Jørn > >
> > Okay. That (and the rest you wrote) makes good sense. But I still feel sorry for the cats that get kicked when someone mints a new media type. > > Why? Not because I am going to kick them, I believe myself to be completely aligned with your opinion. It's just that Mike and Eric's comment (http://tech.groups.yahoo.com/group/rest-discuss/message/18127), about kicking a cat every time a new media type is minted, is contrary to your view on media types. They might, though, be referring to different scenarios? > And sure, in m2m contexts, you have to take care not to suffer media type bloat. OTOH, the latter will be much better than XML schema and RPC-API bloat. Yes, having a (enterprise) central definition of a quote representation must be way better than ten different departments each having their own SOAP/schema for that. Thanks, Jørn
On Dec 22, 2011, at 7:50 AM, Jørn Wildt wrote: > There are > endless numbers of different domains out there One thing I forgot to say: There is a difference between protocol and service functionality and a client needs to consider two things: Why it chooses a specific service and how to interact with it. The service type is orthogonal to the question of what API it exposes (there might even be several APIs in several arch. styles on a single service). This is an aspect that is in my experience not well understood nor talked about. Certainly the SOA(P) world completely ignores this distinction, often implicitly substituting service API signature with service type. Many of the domain semantics you mention can be part of the service types defined in that domain. Clients (of course) need to understand these service types to make the very decision *which* service to talk to to achieve a certain goal. This decision includes quite some expectations about how the service will behave (sometimes referred to as 'intent'). Media types only need to capture the protocol semantics that enable communication; they need not (and must not) define service functionality. This is why you do not need an <order-placement-request> XML schema (or a placeOrder() method for that matter) - it is sufficient, for example, to base the actual communication on a domain-independent media type spec such as AtomPub and POST <order>s to an AtomPub server *that whoever configures the application knows to be an order-processing-service*. You can see this in code, if you consider that OrderProcessorService ops; Order order; ... ops.placeOrder(order); unnecessarily duplicates the knowledge of how ops behaves into the RPC API ( => placeOrder(Order order)). OrderProcessorService ops; Order order; ... ops.post(order); does exactly the same thing but this time through a uniform API. 
----- If you point your browser to Amazon to buy a book, you pick Amazon's service because you expect to be able to order books there. Your HTTP/HTML based browser is just an agent (sic!) realizing (together with the server component) your intended use case. If Amazon stopped selling books and turned into a weather-info service instead, no amount of RPC-style API-level tight coupling would fix that and magically enforce your use case. Jan
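Jan's placeOrder() vs post() contrast can be reduced to a runnable Python sketch. The class and method names are invented; the point is only where the intent lives: in the choice of service, not in the method signature.

```python
# Hedged sketch of a uniform interface: any resource accepts
# representations via the same post() operation, and what POSTing
# *means* is decided by which service the client picked.
class UniformService:
    """A resource that accepts representations via a uniform POST."""
    def __init__(self, uri):
        self.uri = uri
        self.collection = []

    def post(self, representation):
        # this server's meaning of POST: append to the collection
        # and return the location of the new member
        self.collection.append(representation)
        return "%s/%d" % (self.uri, len(self.collection) - 1)

# The client picked this URI *because* it knows (out of band) that it
# is an order-processing service - that is where the intent lives,
# not in a placeOrder() signature.
ops = UniformService("/orders")
location = ops.post({"item": "book", "qty": 1})
```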
On Thu, Dec 22, 2011 at 8:45 AM, Jan Algermissen
<jan.algermissen@...> wrote:
> On Dec 22, 2011, at 7:50 AM, Jørn Wildt wrote:
>> but I cannot see how
>> we can make clients for M2M interaction without hardwiring domain knowledge
>> into them?
We can't.
> Spot on :-) You must hard-wire domain knowledge into clients or communication won't happen. No matter how generic your media types is, you always end up putting in your code the knowledge about the domain.
We have to do some work! ;-)
>> As Paul states it "The web is not recompiled every morning - yet it works!".
>> Sure! But we do not recompile it because humans are so good at adapting to
>> whatever they see on a screen. But for a M2M system to work it surely must
>> be recompiled in order to understand new or changed semantics of the domain
>> it works with.
>
> No, this is not true thanks to content negotiation and well designed
> extension rules for payload formats.
Exactamente! Content negotiation is another underestimated and often
overlooked feature in REST/HTTP.
The robustness principle or Postel's law
(http://en.wikipedia.org/wiki/Postels_law) is fundamental to designing
and implementing good clients and servers:
Be liberal in what you accept, and conservative in what you send.
So assume as little as you can about the server in your design and
implementation of a client. And always assume the server will change.
Look on server data as duck-typed
(http://en.wikipedia.org/wiki/Duck_typing). If the data looks like
useful domain data you can assume it is.
We are talking about cross-enterprise system interfaces here. We cannot
expect all enterprises to march in order with regard to system
releases and upgrades!
> The cases where a server needs to switch from application/procurement-v1 to
> application/procurement-v2 instantly, without a transition period of supporting
> those application/procurement-v1 enabled clients is extremely rare. (Feel free to
> construct an example).
>
> The cool thing about REST is that any form of evolution can take place without
> coordinating the change between clients, servers and intermediaries.
Not only cool. But to me this is the *core* business rationale for
choosing REST over RPC-style APIs for cross-enterprise system
interfaces.
> This is so natural to Web users and I continue to find it astonishing that
> business systems owners are not jumping like mad on that feature regarding their
> internal IT systems.
I agree.
/Paul
--
Paul Cohen
www.seibostudios.se
mobile: +46 730 787 035
e-mail: paul.cohen@...
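Paul's Postel/duck-typing advice can be illustrated as a toy client-side parser. The field names are invented; the point is that unknown fields are ignored and a renamed field is tolerated, so a server upgrade does not break the old client.

```python
# Hedged sketch of a liberal client: take what looks like useful
# domain data (duck typing), ignore everything else, and always
# assume the server will change.
def parse_contact(data):
    contact = {}
    # accept the field we understand, in either old or new spelling
    contact["name"] = data.get("name") or data.get("full_name")
    contact["email"] = data.get("email")
    # any other fields (new server features) are deliberately ignored
    return contact

v1 = parse_contact({"name": "Ada", "email": "ada@example.org"})
v2 = parse_contact({"full_name": "Ada", "email": "ada@example.org",
                    "avatar": "/img/ada.png"})  # upgraded server, old client
```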
On Dec 22, 2011, at 9:40 AM, Jorn Wildt wrote:
> > > Okay. That (and the rest you wrote) makes good sense. But I still feel sorry for the cats that get kicked when someone mints a new media type.
> >
> > Why?
>
> Not because I am going to kick them, I believe myself to be completely aligned with your opinion.
Understood - I only meant to add my 2 cents to the discussion, not to 'correct' or argue with you. Sorry :-)
> It's just that Mike and Eric's comment (http://tech.groups.yahoo.com/group/rest-discuss/message/18127), about kicking a cat every time a new media type is minted, is contrary to your view on media types. They might, though, be referring to different scenarios?
Dunno. I *think* that Mike's and my thinking is rather aligned and that Eric is a tiny bit more in favor of doing stuff with link rels than suits my personal taste. (Just my impression over the years)
I usually favor real ('big') media types over what I would consider link relation bloat :-) because it can help to capture protocol semantics in a single document. I also like it when clients constitute an implementation of 'one big' thing as opposed to being a generic machine with link-rel plugins lumped into them.
However, design decisions depend on the actual case and I have done both with satisfaction. What it comes down to is, I guess, the idea that not every service should lead to a new media type. You should investigate your anticipated canonical use cases (the ones that inspire and drive your media type design in the first place) to find similar protocol aspects and then stuff those into 1-N media types - with N being not too large (hah! great advice :-)
Jan
>
> > And sure, in m2m contexts, you have to take care not to suffer media type bloat. OTOH, the latter will be much better than XML schema and RPC-API bloat.
>
> Yes, having a (enterprise) central definition of a quote representation must be way better than ten different departments each having their own SOAP/schema for that.
>
> Thanks, Jørn
>
>
> This is why you do not need an <order-placement-request> XML schema (or a placeOrder() method for that matter) - it is sufficient, for example, to base the actual communication on a domain-independent media type spec such as AtomPub and POST <order>s to an AtomPub server *that whoever configures the application knows to be an order-processing-service*. I am not sure I understand this part. What does "an <order-placement-request> XML schema" mean to you? I completely agree on not needing a placeOrder() method - doing a POST <order> to an appropriate collection is enough. But I don't see why using AtomPub makes any difference? I still have to POST a domain specific media type <order>? So having a generic media type like AtomPub doesn't remove the need for domain specific stuff? /Jørn
Jorn Wildt <jw@...> wrote on 22 December 2011 at 10:44:
> > This is why you do not need an <order-placement-request> XML schema (or a
> > placeOrder() method for that matter) - it is sufficient, for example, to
> > base the actual communication on a domain-independent media type spec such
> > as AtomPub and POST <order>s to an AtomPub server *that whoever configures
> > the application knows to be an order-processing-service*.
>
> I am not sure I understand this part. What does "an <order-placement-request>
> XML schema" mean to you?
I meant that the specific intent ("I send you this and request specifically
that you process it as an order") need not be part of the hypermedia. It is
already part of the act of picking *that* service.
>
> I completely agree on not needing a placeOrder() method - doing a POST <order>
> to an appropriate collection is enough. But I don't see why using AtomPub
> makes any difference? I still have to POST a domain specific media type <order>?
> So having a generic media type like AtomPub doesn't remove the need for domain
> specific stuff?
Yes, it does not remove the need to speak the same language. If you need to
transfer information, the receiver needs to be able to understand the structure
of the data; here, '<order .../>'. What I tried to point out is that the order is
orthogonal to the interaction intent. <order> will work in many other places,
too. It need not be owned by the service.
The specific coupling happens around the intent - and will regardless of
protocol style. Nothing in an RPC API call such as getPendingOrders() guarantees
that the service will actually put all the pending orders in the reply. That is
the client's intent and it is 'backed up' by the choice of a specific service
rather than a guarantee of the API spec.
Jan
>
> /Jørn
>> I meant that the specific intent ("I send you this and request specifically
> that you process it as an order") need not be part of the hypermedia. It is
> already part of the act of picking *that* service.
Ah! Thanks. Yes, by picking a specific URL (service) there is no need to further state the intention. The location/URL/service ~/orders plus the operation POST <order> becomes the operation "add-to-orders".
Thanks for clarifying.
> <order> will work in many other places
Yes, that's a nice feature.
BTW: based on this discussion (among other things) we decided to go with a REST API on the current project :-) Thanks a lot for the input.
/Jørn
Hi, say I've a server-side address book collection (/contacts) referencing individual contacts. I can mirror this collection to my laptop. Later I want to come back and synchronize changes from the server to my laptop. What would be a RESTful way of doing this? A complicated case is to signal the client that a contact has been deleted. My idea is to have a log resource for every collection that lists all changes that have happened to the resource and can be filtered with a "since" timestamp. This log resource would then be the only place that still has knowledge about deleted resources. Would an Atom feed be an appropriate media type for such a log? Is there a standard way to link from a collection to a feed containing the changes to this collection? Regards, Thomas Koch, http://www.koch.ro
hello thomas. > My idea is to have a log resource for every collection that lists all changes > that have happened to the resource and can be filtered with a "since" > timestamp. > This log resource would then be the only place that still has knowledge about > deleted resources. > Would an Atom feed be an appropriate media type for such a log? Is there a > standard way to link from a collection to a feed containing the changes to > this collection? this is a very common pattern and basically is a simplified case of monitoring a server-side process' state, only that the state in this case is a pretty static address, and not some business process that might go through more complex state changes advertised in the associated feed. anyway, yes, it's a good and RESTful way to do this. but as you noted, you'll need a way to signal DELETE changes to a resource in the associated feed. this is not possible in base atom. http://tools.ietf.org/html/draft-snell-atompub-tombstones has been in the pipeline for a long time and hopefully it will soon be completed, since it is essential for pretty much any scenario like yours. afaik, it already has been used in some implementations, and the idea is fairly simple, so i wouldn't be too worried about using it already. does anybody know what's holding up the tombstones? if there's anything i can do to push this effort, i am willing to do it. we will make heavy use of tombstones as well, and i'd be interested to refer to an RFC instead of a draft rather sooner than later. cheers, dret.
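As a minimal sketch of the scenario under discussion: a client could apply a change feed mixing ordinary Atom entries with at:deleted-entry tombstones (per draft-snell-atompub-tombstones) to its local mirror. The feed data and contact IDs below are hypothetical.

```python
# Sketch (hypothetical feed and IDs): apply a change feed that mixes
# Atom entries with at:deleted-entry tombstones
# (draft-snell-atompub-tombstones) to a local mirror of /contacts.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
AT = "http://purl.org/atompub/tombstones/1.0"

FEED = """<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:at="http://purl.org/atompub/tombstones/1.0">
  <entry><id>urn:contact:1</id><updated>2011-01-02T10:00:00Z</updated></entry>
  <at:deleted-entry ref="urn:contact:2" when="2011-01-03T12:00:00Z"/>
</feed>"""

def apply_changes(feed_xml, mirror):
    """Upsert changed entries, then drop tombstoned ones."""
    root = ET.fromstring(feed_xml)
    for entry in root.findall("{%s}entry" % ATOM):
        cid = entry.findtext("{%s}id" % ATOM)
        mirror[cid] = entry                # upsert the changed contact
    for tomb in root.findall("{%s}deleted-entry" % AT):
        mirror.pop(tomb.get("ref"), None)  # forget deleted contacts
    return mirror

mirror = apply_changes(FEED, {"urn:contact:2": "stale copy"})
```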
Jan Algermissen <jan.algermissen@...> wrote: > > > It's just that Mike and Eric's comment > > (http://tech.groups.yahoo.com/group/rest-discuss/message/18127), > > about kicking a cat every time a new media type is minted, is > > contrary to your view on media types. They may although be > > referring to different scenarios? > > Dunno. I *think* that Mike's and my thinking is rather aligned and > that Eric is a tiny bit more in favor of doing stuff with link rels > than suitable for my personal taste. (Just my impression over the > years) > Don't know how you got that impression; every time a new link relation is minted to express semantics already standardized in generic media types, a cat gets its tail stepped on. I've pushed back against "magic" link relations plenty. Don't worry about my cat, he's 19 and quite used to all the abuse... A proper nutshell summary of me, is that Atom / HTML+RDFa provide sufficient semantics for most any generic object interface. I have yet to create a media type, and object strongly to the notion that REST developers should mint new media types for every project. When Roy says, "A REST API should spend almost all of its descriptive effort in defining the media type(s) used for representing resources and driving application state" I interpret that to mean something like, "HTML <object> syntax is used to communicate URI templates" suffices as documentation. Not, "a new media type has been created to drive application state using URI templates." -Eric
hello eric. > A proper nutshell summary of me, is that Atom / HTML+RDFa provide > sufficient semantics for most any generic object interface. I have yet > to create a media type, and object strongly to the notion that REST > developers should mint new media types for every project. if you use RDFa, you introduce another vocabulary through the back door. that's fine with me, i just want to point out that this is no different from using XML and application/xml as media type and then depend on applications understanding the schema being used by picking up hints such as DOCTYPE or namespaces or xsi:schemaLocation. if you have specific application domain concepts that must be understood by clients to engage in meaningful interaction patterns, then there are only variations of how to push this complexity somewhere. the magic of HTML is that the interaction semantics are application-agnostic, and the same is true for Atom, but for any application that cannot depend on an intelligent person driving the client through every single interaction, you need more semantics to make things work. personally (just like you, i think), i am in favor of using fewer media types and having vocabularies in there which must be understood by clients to continue their path, but i do have certain sympathy for the argument that exposing this vocabulary in a visible way on the protocol level has some advantages, too. cheers, dret,
On Thu, Dec 22, 2011 at 8:11 PM, Eric J. Bowman <eric@...> wrote: > A proper nutshell summary of me, is that Atom / HTML+RDFa provide > sufficient semantics for most any generic object interface. I have yet > to create a media type, and object strongly to the notion that REST > developers should mint new media types for every project. The problem with both Atom and RDF(a) is that they have significant baggage and aren't particularly simple. I think a large part of the reason the generic media type approach has not been adopted more widely is due to this. Hopefully HAL will change this trend since it's designed with REST and simplicity in mind - we've had good feedback and adoption seems to be growing. I'm surprised I haven't had more input from folks such as yourself, Eric, as I think your input and support would be valuable at this relatively early stage. Cheers, Mike
"Jorn Wildt" wrote: > > I completely agree on not needing a placeOrder() method - doing a > POST <order> to an appropriate collection is enough. But I don't see > why using AtomPub makes any change? I still have to POST a domain > specific media type <order>? So having a generic media type like > AtomPub doesn't remove the need for domain specific stuff? > REST is not about distributed objects, it's about distributed object interfaces. Domain-specific vocabulary isn't exposed at the protocol layer, it's embedded within generic-media-type representations. Think of anything you've ever ordered online. The interface is HTML, not some domain-specific media type. Yet somehow this works. What you POST is really just some name-value pairs. HTML has loads of semantics for expressing name-value pairs. What HTML lacks is a way of expressing the meaning of a name. Which is where RDFa comes in. The GoodRelations ontology allows decoupled clients to determine that a name refers to a part #, and another name refers to the unit price, allowing an agent to comparison-shop between multiple services whose markup varies widely. -Eric
Eric, we have had this discussion before: > What HTML lacks is a way of expressing the meaning of a name. Which is where RDFa comes in. And I cannot disagree with you. Yes, HTML + RDFa + POST <application/x-www-form-urlencoded> can solve just about anything you throw at it. > REST is not about distributed objects, it's about distributed object interfaces. Domain-specific vocabulary isn't exposed at the protocol layer, it's embedded within generic-media-type representations. So application/xml could be just as good as text/html since it doesn't expose the domain vocabulary at the protocol layer? Let me answer that question myself: yes, but there is something missing - in HTML you get hypermedia semantics (links, forms) for free whereas in XML you have to reinvent them for every vocabulary. So the dilemma is: By leveraging HTML you get full hypermedia semantics but have to embed domain data in it using tricks like RDFa. On the other hand, by using plain XML you have a straightforward way to encode domain data but have no predefined hypermedia semantics. Right? /Jørn ----- Original Message ----- From: Eric J. Bowman To: Jorn Wildt Cc: rest-discuss@yahoogroups.com Sent: Thursday, December 22, 2011 10:23 PM Subject: Re: [rest-discuss] Re: REST is not about domain models "Jorn Wildt" wrote: > > I completely agree on not needing a placeOrder() method - doing a > POST <order> to an appropriate collection is enough. But I don't see > why using AtomPub makes any change? I still have to POST a domain > specific media type <order>? So having a generic media type like > AtomPub doesn't remove the need for domain specific stuff? > REST is not about distributed objects, it's about distributed object interfaces. Domain-specific vocabulary isn't exposed at the protocol layer, it's embedded within generic-media-type representations. Think of anything you've ever ordered online. The interface is HTML, not some domain-specific media type. Yet somehow this works. 
What you POST is really just some name-value pairs. HTML has loads of semantics for expressing name-value pairs. What HTML lacks is a way of expressing the meaning of a name. Which is where RDFa comes in. The GoodRelations ontology allows decoupled clients to determine that a name refers to a part #, and another name refers to the unit price, allowing an agent to comparison-shop between multiple services whose markup varies widely. -Eric
Jørn Wildt wrote: > > So application/xml could be just as good as text/html since it > doesn't expose the domain vocabulary at the protocol layer? Let me > answer that question myself: yes, but there is something missing - in > HTML you get hypermedia semantics (links, forms) for free whereas in > XML you have to reinvent it for every vocabulary. > Well, yeah, there's that. > > So the dilemma is: By leveraging HTML you get full hypermedia > semantics but have to embed domain data in it using tricks like RDFa. > On the other hand, by using plain XML you have a straightforward way > to encode domain data but have no predefined hypermedia semantics. > Right? > Me in a nutshell: if an m2m process is dealing with data which amounts to a purchase order, why not make its interface *look* like a purchase order? The extra overhead is rendered moot in REST if you're using compression, caching, and a progressively-renderable media type. Don't un-solve the problem of network latency when transferring data from point A to point B for the sake of saving a few bytes, it's a false economy with cumulative consequences over the long term. By my definition, hypertext semantics don't just apply to linking and forms, but to semantic structure. IOW, hypertext includes not just making an abbreviation a link to a definition, but <abbr title='etcetera'>etc.</abbr> as well. More importantly, various list markup for describing lists of whatever, plus table markup for describing tabular data; and the ability to nest these within one another, allows the structure of the object interface to convey the relationship between linked/embedded objects even if those objects have complex structures. What I'm saying is, the structure of a purchase-order object is tabular, so its interface may be modeled as a <table>. This <table> can be manipulated via XForms and PUT in whole, or the <table> can contain a <form> or two to POST partial data. 
Following the accessibility guidelines makes such documents both human and machine readable. In the m2m case, there's more work to do. If this work builds on human-readable code, there's no need to then go back and make an m2m format human-readable and accessible so it can be understood by those charged with maintaining or interoperating with it. Or worry about making it stream processable. I can infer the meaning of a purchase order presented as a table easily enough, column B x column C = column D while the sum of column D is the total cost. Describing this to a machine user is the challenge, but there's an ontology for that -- which applies to application/xml too. The difference is that the same data structure presented as semantic HTML is readily apparent to anyone who needs to understand that data structure (for interoperability or maintenance), plus its progressive rendering rules are well-known so it's stream processable. Not to mention browser-based test/debug. I don't have to learn some new, undocumented media type by reverse-engineering a system's functionality. In 20 years, if I'm reverse-engineering a system written in obscured code in a dead language, the job is much easier if I can understand the data format just by looking at it. HTML is a great language not just for presenting a purchase-order interface, but also for inlining the documentation for that interface within the data-exchange format. Interoperability is easier if external organizations don't even need to know about your internal objects; they're free to come up with their own provided they comply with your API. You get to change your objects that way, too. None of this is possible when data formats are object serializations rather than object interfaces. Hypertext also includes vector-based images, or in HTML the ability to embed an image, where the domain-specific definition of the image has a specific meaning (using a GIF as an array). 
Anyway, my point is that there's an awful lot of meaning which can be conveyed by the structure of the markup, instead of by overloading link relations or any RDF implementation (embedded/RDFa or linked, or microdata). Such meaning is standardized, how you're using it is what you document about your API. You can usually do this in-line, for human and machine users. HTML wrapped with Atom may be used to define most REST APIs I can think of, and if I go that route, many of the benefits of REST immediately accrue. Ubiquitous media types provide stream-processable, massively-cacheable, widely-interoperable, long-term-stable data modeling for object interfaces. Why _wouldn't_ I want all that, from the start, off-the-shelf, re-usable from one project to the next -- absolutely free? Creating new media types seems like work, I prefer to REST... -Eric
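Eric's "column B x column C = column D" reading of a purchase-order table can be sketched as follows; the order data is hypothetical, and the point is only that the arithmetic invariant is recoverable from the table structure itself.

```python
# Sketch (hypothetical order data): infer purchase-order arithmetic
# from a semantic HTML table -- qty x unit price = line total,
# and the line totals sum to the grand total.
import xml.etree.ElementTree as ET

TABLE = """<table>
  <tr><th>Item</th><th>Qty</th><th>Unit price</th><th>Line total</th></tr>
  <tr><td>widget</td><td>3</td><td>4.50</td><td>13.50</td></tr>
  <tr><td>gasket</td><td>10</td><td>0.25</td><td>2.50</td></tr>
</table>"""

root = ET.fromstring(TABLE)
rows = [[td.text for td in tr.findall("td")]
        for tr in root.findall("tr")[1:]]           # skip the header row
for item, qty, unit, line in rows:
    assert float(qty) * float(unit) == float(line)  # B x C = D
total = sum(float(line) for *_, line in rows)       # sum of D = total cost
```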
Eric, you certainly make a good case for HTML. The only problem I have with it is the development overhead needed for reading out semantic data in M2M scenarios. Why? Because there is no standard generic way of embedding machine readable data in HTML. Yes, it is possible to use tables and other markup elements as you describe. But you will have to handcraft the decoding on the client side for each and every resource type. > I don't have to learn some new, undocumented media type by reverse-engineering a system's functionality. This is where I do not agree. Yes, the media type is well known, but the embedding format is not. Now you need to reverse-engineer how the domain data is embedded. I have the same problem with microformats: they work nicely with HTML, but you have to write specialized encoders/decoders for each and every format out there. Compare this to a generic serialization format like XML or JSON where encoding and decoding comes for free for just about all object formats. RDFa could be a solution to this, but its triplet semantics requires some thinking out of the box for object encoding. And, yes, I am a big fan of tools that help consumers of my API getting started without too much overhead. For instance by supplying an XML schema they can use to auto-generate code - the .NET XML serializer has no problem ignoring unknown elements so the server can evolve without breaking the client. Now, if it was possible to find a generic and standardized way of embedding machine readable data in HTML such that you could have generic serializers as if it were XML, then I would love to use it! /Jørn
Hi folks. I hope you had a nice Christmas with your friends and families - and are ready for yet another season of Rest-Discuss :-) Sorry for bringing up an old issue; I have trawled through earlier discussions but haven't found any conclusive posts that solve the problem. Assume we have a resource with a sales order. Such a sales order can be represented for an end-user as either HTML or a spreadsheet. With content negotiation that's easily done with accept headers for text/html or application/vnd.ms-excel (or similar). But browsers don't let the end-user switch accept headers, so instead we define HTML as the default type, and add a link to where the spreadsheet can be downloaded. Now we add M2M support and decide to represent the sales order in HAL or maybe even in HTML using RDFa or some microformat. But we must support two different ways of embedding domain specific data - either the (fictional) Scandinavian Sales Order Vocabulary or the (also fictional) US Sales Order Vocabulary. How should we approach that problem? First of all, we cannot do content negotiation and switch on the media type because 1) we don't want to mint new domain specific media types, and 2) because we want to use the same media type but with different vocabularies/schemas. Second, the prerequisites might even be wrong: is supporting multiple vocabularies the right thing to do? Let's assume it is right and continue on (but you are most welcome to question this!). I can see three different approaches: 1) Switch on another HTTP header (but which one? accept-language? a custom one?) 2) Use different URIs for different vocabularies (as with the browser solution). 3) Include both vocabularies in the response. If different URIs are used then we need to be able to discover them. This could be done with a link header/HTML anchors + link relation types. 
The downside is (as always with different URIs) that we don't have one single URL that represents the sales order - unless we accept the overhead of always doing a two-step fetch, and store the base URI, fetch that, look for link-rels and fetch the required representation. Personally I would rather avoid the two-step fetch and do vocabulary negotiation using headers - but, on the other hand, this is perfectly acceptable in a browser-based scenario, so why not also use it in M2M scenarios (caching of the first step would even reduce the problem). Somehow it seems like HTTP is missing a way of doing sub-content negotiation, where the request defines the accepted media type, and together with this also defines the acceptable sub-content-format/vocabulary. According to RFC 2616 (HTTP 1.1) the accept header can take a rather generic extension ";token = token" (for instance ";level=1"), but the standard fails to explain what it is supposed to be used for. Can it be used for exactly the scenario above? Comments? Thanks, Jørn
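As a sketch of what that ";token=token" extension might carry, here is a tiny Accept-header parser with a hypothetical "profile" parameter naming the vocabulary. RFC 2616 permits the syntax but assigns it no meaning, so both the parameter name and the vocabulary URIs are purely illustrative.

```python
# Sketch (hypothetical 'profile' extension parameter; RFC 2616 allows
# the ";token=token" syntax but does not define its meaning).
def parse_accept(header):
    """Split an Accept header into (media-type, params) pairs.
    Naive: does not handle commas inside quoted strings."""
    result = []
    for part in header.split(","):
        bits = [b.strip() for b in part.split(";")]
        params = dict(b.split("=", 1) for b in bits[1:])
        result.append((bits[0], params))
    return result

accepted = parse_accept(
    'text/html;profile="http://example.org/vocab/sales-order-us", text/html')
# First variant asks for a specific vocabulary; second is the plain fallback.
```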
I had this queued up as a draft, posted when I saw your other thread. Jørn Wildt wrote: > > Eric, you certainly make a good case for HTML. The only problem I > have with it is the development overhead needed for reading out > semantic data in M2M scenarios. Why? Because there is no standard > generic way of embedding machine readable data in HTML. > Well, yes, RDFa and microdata. > > Yes, it is possible to use tables and other markup elements as you > describe. But you will have to handcraft the decoding on the client > side for each and every resource type. > Yes, with microformats, but the problem of needing a parser for every vocabulary is solved by RDFa, so I don't understand your reservations about it. > > > I don't have to learn some new, undocumented media type by reverse-engineering a system's functionality. > > This is where I do not agree. Yes, the media type is well known, but > the embedding format is not. Now you need to reverse-engineer how > the domain data is embedded. > > I have the same problem with microformats: they work nicely with > HTML, but you have to write specialized encoders/decoders for each > and every format out there. > An RDFa parser can easily generate name-value pairs. How those are interpreted by your system is up to you. > > Compare this to a generic serialization format like XML or JSON where > encoding and decoding comes for free for just about all object > formats. > I don't want to. I don't consume RDFa as RDF, I read it with XPath. I even use RDFa attributes in my CSS to make it more maintainable, vs. making up @class / @id values for each project. What I'm trying to accomplish is making my hypertext accessible to others, including those who wish to consume it as RDF, even though that isn't how _I_ build systems. General-purpose language for exchanging object interfaces, is all I'm looking for. > > RDFa could be a solution to this, but its triplet semantics requires > some thinking out of the box for object encoding. 
> Which is why I keep saying we're not encoding objects in hypertext, we're using hypertext to provide a generic object interface. All RDFa is to me, is a way to annotate this interface; then I struggle with making it sensible to RDFa parsers and let anyone who wants to consume it worry about relating whatever my triplet semantics happen to be, back to the problem at hand. It isn't my starting point. -Eric
Hi Jørn, On 12/27/2011 10:40 AM, Jorn Wildt wrote: [skip] > Now we add M2M support and decide to represent the sales order in HAL or > maybe even in HTML using RDFa or some microformat. But we must support > two different ways of embedding domain specific data - either the > (fictional) Scandinavian Sales Order Vocabulary or the (also fictional) > US Sales Order Vocabulary. How should we approach that problem? > You can do this by utilising RDF(a) and provide knowledge representations that utilise both vocabularies at once (side by side), i.e., the result would be a kind of duplicated knowledge representation. Or you can provide a mapping between both vocabularies, deliver your information by only utilising one vocabulary, and provide further information that makes the machine client aware of the vocabulary mapping (a rather generic property for this is rdfs:seeAlso). > First of all, we cannot do content negotiation and switch on the media > type because 1) we don't want to mint new domain specific media types, > and 2) because we want to use the same media type but with different > vocabularies/schemas. I wouldn't recommend to switch the media type, but to utilise a rather generic media type instead. > > Second, the prerequisites might even be wrong: is supporting multiple > vocabularies the right thing to do? Lets assume it is right and continue > on (but you are most welcome to question this!). I guess, this depends on the intended clients. They have to know how to process retrieved information (incl. their vocabularies). In a kind of ideal open world with "shared meaning" and "shared understanding", maybe one vocabulary would be enough, e.g., the rather generic GoodRelations Vocabulary for e-commerce. Cheers, Bo PS: I do not know the content of your mentioned vocabularies. However, by reading their names it rather looks like a localisation problem (or maybe a wrong vocabulary design; e.g. do not design too close).
On Dec 27, 2011, at 10:40 AM, Jorn Wildt wrote: > Assume we have a resource with a sales order. Such a sales order can be represented for an end-user as either HTML or a spreadsheet. With content negotiation that's easily done with accept headers for text/html or application/vnd.ms-excel (or similar). But browsers don't let the end-user switch accept headers, so instead we define HTML as the default type, and add a link to where the spreadsheet can be downloaded. > > Now we add M2M support and decide to represent the sales order in HAL or maybe even in HTML using RDFa or some microformat. But we must support two different ways of embedding domain specific data - either the (fictional) Scandinavian Sales Order Vocabulary or the (also fictional) US Sales Order Vocabulary. How should we approach that problem? Why don't you leverage an existing format, UBL in this case? UBL has no associated media type (yet) but that should not stop you from going ahead and minting a name for your purposes. > > First of all, we cannot do content negotiation and switch on the media type because 1) we don't want to mint new domain specific media types, I'd suggest you do. It has no significant cost and allows you to work within the Web's architecture with ease. Why create fixes for problems that do not exist? (Smells enterprisey :-) > and 2) because we want to use the same media type but with different vocabularies/schemas. Huh? > > Second, the prerequisites might even be wrong: is supporting multiple vocabularies the right thing to do? Let's assume it is right and continue on (but you are most welcome to question this!). My personal advice: Embrace the Web, do simple, straightforward stuff and defer fixing problems until they exist (e.g. why do you consider media type bloat a problem?) Applying REST to enterprise IT problems does not mean tweaking REST, it means changing your view of the problem. > > I can see three different approaches: > > 1) Switch on another HTTP header (but which one? 
accept-language? a custom one?) No > > 2) Use different URIs for different vocabularies (as with the browser solution). That is always good design. Variants should be given their own URIs. Explore Content-Location header aspects for this. > > 3) Include both vocabularies in the response. No > > If different URIs are used then we need to be able to discover them. This could be done with a link header/HTML anchors + link relation types. Check the Alternates header > The downside is (as always with different URIs) that we don't have one single URL that represents the sales order - unless we accept the overhead of always doing a two-step fetch, and store the base URI, fetch that, look for link-rels and fetch the required representation. No problems here - see all hints above. Conneg already handles these things well. (Digest all the specs before you optimize ;-) > > Personally I would rather avoid the two-step fetch and do vocabulary negotiation using headers - but, on the other hand, this is perfectly acceptable in a browser-based scenario, so why not also use it in M2M scenarios (caching of the first step would even reduce the problem). See above; HTTP & friends already give you all you need. > > Somehow it seems like HTTP is missing a way of doing sub-content negotiation, where the request defines the accepted media type, and together with this also defines the acceptable sub-content-format/vocabulary. There is AFAIK no notion of 'sub-content-format/vocabulary'. If you must insist on sth like this, check specs like Accept-Features header or the various Profile-mechanisms. (Sorry, no time to dig up the links). > > According to RFC 2616 (HTTP 1.1) the accept header can take a rather generic extension ";token = token" (for instance ";level=1"), but the standard fails to explain what it is supposed to be used for. Can it be used for exactly the scenario above? > > Comments? > Think hard whether you actually have the problems you think you have. 
Meanwhile, give the Enterpriseys in your team something else to chew on :-) Jan > Thanks, Jørn > >
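Jan's pointers might combine like this in practice: one generic URI that negotiates, with Content-Location naming the chosen variant and a link advertising the alternative. All URIs are hypothetical; the Alternates header (from the experimental RFC 2295) could carry the same information.

```http
GET /orders/42 HTTP/1.1
Host: example.org
Accept: text/html

HTTP/1.1 200 OK
Content-Type: text/html
Content-Location: /orders/42.us.html
Link: </orders/42.scan.html>; rel="alternate"
```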
> Why don't you leverage an existing format, UBL in this case? That's cheating and avoiding the question :-) The assumption is that we actually have two different formats to support. According to Wikipedia that is not even so unrealistic - apparently there are regional variations of it (http://en.wikipedia.org/wiki/Universal_Business_Language). > > First of all, we cannot do content negotiation and switch on the media type because 1) we don't want to mint new domain specific media types, > > I'd suggest you do. It has no significant cost and allows you to work within the Web's architecture with ease. It should be obvious, from the previous discussions in this group, that different schools exist regarding this issue. Right now I am trying to make up my own mind whether I belong to one or the other school. This question is for the "do not mint new media types"-school. > (e.g. why do you consider media type bloat a problem?) I haven't yet decided whether it is a problem or not :-) Right now I am exploring the consequences of the different approaches. One thing that bothers me though, is that media types are considered protocol layer information. For me that means putting domain specific knowledge into it is like asking the TCP/IP stack to be aware of Cats, Dogs, Sales orders and whatever else we come up with. Thanks for the pointers to other headers! /Jørn --- In rest-discuss@yahoogroups.com, Jan Algermissen <jan.algermissen@...> wrote: > > > On Dec 27, 2011, at 10:40 AM, Jorn Wildt wrote: > > > Assume we have a resource with a sales order. Such a sales order can be represented for an end-user as either HTML or a spreadsheet. With content negotiation that's easily done with accept headers for text/html or application/vnd.ms-excel (or similar). But browsers don't let the end-user switch accept headers, so instead we define HTML as the default type, and add a link to where the spreadsheet can be downloaded. 
> > > > Now we add M2M support and decide to represent the sales order in HAL or maybe even in HTML using RDFa or some microformat. But we must support two different ways of embedding domain specific data - either the (fictional) Scandinavian Sales Order Vocabulary or the (also fictional) US Sales Order Vocabulary. How should we approach that problem? > > Why don't you leverage an existing format, UBL in this case? UBL has no associated media type (yet) but that should not stop you from going ahead and minting a name for your purposes. > > > > > First of all, we cannot do content negotiation and switch on the media type because 1) we don't want to mint new domain specific media types, > > I'd suggest you do. It has no significant cost and allows you to work within the Web's architecture with ease. Why create fixes for problems that do not exist? (Smells enterprisey :-) > > > and 2) because we want to use the same media type but with different vocabularies/schemas. > > Huh? > > > > > Second, the prerequisites might even be wrong: is supporting multiple vocabularies the right thing to do? Let's assume it is right and continue on (but you are most welcome to question this!). > > My personal advice: Embrace the Web, do simple, straightforward stuff and defer fixing problems until they exist (e.g. why do you consider media type bloat a problem?) > > Applying REST to enterprise IT problems does not mean tweaking REST, it means changing your view of the problem. > > > > > I can see three different approaches: > > > > 1) Switch on another HTTP header (but which one? accept-language? a custom one?) > > No > > > > > 2) Use different URIs for different vocabularies (as with the browser solution). > > That is always good design. Variants should be given their own URIs. Explore Content-Location header aspects for this. > > > > > 3) Include both vocabularies in the response. > > No > > > > > If different URIs are used then we need to be able to discover them. 
This could be done with a link header/HTML anchors + link relation types. > > Check the Alternates header > > > The downside is (as always with different URIs) that we don't have one single URL that represents the sales order - unless we accept the overhead of always doing a two-step fetch, and store the base URI, fetch that, look for link-rels and fetch the required representation. > > No problems here - see all hints above. Conneg already handles these things well. (Digest all the specs before you optimize ;-) > > > > > Personally I would rather avoid the two-step fetch and do vocabulary negotiation using headers - but, on the other hand, this is perfectly acceptable in a browser-based scenario, so why not also use it in M2M scenarios (caching of the first step would even reduce the problem). > > See above; HTTP & friends already give you all you need. > > > > > Somehow it seems like HTTP is missing a way of doing sub-content negotiation, where the request defines the accepted media type, and together with this also defines the acceptable sub-content-format/vocabulary. > > There is AFAIK no notion of 'sub-content-format/vocabulary'. If you must insist on sth like this, check specs like Accept-Features header or the various Profile-mechanisms. (Sorry, no time to dig up the links). > > > > > According to RFC 2616 (HTTP 1.1) the accept header can take a rather generic extension ";token = token" (for instance ";level=1"), but the standard fails to explain what it is supposed to be used for. Can it be used for exactly the scenario above? > > > > Comments? > > > > Think hard whether you actually have the problems you think you have. Meanwhile, give the Enterpriseys in your team something else to chew on :-) > > > Jan > > > Thanks, Jørn > > > > >
Jan Algermissen wrote: > > Applying REST to enterprise IT problems does not mean tweaking REST, > it means changing your view of the problem. > +5 -Eric
On Dec 27, 2011, at 1:13 PM, Jorn Wildt wrote: > > Why don't you leverage an existing format, UBL in this case? > > That's cheating and avoiding the question :-) The assumption is that we actually have two different formats to support. According to Wikipedia that is not even so unrealistic - apparently there are regional variations of it (http://en.wikipedia.org/wiki/Universal_Business_Language). Then maybe I misread your question - I did not understand you were focusing on handling the regional variants. OTOH, I'd start with the assumption that two variants are needed. Can't you a) control the format in the first place or b) allow for the variations within the same overall format? And let the client figure out its desired data? > > > > First of all, we cannot do content negotiation and switch on the media type because 1) we don't want to mint new domain specific media types, > > > > I'd suggest you do. It has no significant cost and allows you to work within the Web's architecture with ease. > > It should be obvious, from the previous discussions in this group, that different schools exist regarding this issue. I don't think that there are really two schools when it comes to embedding stuff like complete order data within e.g. HTML. Maybe post an example of how this would look? > Right now I am trying to make up my own mind whether I belong to one or the other school. This question is for the "do not mint new media types"-school. > > > (e.g. why do you consider media type bloat a problem?) > > I haven't yet decided whether it is a problem or not :-) Right now I am exploring the consequences of the different approaches. One thing that bothers me though, is that media types are considered protocol layer information. For me that means putting domain specific knowledge into it is like asking the TCP/IP stack to be aware of Cats, Dogs, Sales orders and whatever else we come up with. Note that HTTP *is* the application layer. It is not transport. 
The need for media types is simply the consequence of the uniform interface. Knowing about cats is just the same as knowing about images or style sheets in the HTML case. It's fine. Just don't describe functionality (what a server will actually *do*) in the media type. > > Thanks for the pointers to other headers! > I think example messages would help to communicate your 'problem'. Jan > /Jørn
> > It should be obvious, from the previous discussions in this group, that different schools exist regarding this issue. > > I don't think that there are really two schools when it comes to embedding stuff like complete order data within e.g. HTML. Maybe post an example of how this would look? Sorry if I wasn't clear about what "this issue" refers to: I was talking about minting new domain specific media types. Like having a sales order specific media type as for instance "application/vnd.mycompany.salesorder+xml". > I think example messages would help to communicate your 'problem'. Well, the examples are fictitious from the start, so I cannot give that. As I said, I am exploring the consequences of *not* minting new media types and wondering how one would then do content/vocabulary negotiation. /Jørn
On Dec 27, 2011, at 2:28 PM, Jorn Wildt wrote: > > Well, the examples are fictitious from the start, so I cannot give that. As I said, I am exploring the consequences of *not* minting new media types and wondering how one would then do content/vocabulary negotiation. Ah, ok. BTW, you name one reason here how non-specific media types + link rels and friends take away control that Web arch actually intends to give you. Besides Accept-Features you could do sth. like this: GET /orders/1 Accept: application/xhtml;profile=us-order 200 OK Content-Type: application/xhtml However, I am not a fan of this because you lose visibility (media type parameters other than 'q' are opaque to conneg) and you also do not know which intermediary will silently strip the param (which it may). I also think you simply end up creating the bloat you are trying to avoid in another place (the profile or feature tag). My advice remains: mint a media type, make it flexible enough to incorporate whatever variations you might have and have the clients deal with picking the stuff they want. Jan P.S. You can also do Conneg on User-Agent but I have not yet made up my mind whether that is a good idea. > > /Jørn > >
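A server-side sketch of the profile-parameter negotiation Jan describes above. The helper functions and the vocabulary names ("us-order", "scan-order") are assumptions for illustration, not anything defined in the thread:

```python
# Sketch: dispatch on a 'profile' media type parameter in the Accept
# header, as in "Accept: application/xhtml;profile=us-order".
# All names here are hypothetical.

def parse_accept(accept_header):
    """Parse an Accept header into (media_type, params) tuples."""
    entries = []
    for part in accept_header.split(","):
        pieces = [p.strip() for p in part.split(";")]
        media_type, params = pieces[0], {}
        for p in pieces[1:]:
            if "=" in p:
                k, v = p.split("=", 1)
                params[k.strip()] = v.strip()
        entries.append((media_type, params))
    return entries

def choose_vocabulary(accept_header, supported=("us-order", "scan-order")):
    """Pick the first requested profile we support, else a default."""
    for media_type, params in parse_accept(accept_header):
        profile = params.get("profile")
        if profile in supported:
            return profile
    return supported[0]
```

Note that this illustrates Jan's visibility objection as well: the profile value is invisible to standard conneg machinery, so only application code like this can act on it.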
<snip> I am exploring the consequences of *not* minting new media types and wondering how one would then do content/vocabulary negotiation. </snip> It's not clear to me that *not* minting media types *requires* content/vocabulary negotiation. why have you chosen to explore these two things in this related way? mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Tue, Dec 27, 2011 at 10:45, Jan Algermissen <jan.algermissen@...> wrote: > > On Dec 27, 2011, at 2:28 PM, Jorn Wildt wrote: >> >> Well, the examples are fictitious from the start, so I cannot give that. As I said, I am exploring the consequences of *not* minting new media types and wondering how one would then do content/vocabulary negotiation. > > Ah, ok. BTW, you name one reason here how non-specific media types + link rels and friends take away control that Web arch actually intends to give you. > > Besides Accept-Features you could do sth. like this: > > GET /orders/1 > Accept: application/xhtml;profile=us-order > > 200 OK > Content-Type: application/xhtml > > However, I am not a fan of this because you lose visibility (media type parameters other than 'q' are opaque to conneg) and you also do not know which intermediary will silently strip the param (which it may). > > I also think you simply end up creating the bloat you are trying to avoid in another place (the profile or feature tag). > > My advice remains: mint a media type, make it flexible enough to incorporate whatever variations you might have and have the clients deal with picking the stuff they want. > > Jan > > P.S. You can also do Conneg on User-Agent but I have not yet made up my mind whether that is a good idea. > > > > >> >> /Jørn >> >> > > > > ------------------------------------ > > Yahoo! Groups Links > > >
On Tue, Dec 27, 2011 at 9:40 AM, Jorn Wildt <jw@...> wrote: > 3) Include both vocabularies in the response. Including both vocabularies in text/html seems to me the solution that most approaches HATEOAS. It's quite common in HTML to have to represent the same thing two ways for different audiences/clients. e.g. an image, and an "alt" attribute e.g. a Flash object, and a fallback This seems little different. If you also want to provide more concise representations, you can always provide distinct URLs for XML/JSON/whatever representation linked from the text/html hypermedia with an "alternate" relationship. -- Benjamin Hawkes-Lewis
On Dec 27, 2011, at 4:51 PM, mike amundsen wrote: > <snip> > I am exploring the consequences of *not* minting new media types and > wondering how one would then do content/vocabulary negotiation. > </snip> > > It's not clear to me that *not* minting media types *requires* > content/vocabulary negotiation. why have you chosen to explore these > to things in this related way? AFAIU Jorn (thinks he) needs different (sort of distinct) formats for different consumers. When shoving both variants in a single host media type he (thinks he) needs a means to negotiate between them based on the consumer. Jan > > mca > http://amundsen.com/blog/ > http://twitter.com@mamund > http://mamund.com/foaf.rdf#me > > On Tue, Dec 27, 2011 at 10:45, Jan Algermissen > <jan.algermissen@...> wrote: > > > > On Dec 27, 2011, at 2:28 PM, Jorn Wildt wrote: > >> > >> Well, the examples are fictive from the start, so I cannot give that. As I said, I am exploring the consequences of *not* minting new media types and wondering how one would then do content/vocabulary negotiation. > > > > Ah, ok. BTW, you name one reason here, how non-specific media types+link rels and friends take away control that Web arch actually intends to give you). > > > > Besides Accept-Features you could do sth. like this: > > > > GET /orders/1 > > Accept: application/xhtml;profile=us-order > > > > 200 Ok > > Content-Type: application/xhtml > > > > However, I am not a fan of this because you loose visibility (media type parameters other than 'q' are opaque to conneg) and you also do not know which intermediary will silently strip the param (which it may). > > > > I also think you simply end up creating the bloat you are trying to avoid in another place (the profile or feature tag). > > > > My advice remains: mint a media type, make it flexible enough to incorporate whatever variations you might have and have the clients deal with picking the stuff they want. > > > > Jan > > > > P.S. 
You can also do Conneg on User-Agent but I have not yet made up my mind whether that is a good idea. > >> /Jørn
<snip> > AFAIU Jorn (thinks he) needs different (sort of distinct) formats for different consumers. When shoving both variants in a single host media type he (thinks he) needs a means to negotiate between them based on the consumer. </snip> Jorn: you are contemplating a single media-type w/ multiple negotiable "formats"? is that right? one, i don't understand the use of the word "format" here (in my mind XML, JSON, CSV are formats so i need help here) two, if you are using a single media type, what aspect of the representation response are you planning on "negotiating"? three, what kinds of _consumers_ are you contemplating that would use the same media type, but need/want to negotiate for different "formats"? mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Tue, Dec 27, 2011 at 14:37, Jan Algermissen <jan.algermissen@...> wrote: > > On Dec 27, 2011, at 4:51 PM, mike amundsen wrote: > >> <snip> >> I am exploring the consequences of *not* minting new media types and >> wondering how one would then do content/vocabulary negotiation. >> </snip> >> >> It's not clear to me that *not* minting media types *requires* >> content/vocabulary negotiation. why have you chosen to explore these >> to things in this related way? > > AFAIU Jorn (thinks he) needs different (sort of distinct) formats for different consumers. When shoving both variants in a single host media type he (thinks he) needs a means to negotiate between them based on the consumer. > > Jan > > > >> >> mca >> http://amundsen.com/blog/ >> http://twitter.com@mamund >> http://mamund.com/foaf.rdf#me >> >> On Tue, Dec 27, 2011 at 10:45, Jan Algermissen >> <jan.algermissen@...> wrote: >> > >> > On Dec 27, 2011, at 2:28 PM, Jorn Wildt wrote: >> >> >> >> Well, the examples are fictive from the start, so I cannot give that. As I said, I am exploring the consequences of *not* minting new media types and wondering how one would then do content/vocabulary negotiation. 
I've been thinking about this lately myself in relation to the HAL
specification. The media type specifies the serialization <resource>
<link> and how things fit together, but something "else" ( vocabulary here
seems an excellent fit ) defines/declares that said resource includes a
<name>, <age>, <country> property.
Initially the main area I was thinking about this was in HAL's embedded
resources, which may only contain a sub-set/partial representation of the
embedded resource ( the full one being at its URL ).
The two resources are in the same media-type/serialization format, but
differ in vocabulary. Personally this feels like an ideal fit for XML
namespaces, but as HAL also has a JSON variant it doesn't really fit well
there, unless one continues the notion of CURIEs down to properties:
{
"md:age": 37,
...
}
and then that vocabulary is declared inside the resource's payload ( HAL used
to have _curies as a key for that, but that seems to have disappeared from
the examples on the site. ).
This allows a client to be able to continue parsing, and navigating links
by ONLY knowing HAL, but to do anything further, one needs to understand
the vocabulary, or "local lingo" as it were.
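A minimal sketch of the idea described above: a HAL-style resource whose links are navigable by any HAL-aware client, while the properties require knowledge of a made-up "md" vocabulary. The curie declaration follows the style of HAL drafts and may not match the current spec:

```python
import json

# Hypothetical HAL resource; the "md" prefix, its URI template, and
# the properties are invented for illustration.
doc = json.loads("""
{
  "_links": {
    "self": { "href": "/people/37" },
    "curies": [{ "name": "md",
                 "href": "http://example.org/vocab/{rel}",
                 "templated": true }]
  },
  "md:name": "Alice",
  "md:age": 37,
  "md:country": "NZ"
}
""")

# A client that only knows HAL can still parse and follow links...
self_href = doc["_links"]["self"]["href"]

# ...but needs the "md" vocabulary (the "local lingo") to do anything
# meaningful with the domain properties.
vocab_props = {k: v for k, v in doc.items() if k.startswith("md:")}
```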
--
"Great artists are extremely selfish and arrogant things" — Steven Wilson,
Porcupine Tree
On Tue, Dec 27, 2011 at 10:40 PM, Jorn Wildt <jw@...> wrote:
> Now we add M2M support and decide to represent the sales order in HAL or
> maybe even in HTML using RDFa or some microformat. But we must support two
> different ways of embedding domain specific data - either the (fictional)
> Scandinavian Sales Order Vocabulary or the (also fictional) US Sales Order
> Vocabulary. How should we approach that problem?
>
> It's not clear to me that *not* minting media types *requires* > content/vocabulary negotiation. It does not *require*. That's not what I am saying. Let's take it from the beginning: 1) Assume minting new media types is a no-go. Let's encode the domain data in HTML (or HAL) - a well-known media format with built-in hypermedia controls. (You were actually the one that got me started on this in http://tech.groups.yahoo.com/group/rest-discuss/message/18126 with the comment "and i think someone kicks a cat, too [when a new media type is created]" :-) 2) Now we need to embed a sales order or a case file or some other domain specific data in the existing media type. Unfortunately we need to support different vocabularies: the US vocabulary, the European Union vocabulary (both fictitious), some subset of UBL or the Good Relations vocabulary(*) 3) How does the client now inform the server of which vocabulary it understands? The client can ask for text/html - but where should it put the "and-please-serve-it-as-good-relations"-requirement? As always - thanks for taking the time to discuss this. /Jørn (*) Never mind what exactly it is. The point is that we could have many different ways of embedding the same data in the same media type.
> you could do sth. like this: > > GET /orders/1 > Accept: application/xhtml;profile=us-order > > 200 OK > Content-Type: application/xhtml > > However, I am not a fan of this because you lose visibility (media type parameters other than 'q' are opaque to conneg) and you also do not know which intermediary will silently strip the param (which it may). That is, though, what I would consider a solution that fits my requirements exactly. Can you explain how the parameter can get stripped (or share a link)? Thanks, Jørn
> Jorn: you are contemplating a single media-type w/ multiple negotiable > "formats"? is that right? Yup. But read "Format" as "Vocabulary" or "encoding-in-existing-media-type" and see my answer in http://tech.groups.yahoo.com/group/rest-discuss/message/18172 /Jørn
Let me just give another example to show that this is not about XML alone: 1) Assume we have a resource with a sales order. 2) The client wants the sales order as a spreadsheet so it GETs it with Accept header "application/vnd.ms-excel". 3) But we have many different variations of the spreadsheet, so how does the client inform the server of what variation it wants? The media type will always be "application/vnd.ms-excel" no matter what, so we cannot change/switch on that. 4) Exactly the same argument goes for representing the sales order in HTML, HAL, and, well, even as a PDF or an image! I guess it boils down to a question of what kind of variations we need. Let me give some examples: A) Different languages (English, Turkish, whatever). For this we can use the HTTP header Accept-Language. B) Different content (with or without summaries, full address, detailed information etc.). In this case I would say we declare them as different resources and need different URIs, for instance with a "?format=xxx" parameter. C) Different layout but with the same content. For instance a new version of the spreadsheet with the same data but in different cells. Typically we would mint a new version of the media type - but that certainly doesn't make sense for a spreadsheet! /Jørn
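Jorn's A/B/C taxonomy above could be mapped onto HTTP mechanisms roughly like this. A hedged sketch only: the URI scheme, query parameters, and defaults are all hypothetical, not part of any design in the thread:

```python
# Sketch: routing Jorn's three kinds of variation.
# A) language      -> standard conneg via Accept-Language
# B) content       -> a distinct resource, hence a distinct URI
# C) layout        -> hardest case; here a query parameter, since the
#                     media type alone cannot express it

def negotiate(path, query, headers):
    """Return a (resource URI, variant description) pair for a
    sales-order request. All defaults are made up."""
    lang = headers.get("Accept-Language", "en")
    fmt = query.get("format", "full")      # e.g. ?format=summary
    layout = query.get("layout", "v1")     # e.g. ?layout=v2
    return ("/sales-orders" + path,
            {"lang": lang, "content": fmt, "layout": layout})
```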
> > Applying REST to enterprise IT problems does not mean tweaking REST, > > it means changing your view of the problem. > > +5 And thanks a lot for helping me/the community do that :-) /Jørn
On Tue, Dec 27, 2011 at 11:28 PM, Jorn Wildt <jw@...> wrote: > ** > > > Let me just give another example to show that this is not about XML alone: > > 1) Assume we have a resource with a sales order. > > 2) The client wants the sales order as a spreadsheet so it GETs it with > accept header "application/vnd.ms-excel". > > 3) But we have many different variations of the spreadsheet, so how does > the client inform the server of what variation it wants? The media type > will always be "application/vnd.ms-excel" no matter what, so we cannot > change/switch on that. > > Why not? Even if you buy into the "new media types are evil" meme (which I don't), what's wrong with a content type of "application/vnd.ms-excel;vocabulary+hal"? If you don't like that, and are still stuck on a single unchangeable content type, then it seems like a different URI per vocabulary would be the right answer (maybe appending ".hal" as an extension or something like that). > 4) Exactly the same argument goes for representing the sales order in > HTML, HAL, and, well, even as a PDF or an image! > > If you're going to offer the same data as either a spreadsheet or a PDF, that seems like an obvious case where you'll want to use the media type as the basis for conneg. If you're going to go that far, adding parameters to the media types for the vocabulary isn't such a stretch. > I guess it boils down to a question of what kind of variations we need. > Let me give some examples: > > A) Different languages (english, turkish, what-ever). For this we can use > the HTTP header Accept-Language. > > B) Different content (with or without summaries, full address, detailed > information etc.). In this case I would say we declare them as different > resources and need different URIs, for instance with a "?format=xxx" > parameter. > > Different *content* or different *formats* for the same content? In the former case, I'd vote for different URIs. 
In the latter case, I wouldn't be shy about something like "application/vnd.ms-excel+hal"? Of course, this just illustrates that you're inventing a conneg problem that would already be solved if you were willing to mint new media types. > C) Different layout but with the same content. For instance a new version > of the spreadsheet with the same data but in different cells. Typically we > would mint a new version of the media type - but that certainly doesn't > make sense for a spreadsheet! > > Again, why not? One of the principles of REST is that a single URI represents a single resource. But if you want variations of the data being returned, that should really be either separate URIs (i.e. separate resources), or the same URI with conneg. I can't really see how doing conneg on things other than Content-Type and the Accept headers is really going to increase interop at all. In the particular case of a spreadsheet, one could certainly argue that the client should be able to understand the column headers that are actually included in any particular representation. Indeed, this is the exact argument that is made for using HTML or XHTML as a syntax -- a client can theoretically be expected to understand how to parse a <table> element and its sub-elements to look for <th> headers and derive the same information. In my experience, however, understanding the *syntax* at this level is the 20% problem ... the 80% problem is understanding the semantics, which I have found to be pretty much independent of the representation syntax. As an example, in the Java world there are many quite capable libraries that can serialize a given Java object structure to either JSON or XML with very little effort on the part of the server implementor, which allows the same service to serve up "application/xml" or "application/json" representations of the same resource quite readily. 
But neither this, nor encapsulating a sales order in an HTML/XHTML table, communicates the business rule that the line item total on an invoice row should equal the quantity times the unit price. > /Jørn > > Craig McClanahan > >
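Craig's syntax-vs-semantics point can be sketched in a few lines (field names and values are made up): the same object graph serializes to JSON or XML trivially, but neither syntax carries the business rule that the line total equals quantity times unit price.

```python
import json
import xml.etree.ElementTree as ET

# A hypothetical sales-order line item.
order_line = {"item": "widget", "quantity": 3, "unit_price": 2.5, "total": 7.5}

# Same data, two syntaxes, near-zero effort (the 20% problem):
as_json = json.dumps(order_line)

root = ET.Element("line")
for k, v in order_line.items():
    ET.SubElement(root, k).text = str(v)
as_xml = ET.tostring(root, encoding="unicode")

# The invariant below lives in neither serialization; communicating it
# is the 80% problem Craig describes.
assert order_line["total"] == order_line["quantity"] * order_line["unit_price"]
```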
On Wed, Dec 28, 2011 at 12:02 AM, Craig McClanahan <craigmcc@...>wrote: > > > On Tue, Dec 27, 2011 at 11:28 PM, Jorn Wildt <jw@...> wrote: > >> ** >> >> >> Let me just give another example to show that this is not about XML >> alone: >> >> 1) Assume we have a resource with a sales order. >> >> 2) The client wants the sales order as a spreadsheet so it GETs it with >> accept header "application/vnd.ms-excel". >> >> 3) But we have many different variations of the spreadsheet, so how does >> the client inform the server of what variation it wants? The media type >> will always be "application/vnd.ms-excel" no matter what, so we cannot >> change/switch on that. >> >> Why not? Even if you buy into the "new media types are evil" meme (which > I don't), what's wrong with a content type of > "application/vnd.ms-excel;vocabulary+hal"? > Sorry, in this context that should actually be "application/vnd.ms-excel;vocabulary=hal". Craig > > If you don't like that, and are still stuck on a single unchangeable > content type, then it seems like a different URI per vocabulary would be > the right answer (maybe appending ".hal" as an extension or something like > that). > > >> 4) Exactly the same argument goes for representing the sales order in >> HTML, HAL, and, well, even as a PDF or an image! >> >> If you're going to offer the same data as either a spreadsheet or a PDF, > that seems like an obvious case where you'll want to use the media type as > the basis for conneg. If you're going to go that far, adding parameters to > the media types for the vocabulary isn't such a stretch. > >> I guess it boils down to a question of what kind of variations we need. >> Let me give some examples: >> >> A) Different languages (english, turkish, what-ever). For this we can use >> the HTTP header Accept-Language. >> >> B) Different content (with or without summaries, full address, detailed >> information etc.). 
In this case I would say we declare them as different >> resources and need different URIs, for instance with a "?format=xxx" >> parameter. >> >> Different *content* or different *formats* for the same content? In the > former case, I'd vote for different URIs. In the latter case, I wouldn't > be shy about something like "application/vnd.ms-excel+hal"? Of course, > this just illustrates that you're inventing a conneg problem that would > already be solved if you were willing to mint new media types. > >> C) Different layout but with the same content. For instance a new version >> of the spreadsheet with the same data but in different cells. Typically we >> would mint a new version of the media type - but that certainly doesn't >> make sense for a spreadsheet! >> >> Again, why not? One of the principles of REST is that a single URI > represents a single resource. But if you want variations of the data being > returned, that should really be either separate URIs (i.e. separate > resources), or the same URI with connneg. I can't really see how doing > conneg on things other than Content-Type and the Accept headers is really > going to increase interop at all. > > In the particular case of a spreadsheet, one could certainly argue that > the client should be able to understand the column headers that are > actually included in any particular representation. Indeed, this is the > exact argument that is made for using HTML or XHTML as a syntax -- a client > can theoretically be expected to understand how to parse a <table> element > and its sub-elements to look for <th> headers and derive the same > information. > > In my experience, however, understanding the *syntax* at this level is the > 20% problem ... the 80% problem is understanding the semantics, which I > have found to be pretty much independent of the representation syntax. 
As > an example, in the Java world there are many quite capable libraries that > can serialize a given Java object structure to either JSON or XML with very > little effort on the part of the server implementor, which allows the same > service to serve up "application/xml" or "application/json" representations > of the same resource quite readily. But neither this, nor encapsulating a > sales order in an HTML/XHTML table, communicates the business rule that the > line item total on an invoice row should equal the quantity times the unit > price. > >> /Jørn >> >> Craig McClanahan > > >> >> > >
> > Even if you buy into the "new media types are evil" meme (which > I don't), what's wrong with a content type of > "application/vnd.ms-excel;vocabulary+hal"? Is that doable? It certainly looks like a good solution - but is it generally accepted (a standard) that "application/vnd.ms-excel;vocabulary+hal" is the same as "application/vnd.ms-excel" when it comes to parsing the content? Will, for instance, Microsoft Excel open up from my browser if I clicked on a link that returned "application/vnd.ms-excel;vocabulary+hal"? /Jørn
On Wed, Dec 28, 2011 at 12:10 AM, Jorn Wildt <jw@...> wrote: > ** > > > > > Even if you buy into the "new media types are evil" meme (which > > I don't), what's wrong with a content type of > > "application/vnd.ms-excel;vocabulary+hal"? > > Is that doable? It certainly looks like a good solution - but is it > generally accepted (a standard) that > "application/vnd.ms-excel;vocabulary+hal" is the same as > "application/vnd.ms-excel" when it comes to parsing the content? Will for > instance Microsoft Excel open up from my browser if I clicked on a link the > returned "application/vnd.ms-excel;vocabulary+hal"? > > It might be even more interesting to determine if Microsoft Excel would open up for a media type like "application/vnd.ms-excel+hal" just like it would for "application/vnd.ms-excel", which is stylistically similar to a very large number of REST APIs with media types "foo+json" and "foo+xml" to distinguish the actual syntax. That's going to be up to the browser implementation, of course. At a deeper level, though, we are now discussing a constraint that is based on real world browser implementations, rather than REST theory or best practices. In my world, all the REST interactions in a client (browser) side app are done via Ajax calls, rather than depending directly on the browser to be a good citizen. Personally, I would do everything I could to take the browser implementation quirks out of the equation, and have your Javascript code that receives the Ajax responses be the thing that determines what happens next. That way, you can be as "purist" or "realist" as you want in your server side implementation, and let the client deal with the complexity of whether this is IE or Firefox or Chrome. > /Jørn > Craig McClanahan
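Craig's suggestion above (let client code, not the browser, inspect the response and decide what happens next) might look like this in outline. The dispatch table and media types are assumptions; it also illustrates the earlier question about parameters, since a client keying on the base type ignores ";vocabulary=hal" entirely:

```python
# Sketch: client-side dispatch on the response Content-Type,
# independent of browser behavior. Handlers are hypothetical.

def dispatch(content_type, handlers):
    """Pick a handler by base media type, ignoring parameters."""
    base = content_type.split(";", 1)[0].strip().lower()
    return handlers.get(base, handlers["default"])

handlers = {
    "application/vnd.ms-excel": lambda body: ("spreadsheet", body),
    "application/xhtml+xml":    lambda body: ("hypertext", body),
    "default":                  lambda body: ("unknown", body),
}
```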
On Tue, Dec 27, 2011 at 3:51 PM, mike amundsen <mamund@...> wrote: > <snip> > I am exploring the consequences of *not* minting new media types and > wondering how one would then do content/vocabulary negotiation. > </snip> > > It's not clear to me that *not* minting media types *requires* > content/vocabulary negotiation. why have you chosen to explore these > two things in this related way? > +1 This assumption seems to be the crux of the problem.
Reading up on the profile attribute (http://www.ietf.org/rfc/rfc3236.txt). Seems like a good choice for the purpose of defining the vocabulary/encoding used to embed data in XHTML. The spec says though "It is primarily targeted for use on the network by proxies in the HTTP chain that manipulate data formats (such as transcoders).". My application is not a proxy - so maybe this is what is meant by "you also do not know which intermediary will silently strip the param (which it may)"? /Jørn
As a subscriber of this 'meme' Craig mentioned, I'll give a few of the reasons using a generic media type is the better option: - they avoid you having to reinvent the wheel (i.e. linking to and embedding of resources) - they bring existing client/server tooling that can be re-used in your application - as further tooling is developed and improved over time your application will benefit - they avoid the temptation to type resources via the media type identifier - they establish a ubiquitous interface against which more sophisticated clients/servers/intermediary mechanisms can emerge Is someone able to put together a similar list for the "new media types are awesome" meme? Cheers, Mike
On Dec 28, 2011, at 10:44 AM, Mike Kelly wrote: > As a subscriber of this 'meme' Craig mentioned, I'll give a few of the > reasons using a generic media type is the better option: You sure meant to say that this is *your opinion*, eh? > > - they avoid you having to reinvent the wheel (i.e. linking to and > embedding of resources) > > - they bring existing client/server tooling that can be re-used in > your application > > - as further tooling is developed and improved over time your > application will benefit You get all of the above by using existing syntaxes in new, specific media types. > > - they avoid the temptation to type resources via the media type identifier That 'temptation' can better be cured by educating people about good media type design. > > - they establish a ubiquitous interface against which more > sophisticated clients/servers/intermediary mechanisms can emerge We already have that interface: HTTP's uniform interface. Adding yet another level of uniformness just moves the specifics elsewhere. It does not change the fact that you need specifics. Jan > > Is someone able to put together a similar list for the "new media > types are awesome" meme? > > Cheers, > Mike >
On Wed, Dec 28, 2011 at 10:28 AM, Jan Algermissen <jan.algermissen@...> wrote: > We already have that interface: HTTPs uniform interface. Adding yet another level of uniformness just moves the specifics elsewhere. It does not change the fact that you need specifics. That's generally true for M2M at this point. Human beings, however, can adapt to new HTML interfaces because they are self-describing. -- Benjamin Hawkes-Lewis
On Wed, Dec 28, 2011 at 10:28 AM, Jan Algermissen <jan.algermissen@nordsc.com> wrote: > > On Dec 28, 2011, at 10:44 AM, Mike Kelly wrote: > >> >> - they avoid you having to reinvent the wheel (i.e. linking to and >> embedding of resources) >> >> - they bring existing client/server tooling that can be re-used in >> your application >> >> - as further tooling is developed and improved over time your >> application will benefit > > You get all of the above by using existing syntaxes in new, specific media types. > Sort of.. just in a more convoluted, less convenient way. You as the server need to spend the effort composing your new type, and any clients then consuming that media type would need to unravel your composition at the other end - all of this is necessary before actually tackling the application. Whereas a complete generic type designed for the purpose of exposing a REST API will save both server and clients that kind of bother, and afaict that can only be a good thing! >> >> - they avoid the temptation to type resources via the media type identifier > > That 'temptation' can better be cured by educating people about good media type design. > “Design depends largely on constraints.” -- Charles Eames >> >> - they establish a ubiquitous interface against which more >> sophisticated clients/servers/intermediary mechanisms can emerge > > We already have that interface: HTTPs uniform interface. Adding yet another level of uniformness just moves the specifics elsewhere. It does not change the fact that you need specifics. > This isn't the case for HTML, why should m2m be different? By specifics I assume you mean the actual application semantics? In HAL's case, the application semantics are moved (on purpose) into the link relations. This means applications exposed with HAL must be defined/governed in terms of link relations, the key benefits of this are covered in Mark Nottingham's post "Web API Versioning Smackdown"[1]. 
[1] http://www.mnot.net/blog/2011/10/25/web_api_versioning_smackdown
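Since HAL's approach is to carry the application semantics entirely in link relations, a concrete sketch may help. The document shape below follows HAL's published conventions (a `_links` object keyed by relation name); the relation names, URIs, and amounts are made up for illustration:

```python
import json

def link_href(doc, rel):
    """Resolve a link relation in a HAL-style document to its href.
    HAL permits either a single link object or a list of them."""
    link = doc.get("_links", {}).get(rel)
    if link is None:
        return None
    if isinstance(link, list):  # take the first of several links
        link = link[0]
    return link.get("href")

doc = json.loads("""
{
  "_links": {
    "self": {"href": "/orders/123"},
    "next": {"href": "/orders/124"}
  },
  "total": 30.0
}
""")
```

A client written this way is coupled to relation names rather than URI layouts, which is the versioning benefit the "Web API Versioning Smackdown" post argues for: the server is free to restructure its URIs as long as the relations keep their meaning.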
On Wed, Dec 28, 2011 at 8:28 AM, Jorn Wildt <jw@...> wrote: > ** > > 1) Assume we have a resource with a sales order. > [...] > I guess it boils down to a question of what kind of variations we need. > Let me give some examples: > [...] > C) Different layout but with the same content. For instance a new version > of the spreadsheet with the same data but in different cells. Typically we > would mint a new version of the media type - but that certainly doesn't > make sense for a spreadsheet! > If you have different layouts (or even just different styles) of cells in two spreadsheets, I would not give them the same URI. A URI may identify one and only one resource, but a resource may be identified by any number of URIs. That's a fact of life; one of the trade-offs of combining the network address with the resource identifier. IMHO that would go for the XML vocabularies too: The "EU sales order" and the "US sales order" (or whatever the hypothetical case was) don't _have_ to have the same URI, even though they come from the same rows in the same database tables. Solving the problem of linking/navigating to the "right" resource is a lot easier: hypermedia. Clients can't depend on all players always using the one true URI of any resource to identify it; witness tinyurl.com and derivatives. Different layout with the same content... Heard that before: WML. That worked out well? ;-) How do sites really handle it? Usually using the same media types, but using "m.example.com" subdomains, i.e. different URIs for the same "resource". -- -mogsie-
On Dec 28, 2011, at 11:42 AM, Benjamin Hawkes-Lewis wrote: > On Wed, Dec 28, 2011 at 10:28 AM, Jan Algermissen > <jan.algermissen@...> wrote: >> We already have that interface: HTTPs uniform interface. Adding yet another level of uniformness just moves the specifics elsewhere. It does not change the fact that you need specifics. > > That's generally true for M2M at this point. > > Human beings, however, can adapt to new HTML interfaces because they > are self-describing. I tend to avoid that distinction altogether. User agents are components that act on behalf of some user. User agents execute any number of requests automatically (depending on their implementation and configuration) until they reach a steady state. At that point they hand back use case control to the user (the primary actor in use case speak). It does not really matter whether the automatic steps are image or style sheet retrieval, form submissions, following redirects or comparing product prices from several shops and submitting an order for the cheapest offer. Either way, the user agent will end up in a steady state (be that an Ok state or Error state) leaving the user to figure out how to proceed with its intent given the presented state. A browser is as much an M2M thing as a bidding agent. The browser simply hands control to the primary actor more often. Jan > > -- > Benjamin Hawkes-Lewis
On Wed, Dec 28, 2011 at 3:44 PM, Jan Algermissen <jan.algermissen@...> wrote: > > On Dec 28, 2011, at 11:42 AM, Benjamin Hawkes-Lewis wrote: > >> On Wed, Dec 28, 2011 at 10:28 AM, Jan Algermissen >> <jan.algermissen@...> wrote: >>> We already have that interface: HTTPs uniform interface. Adding yet another level of uniformness just moves the specifics elsewhere. It does not change the fact that you need specifics. >> >> That's generally true for M2M at this point. >> >> Human beings, however, can adapt to new HTML interfaces because they >> are self-describing. > > I tend to avoid that distinction altogether. [snip] > The browser simply hand control to the primary actor more often. You can re-express what I said as: "HTML allows easier adaption by naive primary actors than newly minted media types", and my point remains the same. -- Benjamin Hawkes-Lewis
I can think of three key reasons to consider creating new media types: 1 - improve access to protocol-level details available within the message. For example, HTML lacks support for accessing PUT, DELETE, PATCH, and a number of HTTP Headers. 2 - improve the affordances provided by the media type. For example, Atom lacks the affordance for expressing ad-hoc queries (i.e. HTML.FORM@method="get") in representations. 3 - improve the mapping between the problem domain and the message. For example, while implementing a voice response system using HTML is quite possible, the VoiceXML media type offers a more direct mapping between the messages and the problem domain. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Wed, Dec 28, 2011 at 11:06, Benjamin Hawkes-Lewis <bhawkeslewis@...> wrote: > On Wed, Dec 28, 2011 at 3:44 PM, Jan Algermissen > <jan.algermissen@...> wrote: >> >> On Dec 28, 2011, at 11:42 AM, Benjamin Hawkes-Lewis wrote: >> >>> On Wed, Dec 28, 2011 at 10:28 AM, Jan Algermissen >>> <jan.algermissen@...> wrote: >>>> We already have that interface: HTTPs uniform interface. Adding yet another level of uniformness just moves the specifics elsewhere. It does not change the fact that you need specifics. >>> >>> That's generally true for M2M at this point. >>> >>> Human beings, however, can adapt to new HTML interfaces because they >>> are self-describing. >> >> I tend to avoid that distinction altogether. > > [snip] > >> The browser simply hand control to the primary actor more often. > > You can re-express what I said as: "HTML allows easier adaption by > naive primary actors than newly minted media types", and my point > remains the same. > > -- > Benjamin Hawkes-Lewis > > > ------------------------------------ > > Yahoo! Groups Links > > >
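On point 1, there is a common workaround worth noting alongside the option of a new media type: tunneling the unsupported method through POST with a hidden `_method` form field, a convention popularized by Rails and similar frameworks (not something proposed in this thread). A hedged sketch of the server-side half:

```python
def effective_method(request_method, form_fields):
    """Resolve the HTTP method a server should act on when an HTML form
    tunnels PUT/DELETE/PATCH through POST via a hidden '_method' field.
    Only POST may be overridden, and only to a known set of methods."""
    override = form_fields.get("_method", "").upper()
    if request_method == "POST" and override in {"PUT", "DELETE", "PATCH"}:
        return override
    return request_method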
On Wed, Dec 28, 2011 at 4:55 PM, mike amundsen <mamund@...> wrote: > ** > > > I can think of three key reasons to consider creating new media types: > > 1 - improve access to protocol-level details available within the message. > For example, HTML lacks support for accessing PUT, DELETE, PATCH, and > a number of HTTP Headers. > Out of interest - would you prefer a new media type identifier over profiles/conventions and code on demand? Either way, the point is understood. It's interesting that, despite these shortcomings, HTML has persisted that way for as long as it has. I think that speaks to the value of ubiquity in a media type. > 2 - improve the affordances provided by the media type. > For example, Atom lacks the affordance for expressing ad-hoc queries > (i.e. HTML.FORM@method="get") in representations. > > Agree, I'd also add constraints as well as affordances. Fwiw, this is the plan for introducing form-like controls to HAL - it will be a separate media type which extends it, dubbed SHAL. > 3 - improve the mapping between the problem domain and the message. > For example, while implementing a voice response system using HTML is > quite possible, the VoiceXML media type offers a more direct mapping > between the messages and the problem domain. > > Not sure about this one - what is the benefit of this? more concise representations? Cheers, Mike
On Wed, Dec 28, 2011 at 10:44 AM, Mike Kelly <mike@...> wrote: > ** > > > As a subscriber of this 'meme' Craig mentioned, I'll give a few of the > reasons using a generic media type is the better option: > I just want to point out that I subscribe to the "use generic media types" camp (ATM). Is someone able to put together a similar list for the "new media > types are awesome" meme? > Since I'm someone, and since your question was left unanswered, I thought I'd have a stab at providing the equivalent list for the "use (domain) specific media types" camp. I might be way off... > - they avoid the need for completely generic clients, and allow creation of highly specialized clients with a user experience optimized for the task at hand. The benefits of REST can still be harvested within an organization, and perhaps in certain niche spheres. - they avoid the hardships of being boxed inside a generic media type, in that the awesome domain specific media type can be designed without constraints (like how a HTML "serialization" would limit you to @class, @rel, <table> and <ol> etc.) - they allow for a certain degree of type safety, in that a response _is_ something domain specific, like an order line. This doesn't have to be inferred from e.g. the referrer / inbound link. - they increase visibility, in that the media type exposes _what_ it is. e.g. in HTML there's no way for an intermediary to know if a response is an order line item or a purchase order. I just thought your list of "benefits" was a good one, and I disagree with Jan simply saying that "you get the same benefits in the other camp too". -- -mogsie-
On Wed, Dec 28, 2011 at 11:28 AM, Jan Algermissen < jan.algermissen@...> wrote: > > On Dec 28, 2011, at 10:44 AM, Mike Kelly wrote: > > > > - they avoid you having to reinvent the wheel (i.e. linking to and > > embedding of resources) > > > > - they bring existing client/server tooling that can be re-used in > > your application > > > > - as further tooling is developed and improved over time your > > application will benefit > > You get all of the above by using existing syntaxes in new, specific media > types. > > I disagree. If you use HTML with extras (microformats, rdfa) you can use Chrome's inspector to debug. You can use any number of HTML parsers, you can use e.g. CSS selectors (and a CSS selector library) to identify nodes in the tree, you can interact with it on your iPad, you can use any number of server side frameworks to handle HTML generation and form processing. This is all tooling that you get for free, and which you can't re-use with a custom media type. Of course if you use a less common media type like HAL, you're back to square one, but that's true for any new media type. Some of the benefits in HTML apply to using AtomPub... > > - they avoid the temptation to type resources via the media type > identifier > > That 'temptation' can better be cured by educating people about good media > type design. > I'm not sure what the benefit is (Mike) or what good media type design is (Jan) ;-) > - they establish a ubiquitous interface against which more > > sophisticated clients/servers/intermediary mechanisms can emerge > > We already have that interface: HTTPs uniform interface. Adding yet > another level of uniformness just moves the specifics elsewhere. It does > not change the fact that you need specifics. > We all know that (the source code of) a browser doesn't know about banking transactions and book stores, yet the web works. There are specifics, e.g. CSS, JavaScript, PNG, PDF, XSLT, sure. 
All specific to putting text on screens (or paper). I think we need a discussion / definition of what a "specific" media type actually is before we discuss its merits. -- -mogsie-
in the past, i've handled changes to protocol-level items or new affordances by creating new media type definitions. i will note, however, that one of the goals of XHTML 2.0[1] was to create a core media type w/ optional "modules" for various other "features." this never gained traction with browser vendors, tho. > Not sure about this one - what is the benefit of this? more concise representations? the primary benefit is to reduce the abstraction layers between the problem domain and the message: <p class="invoice">...</p> VS <invoice>...</invoice> FWIW, i usually handle problem domain mappings via a "profile" against an existing media type (i.e. XHTML). sometimes, this does not work out (for various reasons) and i author a new media type instead. [1] http://www.w3.org/TR/xhtml2/xhtml2-doctype.html#s_doctype On Wed, Dec 28, 2011 at 12:13, Mike Kelly <mike@...> wrote: > > > On Wed, Dec 28, 2011 at 4:55 PM, mike amundsen <mamund@...> wrote: >> >> >> >> I can think of three key reasons to consider creating new media types: >> >> 1 - improve access to protocol-level details available within the message. >> For example, HTML lacks support for accessing PUT, DELETE, PATCH, and >> a number of HTTP Headers. > > > Out of interest - would you prefer a new media type identifier over > profiles/conventions and code on demand? in the past, i've handled changes to protocol-level items or new affordances by creating new media type definitions. i will note, however, that one of the goals of XHTML 2.0[1] was to create a core media type w/ "plug-ins" for various other "features." this never gained traction. [1] http://www.w3.org/TR/xhtml2/xhtml2-doctype.html#s_doctype > > Either way, the point is understood. It's interesting that, despite these > shortcomings, HTML's persisted that way for as long as it has. I think that > speaks to the value of ubiquity in a media type. > >> >> 2 - improve the affordances provided by the media type. 
>> For example, Atom lacks the affordance for expressing ad-hoc queries >> (i.e. HTML.FORM@method="get") in representations. >> > > Agree, I'd also add constraints as well as affordances. > > Fwiw, this is the plan for introducing form-like controls to HAL - it will > be a separate media type which extends it, dubbed SHAL. > >> >> 3 - improve the mapping between the problem domain and the message. >> For example, while implementing a voice response system using HTML is >> quite possible, the VoiceXML media type offers a more direct mapping >> between the messages and the problem domain. >> > > > Cheers, > Mike
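The `<p class="invoice">` style of profile over XHTML described above can be consumed with any stock HTML parser, which is one of the tooling benefits claimed for generic types. A rough sketch (the class name and markup are hypothetical, not from any real profile) using Python's standard-library parser:

```python
from html.parser import HTMLParser

class ClassTextExtractor(HTMLParser):
    """Collect the text content of elements whose @class contains a
    target token - the 'profile against XHTML' mapping sketched above."""

    def __init__(self, target):
        super().__init__()
        self.target = target
        self.depth = 0       # >0 while inside a matching element
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if self.target in classes:
            self.depth += 1
        elif self.depth:
            self.depth += 1  # nested element inside a match

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.chunks.append(data)

p = ClassTextExtractor("invoice")
p.feed('<div><p class="invoice">INV-42</p><p>other</p></div>')
```

The point is that the app-level vocabulary (here, "invoice") rides on top of the generic media type; nothing about the parser itself had to change.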
Jorn: i must assume (since it's not clear in your messages) that this "single media type, multiple vocabularies" problem means: - you decided to use only one media type - you decided to map your problem domain details to this media type multiple ways (via these "vocabularies") - you actually _need_ to do it this way. first, "vocabularies" is rather vague to me. you name a few (some made up for the case of the discussion), but provide no examples. it is not clear to me how you plan to express the same problem domain information for the same resource using different "vocabularies." in fact, i doubt this scenario is practical in a live system. IOW, i doubt you can successfully create varying representations on the same resource expressing the same problem domain data using different "vocabularies." instead, i suspect that, when using these "vocabularies" to express problem domain specifics, you'll find that the representations vary widely enough that they are not (practically speaking) the same _resource_ at all. i encourage you to prove me wrong on this point by providing some clear examples. second, it sounds (from your responses here) that you imagine multiple clients out there that all understand the same media type, but don't share an understanding of the same vocabularies. you also mention the notion of "negotiating" for vocabularies between client and server. again, i can't conjure up real-life examples of this (feel free to point to some). i think you are positing scenarios that while possible, are unlikely and/or sub-optimal. i think it likely that you will encounter clients that differ in the way domain specifics are represented. in those cases, i suggest the most effective way to do that is to provide unique addresses. this is especially true (IMO) when the representation includes "actionable" hypermedia controls upon which the client is expected to act. 
in my experience, attempts to do "multiple things" while using the same URI are not often successful; esp. when the client is expected to recognize, parse, and act on the response representation. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Wed, Dec 28, 2011 at 09:12, Erik Mogensen <erik@...> wrote: > > > On Wed, Dec 28, 2011 at 8:28 AM, Jorn Wildt <jw@...> wrote: > >> ** >> >> 1) Assume we have a resource with a sales order. >> > [...] > >> I guess it boils down to a question of what kind of variations we need. >> Let me give some examples: >> > [...] > >> C) Different layout but with the same content. For instance a new version >> of the spreadsheet with the same data but in different cells. Typically we >> would mint a new version of the media type - but that certainly doesn't >> make sense for a spreadsheet! >> > > If you have different layouts (or even just different styles) of cells in > two spreadsheets, I would not give them the same URI. > > A URI may identify one and only one resource, but a resource may be > identified by any number of URIs. That's a fact of life; one of the a trade > offs of combining the network address with the resource identifier. > > IMHO that would go for the XML vocabularies too: The "EU sales order" and > the "US sales order" (or whatever the hypothetical case was) don't _have_ > to have the same URI, even though they come from the same rows in the same > database tables. Solving the problem of linking/navigating to the "right" > resource is a lot easier; hypermedia. Clients can't depend on all players > always using the one true URI of any resource to identify it; witness > tinyurl.com and derivatives. > > Different layout with the same content... Heard that before: WML. That > worked out well? ;-) How do sites really handle it? Usually using the > same media types, but using "m.example.com" subdomains, i.e. different > URIs for the same "resource". > -- > -mogsie- > > >
On Wed, Dec 28, 2011 at 11:19 AM, mike amundsen <mamund@...> wrote: > > > Jorn: > > i must assume (since it's not clear in your messages) that this "single > media type, multiple vocabularies" problem means: > - you decided to use only one media type > - you decided to map your problem domain details to this media type > multiple ways (via these "vocabularies") > - you actually _need_ to do it this way. > > first, "vocabularies" is rather vague to me. you name a few (some made up > for the case of the discussion), but provide no examples. it is not clear > to me how you plan to express the same problem domain information for the > same resource using different "vocabularies." in fact, i doubt this > scenario is practical in a live system. IOW, i doubt you can successfully > create varying representations on the same resource expressing the same > problem domain data using different "vocabularies." > I am not sure if these are exactly what Jorn meant but some scenarios i have been considering are: The overlap between the Dublin Core and FOAF vocabularies is pretty significant for the HTML+RDFa scenario. Some clients will prefer FOAF terms, others will prefer DC. With RDFa you could annotate using both vocabularies. OTOH, it seems like it would be nice for the client to be able to say "i need FOAF flavored HTML+RDFa" so that the server could say "Not Acceptable" if it only knows how to speak DC (or doesn't annotate using RDFa at all). If you use the semantics of vanilla HTML, rather than RDFa, there are usually several reasonable ways to express domain concepts in HTML. More than once i have switched from rendering data into definition lists to rendering very similar data into tables. It is challenging to write automated clients that would be able to handle such a change seamlessly. (Humans on the other hand couldn't care less.) It seems like it would be nice for the client to be able to surface its requirements to the server. 
If the whole world always agreed on how to describe problem domains, and never made mistakes, then generic media types would be very compelling. I fear that is not the world i live in, though. As Andrew Tanenbaum said, "the nice thing about standards is that you have so many to choose from". And that assumes there are any standards at all, which is a pretty big assumption for many problem domains. Peter http://barelyenough.org
Peter, your "DC or FOAF" scenario is exactly what I have been talking about. Always nice to know that at least one person in the world understands me :-D
/Jørn
> it is not clear to me how you plan to express the same problem domain > information for the same resource using different "vocabularies." Well, first of all Peter gave a perfect example with a choice between Dublin Core and FOAF embedded in HTML. I can give another example from my own job: we are working with a case management system. A case file can be represented in HTML in various (realistic) ways: 1) Using micro formats 2) Using RDFa Furthermore, we can either choose to name the case file properties using a Nordic standard, which really doesn't fit our data model very well, or we can name the properties using some other (non-)standard that fits the data model a lot better. All in all we have a minimum of four different ways of expressing the content of a case file using HTML. And that's not counting the gazillion different ways to express stuff in micro formats. Now I want the client to be able to say "I understand HTML and please serve it as a micro format using Nordic standard names". /Jørn ----- Original Message ----- From: mike amundsen To: Erik Mogensen Cc: Jorn Wildt ; rest-discuss@yahoogroups.com Sent: Wednesday, December 28, 2011 7:19 PM Subject: Re: [rest-discuss] Re: Different vocabularies, same media type Jorn: i must assume (since it's not clear in your messages) that this "single media type, multiple vocabularies" problem means: - you decided to use only one media type - you decided to map your problem domain details to this media type multiple ways (via these "vocabularies") - you actually _need_ to do it this way. first, "vocabularies" is rather vague to me. you name a few (some made up for the case of the discussion), but provide no examples. it is not clear to me how you plan to express the same problem domain information for the same resource using different "vocabularies." in fact, i doubt this scenario is practical in a live system. 
IOW, i doubt you can successfully create varying representations on the same resource expressing the same problem domain data using different "vocabularies." instead, i suspect that, when using these "vocabularies" to express problem domain specifics, you'll find that the representations vary widely enough that they are not (practically speaking) the same _resource_ at all. i encourage you to prove me wrong on this point by providing some clear examples. second, it sounds (from your responses here) that you imagine multiple clients out there that all understand the same media type, but don't share an understanding of the same vocabularies. you also mention the notion of "negotiating" for vocabularies between client and server. again, i can't conjure up real-life examples of this (feel free to point to some). i think you are positing scenarios that while possible, are unlikely and/or sub-optimal. i think it likely that you will encounter clients that differ in the way domain specifics are represented. in those cases, i suggest the most effective way to do that is to provide unique addresses. this is especially true (IMO) when the representation includes "actionable" hypermedia controls upon which the client is expected to act. in my experience, attempts to do "multiple things" while using the same URI are not often successful; esp. when the client is expected to recognize, parse, and act on the response representation. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Wed, Dec 28, 2011 at 09:12, Erik Mogensen <erik@...> wrote: On Wed, Dec 28, 2011 at 8:28 AM, Jorn Wildt <jw@...> wrote: 1) Assume we have a resource with a sales order. [...] I guess it boils down to a question of what kind of variations we need. Let me give some examples: [...] C) Different layout but with the same content. For instance a new version of the spreadsheet with the same data but in different cells. 
Typically we would mint a new version of the media type - but that certainly doesn't make sense for a spreadsheet! If you have different layouts (or even just different styles) of cells in two spreadsheets, I would not give them the same URI. A URI may identify one and only one resource, but a resource may be identified by any number of URIs. That's a fact of life; one of the trade-offs of combining the network address with the resource identifier. IMHO that would go for the XML vocabularies too: The "EU sales order" and the "US sales order" (or whatever the hypothetical case was) don't _have_ to have the same URI, even though they come from the same rows in the same database tables. Solving the problem of linking/navigating to the "right" resource is a lot easier: hypermedia. Clients can't depend on all players always using the one true URI of any resource to identify it; witness tinyurl.com and derivatives. Different layout with the same content... Heard that before: WML. That worked out well? ;-) How do sites really handle it? Usually using the same media types, but using "m.example.com" subdomains, i.e. different URIs for the same "resource". -- -mogsie-
There has been lots of valuable feedback on this thread. Thanks a lot to everybody who has contributed! Hopefully you now understand the problem :-) The suggested solutions have been: 1) Mint a new media type. Apparently there are two different schools here. Some say, yes, go for a new media type. Some say, no, don't. And there is no agreement upon which is best. See http://tech.groups.yahoo.com/group/rest-discuss/message/18183 for a discussion of this. 2) Use different URIs for the different vocabularies. This seems like an easy and well understood solution. As Mike puts it: "in my experience, attempts to do "multiple things" while using the same URI are not often successful" (meaning, "go for multiple URIs"). 3) Switch on a media type parameter in the Accept header (as for instance "application/xhtml+xml;profile=xxx"). To me this seems like the most elegant solution - if it works. Some people argue that the ";profile=xxx" parameter may get stripped by intermediaries in the network. It has the advantages of minting a new media type, but none of the drawbacks since it is still a well known media type (if this is the right interpretation of the syntax!) 4) Switch on other headers like for instance the user-agent. Not a generally used solution. 5) Use all vocabularies in the same document. To me this seems too clumsy. /Jørn
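For what it's worth, option 3 can be sketched in a few lines of server-side code. This is a hypothetical Python fragment: the vocabulary names and the fallback are invented for illustration, and a real server would also need q-value handling and a strategy for intermediaries that strip the parameter.

```python
# Hypothetical sketch of option 3: a server dispatching on a media type
# "profile" parameter in the Accept header. The vocabulary names and the
# "generic" fallback are invented for illustration.

def parse_accept(accept_header):
    """Split an Accept value into (media_type, params) pairs."""
    entries = []
    for part in accept_header.split(","):
        pieces = [p.strip() for p in part.split(";")]
        media_type, params = pieces[0], {}
        for piece in pieces[1:]:
            key, _, value = piece.partition("=")
            if value:
                params[key.strip()] = value.strip().strip('"')
        entries.append((media_type, params))
    return entries

def choose_vocabulary(accept_header, default="generic"):
    """Return the requested profile, or a default when none survives
    (e.g. when an intermediary stripped the parameter)."""
    for media_type, params in parse_accept(accept_header):
        if media_type == "application/xhtml+xml" and "profile" in params:
            return params["profile"]
    return default
```

The point of the sketch is that the representation format stays `application/xhtml+xml` throughout; only the profile parameter varies.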
Jorn:
i think you did a good job of summing up the opinions presented here.
since i don't run into this often in my current work (using a single
media type and multiple "vocabularies" to represent the same problem
domain details), i am most interested in the solution you end up
creating. hopefully, you will be able to share not just the outcome
("we decided to go with 'x' approach and here's why...") but also the
process ("at first we did y, then discovered it didn't work for z, so
we adjusted to x...").
i think your adventure would be very interesting, and i suspect many
others would find it so, too.
thanks.
mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me
On Wed, Dec 28, 2011 at 16:22, Jorn Wildt <jw@fjeldgruppen.dk> wrote:
> There has been lots of valuable feedback on this thread. Thanks a lot to everybody who has contributed! Hopefully you now understand the problem :-)
>
> The suggested solutions have been:
>
> 1) Mint a new media type.
>
> Apparently there are two different schools here. Some say, yes, go for a new media type. Some say, no don't. And there is no agreement upon which is best. See http://tech.groups.yahoo.com/group/rest-discuss/message/18183 for a discussion of this.
>
> 2) Use different URIs for the different vocabularies.
>
> This seems like an easy and well understood solution. As Mike puts it: "in my experience, attempts to do "multiple things" while using the same URI are not often successful" (meaning, "go for multiple URIs").
>
> 3) Switch on a media type parameter in the accept header (as for instance "application/xhtml+xml;profile=xxx").
>
> To me this seems like the most elegant solution - if it works. Some people argue that the ";profile=xxx" parameter may get stripped by intermediaries in the network. It has the advantages of minting a new media type, but none of the drawbacks since it is still a well known media type (if this is the right interpretation of the syntax!)
>
> 4) Switch on other headers like for instance the user-agent.
>
> Not a generally used solution.
>
> 5) Use all vocabularies in the same document.
>
> To me this seems too clumsy.
>
> /Jørn
>
>
>
>
> ------------------------------------
>
> Yahoo! Groups Links
>
>
>
> > For example, Atom lacks the affordance for expressing ad-hoc queries
> > (i.e. HTML.FORM@method="get") in representations.
> Fwiw, this is the plan for introducing form-like controls to HAL - it will
> be a separate media type which extends it, dubbed SHAL.
Which is a rather interesting point ... let me play the devil's advocate ...
proponents of "do not mint new media types" would argue that this is exactly
why you should stay with the existing media types. Now you spend time and
effort in re-defining what a link is and what a form is - hypermedia
controls that are already well known in (X)HTML.
You could as well have spent the effort on defining a standard generic way
to encode domain data in XHTML such that it would be easy to parse in M2M
scenarios. It could be RDFa or something equivalent to HAL - albeit in
XHTML:
<div class="resource">
<a href="...">...</a>
<a href="...">...</a>
<span class="Name">THansen</span>
<span class="Age">17</span>
<form method="...">...</form>
<div class="resource">
...
</div>
</div>
/Jørn
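As an aside for readers following along: consuming Jørn's class-annotated XHTML in an M2M client needs nothing beyond a stock XML parser, since XHTML is well-formed XML. A minimal Python sketch (the sample document and field names mirror the example above; error handling is omitted):

```python
# A sketch of the consumer side for class-annotated XHTML like Jørn's
# example: pull domain fields out by their class markers using only the
# stdlib XML parser (XHTML is well-formed XML).
import xml.etree.ElementTree as ET

doc = """<div class="resource">
  <a href="/orders/1">self</a>
  <span class="Name">THansen</span>
  <span class="Age">17</span>
</div>"""

def extract_fields(xhtml):
    """Collect text of every span whose class names a domain field."""
    root = ET.fromstring(xhtml)
    fields = {}
    for el in root.iter("span"):
        cls = el.get("class")
        if cls:
            fields[cls] = el.text
    return fields
```

The links and forms would be harvested the same way, by tag name rather than by class.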
FWIW, there are times when i am constrained to only use Atom/AtomPub, but still need to support ad-hoc queries (HTML.FORM@method="get") and/or write operations with inline arguments (HTML.FORM@method="post"). in these cases, i usually use the Atom message as a "wrapper" for an embedded payload based on XHTML. the representations are simply slip-streamed into the atom:content element and the client is "taught" to recognize, parse, and activate hypermedia controls within the atom:content element of an Atom response. it's a hack, but it works well. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Wed, Dec 28, 2011 at 17:52, Jørn Wildt <jw@...> wrote: >> > For example, Atom lacks the affordance for expressing ad-hoc queries >> > (i.e. HTML.FORM@method="get") in representations. >> Fwiw, this is the plan for introducing form-like controls to HAL - it will >> be a separate media type which extends it, dubbed SHAL. > > > Which is a rather interesting point ... let me play the devil's advocate ... > proponents of "do not mint new media types" would argue that this is exactly > why you should stay with the existing media types. Now you spend time and > effort in re-defining what a link is and what a form is - hypermedia > controls that are already well known in (X)HTML. > > You could as well have spent the effort on defining a standard generic way > to encode domain data in XHTML such that it would be easy to parse in M2M > scenarios. It could be RDFa or something equivalent to HAL - albeit in > XHTML: > > <div class="resource"> > <a href="...">...</a> > <a href="...">...</a> > <span class="Name">THansen</span> > <span class="Age">17</span> > <form method="...">...</form> > <div class="resource"> > ... > </div> > </div> > > /Jørn >
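A rough sketch of the "Atom as wrapper" hack Mike describes, with an XHTML payload slip-streamed into atom:content. Python stdlib only; the entry fields and the form fragment are invented for illustration.

```python
# Sketch of wrapping an XHTML payload (including a hypothetical ad-hoc
# query form) inside atom:content with type="xhtml".
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
XHTML = "http://www.w3.org/1999/xhtml"

# Serialize atom elements without a prefix, xhtml with an "xhtml" prefix.
ET.register_namespace("", ATOM)
ET.register_namespace("xhtml", XHTML)

def wrap_in_atom(entry_id, title, xhtml_fragment):
    """Build an atom:entry whose content carries an XHTML payload."""
    entry = ET.Element("{%s}entry" % ATOM)
    ET.SubElement(entry, "{%s}id" % ATOM).text = entry_id
    ET.SubElement(entry, "{%s}title" % ATOM).text = title
    content = ET.SubElement(entry, "{%s}content" % ATOM, {"type": "xhtml"})
    content.append(ET.fromstring(xhtml_fragment))
    return ET.tostring(entry, encoding="unicode")

# A hypothetical ad-hoc query control, slip-streamed into atom:content.
form = ('<div xmlns="%s">'
        '<form method="get" action="/search"><input name="q"/></form>'
        '</div>' % XHTML)
```

The client, as Mike says, has to be "taught" to look inside atom:content for these controls; nothing in plain AtomPub tells it to.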
Allow me to try a debate that is a bit more opinionated than my normal posts: REST will never get any break-through in M2M scenarios due to its complete lack of interoperability! [ducking for cover] Of course I have no proof for this - I don't even know how widespread it is today ... so feel free to ignore me :-) Some will argue that we have the uniform interface - that's interoperability! To some degree they would be right - in REST (HTTP) we have uniform operations GET/POST etc. (verbs) and combine this with resources (nouns) and get a uniform interface that everybody knows how to use. What is missing in the equation is the formats we use to encode data. There is not only a bunch of different media types - there are also different ways to use those media types. That's where interoperability is missing in REST! As Eric puts it in http://tech.groups.yahoo.com/group/rest-discuss/message/18159 : "even though that isn't how _I_ build systems" and Mike does it in http://tech.groups.yahoo.com/group/rest-discuss/message/18202 : "i usually use the Atom message as a "wrapper" for an embedded payload based on XHTML" - exposing the problem with REST (at least in M2M scenarios) that everybody simply does it their own way - even when sticking to the REST architecture constraints. Some systems are going to use XHTML with RDFa while others will use XHTML + Microformats, HAL, JSON, raw XML and so on. In the end developers will have to handcraft parsers for each and every piece of data on the web - spoiling the fun of having a uniform interface. Compare this to the SOAP world where there is no doubt about what format to use: It is XML/SOAP, you get a WSDL, and from this you auto-generate code and are up and running in a few minutes. There is exactly(*) one way to parse the data and there is good tooling for it. 
In the beginning this was not so, but then Microsoft, IBM and others sat down and laid out guidelines for interoperability "WS-Interop" (http://en.wikipedia.org/wiki/WS-I), and things got easier. Now, I don't want to turn this into a REST vs. SOAP debate, that's not the point! What I am trying to say is that maybe REST is missing a set of "RS-Interop" guidelines for M2M scenarios? Right now it's like in the early days of SOAP where every vendor had their own way of encoding data inside the SOAP envelope. I mean - the community cannot even decide on whether minting new media types is a good or a bad thing! And that's a key feature of REST! Come on - we can do better than that! Neither am I touching upon the problem of semantically understanding the data - that is a different problem that occurs after data has been extracted from the response - and no interop profiles are going to solve that. I am only focusing on getting the numbers, the strings, the dates, the classes and so on out of the response. In the time I have spent trying to understand REST and decide on suitable representations and media types for my work, I could have created five different SOAP APIs of roughly the same complexity. You may call me stupid for that, but at least I don't think I am alone. I fear that REST will be a fad - something (maybe not so) soon to be forgotten - if nothing is done to make it easier to consume the data found inside REST resources. Hopefully it won't be so - personally I love the benefits that we get from REST, but something is missing for it to get its real break-through. And, please tell me that I am wrong ... that would make REST more perfect for me :-) Thanks for listening. /Jørn (*) And, yes, SOAP has its interoperability issues too - I know.
On Dec 29, 2011, at 12:52 AM, Jørn Wildt wrote: > I fear that REST will be a fad - something (maybe not so) soon to be > forgotten - if nothing is done to make it easier to consume the data found > inside REST resources. Hopefully it won't be so - personally I love the > benefits that we get from REST, What are these benefits in your understanding?
On Wed, Dec 28, 2011 at 6:51 PM, Erik Mogensen <erik@...> wrote: > I think we need a discussion / definition on what a "specific" media type actually is before we discuss its merits. I fully agree that it would be good to clarify what is meant by a media type. To me it is a representation format type, not a conceptual data type. The HTTP spec, section 3.7 says: "HTTP uses Internet Media Types [17] in the Content-Type (section 14.17) and Accept (section 14.1) header fields in order to provide open and extensible data typing and type negotiation." An Internet Media Type is a file format type or representation format type. It is *not* a conceptual or business data type in the sense of an INTEGER, STRING, HEALTHCARE_RECORD or PURCHASE_ORDER. A format type tells a client how to parse an entity body, not how to interpret it. Maybe "Content-Type" should have been named "Content-Format" instead. With image data the distinction becomes clearer; we can use different representation format types like jpg, gif or png, but the actual conceptual interpretation of the image, ie. "what it is an image of", is out of bounds for HTTP. Of course at some level any client-server system has to agree on the core business concepts that will be represented by some business specific data type. However at the interface and representation format level I want to be able to use standard tools and not to risk interfering with the nice properties of HTTP. I think conceptual or business data type information, ie. how an entity body should be interpreted, should be handled as out-of-band information in the current version of HTTP. Maybe some syntax convention could be found for combining representation format type *and* conceptual/business data type information in the content-type header, eg "application/json&purchase-order" or else a new HTTP header "Concept-Type" could be introduced that indicates the conceptual/business data type of the entity body, eg "Concept-Type: purchase-order". 
This would at least give HTTP clients a *hint* of how to interpret the entity body and "Concept-Type" names could also be used in hyperlinks to give clients hints about the interpretation of a resource before having to dereference the hyperlink to that resource. Having said that, I would be satisfied with one single new media type; one that combines JSON's simple expressiveness with native support for hyperlinks! :-) /Paul -- Paul Cohen www.seibostudios.se mobile: +46 730 787 035 e-mail: paul.cohen@...
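Paul's hypothetical "Concept-Type" idea can be sketched from the client side. Note that "Concept-Type" is not a standard HTTP header and the parser registry is invented; the sketch only illustrates the separation of parsing (format type) from interpretation (concept type).

```python
# Client-side sketch of the proposed split: Content-Type selects the
# parser, while a hypothetical (non-standard) "Concept-Type" header
# selects the interpretation. Header usage here is illustration only.
import json

PARSERS = {"application/json": json.loads}

def interpret(headers, body):
    """Parse by format type, then tag the result with its concept type."""
    parse = PARSERS[headers["Content-Type"]]
    data = parse(body)
    concept = headers.get("Concept-Type", "unknown")
    return concept, data
```

Adding an XML parser to `PARSERS` would let the same concept type arrive in two serializations, which is exactly the separation being argued for.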
Jørn Wildt wrote: > > That's where interoperability is missing in REST! > This has nothing to do with REST. REST is concerned with the communication between components on the network. Having an m2m user understand the media type is another layer, which has nothing to do with communication between components on the network. -Eric
On Wed, Dec 28, 2011 at 9:51 AM, Erik Mogensen <erik@...> wrote: > I disagree. If you use HTML with extras (microformats, rdfa) you can use > Chrome's inspector to debug. You can use any number of HTML parsers, you > can use e.g. CSS selectors (and a CSS selector library) to identify nodes > in the tree, you can interact with it on your iPad, you can use any number > of server side frameworks to handle HTML generation and form processing. > This is all tooling that you get for free, and which you can't re-use with > a custom media type. > > Which does me one fat lot of good on server-to-server integration projects where the client is most decidedly *not* Javascript in a browser. And that's a very substantial portion of the world I live in. Yes, there are HTML processing client libraries in Java or Ruby or whatever. But agreeing on such low level syntax details is not an interesting problem to me, given that a client *still* has to have some deeper understanding of the semantics of the data they are presented with, in order to accomplish anything useful. For lovers of HTML/XHTML in particular though, how would you suggest representing a nested graph of objects? The closest thing to "real" HTML that I can think of would be nested lists or something, but all the <ul> and <li> elements are just noise compared to a raw XML or JSON data structure that expresses such relationships very naturally. (Yes, when I use generic data types, it's "application/json" or "application/xml"). Craig
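Craig's nested-lists point can be made concrete with a small sketch that renders an arbitrary dict/list graph as nested `<ul>`/`<li>` markup. The order record is invented for illustration; the amount of wrapper noise relative to the JSON form is the point.

```python
# Sketch: the same nested order record as "real" HTML nested lists
# versus JSON. The record shape is invented for illustration.
import json

order = {"id": 1, "lines": [{"sku": "A", "qty": 2}, {"sku": "B", "qty": 1}]}

def to_html(value):
    """Render a dict/list graph as nested unordered lists."""
    if isinstance(value, dict):
        items = "".join("<li>%s: %s</li>" % (k, to_html(v))
                        for k, v in value.items())
        return "<ul>%s</ul>" % items
    if isinstance(value, list):
        return "<ul>%s</ul>" % "".join("<li>%s</li>" % to_html(v)
                                       for v in value)
    return str(value)

as_json = json.dumps(order)  # the same graph, expressed natively
```

Whether the `<ul>`/`<li>` scaffolding is "noise" or "free tooling" is exactly the disagreement in this subthread.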
Out of curiosity, has anyone considered proposing the use of the hreflang [1] attribute on links to supply this information? *12.1.5 **Internationalization and links* > Since links may point to documents encoded with different character > encodings <http://www.w3.org/TR/html4/charset.html#doc-char-set>, the A<http://www.w3.org/TR/html4/struct/links.html#edef-A> > and LINK <http://www.w3.org/TR/html4/struct/links.html#edef-LINK> elements > support the charset<http://www.w3.org/TR/html4/struct/links.html#adef-charset> attribute. > This attribute allows authors to advise user agents about the encoding of > data at the other end of the link. > The hreflang <http://www.w3.org/TR/html4/struct/links.html#adef-hreflang> > attribute provides user agents with information about the language of a > resource at the end of a link, just as the lang<http://www.w3.org/TR/html4/struct/dirlang.html#adef-lang> attribute > provides information about the language of an element's content or > attribute values. > Armed with this additional knowledge, user agents should be able to avoid > presenting "garbage" to the user. Instead, they may either locate resources > necessary for the correct presentation of the document or, if they cannot > locate the resources, they should at least warn the user that the document > will be unreadable and explain the cause. Whilst this would traditionally appear to be about written languages such as English, German, or Klingon - in an M2M world, could this not also refer to the domain language of the resource ( sales-request, purchase-order )? One problem I see with this is that the information is lost if your starting point is the document; this could however be used alongside Accept-Language/Content-Language headers, although the spec [2] specifically excludes "computer languages" (which is a shame). 
[1] http://www.w3.org/TR/html4/struct/links.html [2] http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.10 -- "Great artists are extremely selfish and arrogant things" — Steven Wilson, Porcupine Tree On Thu, Dec 29, 2011 at 1:21 PM, Paul Cohen <pacoispaco@...> wrote: > Maybe some syntax convention could be found for combining > representation format type *and* conceptual/business data type > information in the content-type header, eg > "application/json&purchase-order" or else a new HTTP header > "Concept-Type" could be introduced that indicates the > conceptual/business data type of the entity body, eg "Concept-Type: > purchase-order". This would at least give HTTP clients a *hint* of how > to interpret the entity body and "Concept-Type" names could also be > used in hyperlinks to give clients hints about the interpretation of a > resource before having to dereference the hyperlink to that resource. >
> > On Wed, Dec 28, 2011 at 9:51 AM, Erik Mogensen <erik@...> wrote: > >> I disagree. If you use HTML with extras (microformats, rdfa) you can use >> Chrome's inspector to debug. You can use any number of HTML parsers, you >> can use e.g. CSS selectors (and a CSS selector library) to identify nodes >> in the tree, you can interact with it on your iPad, you can use any number >> of server side frameworks to handle HTML generation and form processing. >> This is all tooling that you get for free, and which you can't re-use with >> a custom media type. >> >> > Which does me one fat lot of good on server-to-server integration projects > where the client is most decidedly *not* Javascript in a browser. And > that's a very substantial portion of the world I live in. > > And, by the way, when the client actually is a browser, the various debuggers know how to render a JSON object just fine. At the end of the day, they're just JavaScript objects after they get parsed for you by your Ajax layer. Craig
hello.
On 2011-12-28 16:21 , Paul Cohen wrote:
> Maybe some syntax convention could be found for combining
> representation format type *and* conceptual/business data type
> information in the content-type header, eg
> "application/json&purchase-order" or else a new HTTP header
> "Concept-Type" could be introduced that indicates the
> conceptual/business data type of the entity body, eg "Concept-Type:
> purchase-order". This would at least give HTTP clients a *hint* of how
> to interpret the entity body and "Concept-Type" names could also be
> used in hyperlinks to give clients hints about the interpretation of a
> resource before having to dereference the hyperlink to that resource.
that would make a lot of sense, because only then could you cleanly
communicate that you can, for example, provide the same conceptual
information in two different serializations, such as XML and JSON.
lumping the concept type into the serialization format is a hack which,
though occasionally useful, really mixes two issues which should be
treated separately (and in many cases, the concept type might not even
be necessary, such as for the web's HTML pages where "page concepts"
sometimes are inferred by crawlers/indexers, but are never tagged
explicitly). the "application/...+xml" convention really is nothing but
engineering around that problem, using a convention that is not even
consistent across media types and only creates the illusion that you're
actually separating the data model and the serialization format.
cheers,
dret.
--
erik wilde | mailto:dret@... - tel:+1-510-2061079 |
| UC Berkeley - School of Information (ISchool) |
| http://dret.net/netdret http://twitter.com/dret |
Jorn: If i understand your post correctly, i think you and i have a very different understanding of the word "interoperability." so my question to you is this: what is it in Fielding's work that you claim results in no interoperability? IOW, point to Fielding's model, his list of system priorities, his list of constraints identified in order to induce those priorities, even some aspect of his REST example and/or his commentary on how his REST example varies from the existing HTTP spec and tell me what it is you find results in no interoperability. Note i am asking about how Fielding's REST results in no interop since that is the title of your post. if, however, you want to change the topic of discussion to talk about how other elements might be causing the "no interoperability" you believe exists (i.e. HTTP, MIME, selected media types, specific implementations, particular programming practices, tools, methodologies, etc.), feel free to do so. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Wed, Dec 28, 2011 at 18:52, Jørn Wildt <jw@...> wrote: > Allow me to try a debate that is a bit more opinionated than my normal posts: > REST will never get any break-through in M2M scenarios due to its complete > lack of interoperability! [ducking for cover] > > Of course I have no proof for this - I don't even know how widespread it is > today ... so feel free to ignore me :-) > > Some will argue that we have the uniform interface - that's > interoperability! To some degree they would be right - in REST (HTTP) we > have uniform operations GET/POST etc. (verbs) and combine this with > resources (nouns) and get a uniform interface that everybody knows how to > use. > > What is missing in the equation is the formats we use to encode data. There > is not only a bunch of different media types - there are also different ways > to use those media types. That's where interoperability is missing in REST! 
> > As Eric puts it in > http://tech.groups.yahoo.com/group/rest-discuss/message/18159 : "even though > that isn't how _I_ build > systems" and Mike does it in > http://tech.groups.yahoo.com/group/rest-discuss/message/18202 : "i usually > use the Atom message as a "wrapper" for an embedded payload based on > XHTML" - exposing the problem with REST (at least in M2M scenarios) that > everybody simply does it their own way - even when sticking to the REST > architecture constraints. > > Some systems are going to use XHTML with RDFa while others will use XHTML + > Microformats, HAL, JSON, raw XML and so on. In the end developers will have > to handcraft parsers for each and every piece of data on the web - spoiling > the fun of having a uniform interface. > > Compare this to the SOAP world where there is no doubt about what format to > use: It is XML/SOAP, you get a WSDL, and from this you auto-generate code > and is up and running in a few minutes. There is exactly(*) one way to parse > the data and there is good tooling for it. In the beginning this was not so, > but then Microsoft, IBM and others sat down and laid out guidelines for > interoperability "WS-Interop" (http://en.wikipedia.org/wiki/WS-I), and > things got easier. > > Now, I don't want to turn this into a REST vs. SOAP debate, that's not the > point! What I am trying to say is that maybe REST is missing a set of > "RS-Interop" guidelines for M2M scenarios? > > Right now its like in the early days of SOAP where every vendor had their > own way of encoding data inside the SOAP envelope. I mean - the community > cannot even decide on whether minting new media types is a good or a bad > thing! And that's a key feature of REST! Come on - we can do better than > that! > > Neither am I touching upon the problem of semantically understanding the > data - that is a different problem that occurs after data has been extracted > from the response - and no interop profiles are going to solve that. 
I am > only focusing on getting the numbers, the strings, the dates, the classes > and so on out of the response. > > In the time I have spent trying to understand REST and decide on suitable > representations and media types for my work, I could have created five > different SOAP APIs of roughly the same complexity. You may call me stupid > for that, but at least I don't think I am alone. > > I fear that REST will be a fad - something (maybe not so) soon to be > forgotten - if nothing is done to make it easier to consume the data found > inside REST resources. Hopefully it won't be so - personally I love the > benefits that we get from REST, but something is missing for it to get its > real break-through. > > And, please tell me that I am wrong ... that would make REST more perfect > for me :-) > > Thanks for listening. > > /Jørn > > (*) And, yes, SOAP has its interoperability issues too - I know.
the notion of expressing a problem domain independent of the message format itself (via vocabularies, ontologies, etc) in a way that enables M2M communication and collaboration is a compelling one but, IMO, not easy to accomplish. i've taken a couple stabs at it over the last year and have yet to find my efforts successful. even in cases where i think it may be possible to create a "shared understanding" of the problem domain, i have yet to find a way to successfully map that understanding onto a media type in a way that is consistently consumable by generic clients. the idea of being able to map problem domain descriptions to *multiple* media types is something i've not seen yet anywhere (feel free to point me to examples anyone knows about). it is my suspicion that the work of SemWeb and REST have the potential to meet in a way that comes close to this goal; the ability to enable M2M interactions to consistently share understanding about a problem domain independent of the message format. if anyone is working on such a project, or knows of one, please let me know. mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Wed, Dec 28, 2011 at 19:51, Erik Wilde <dret@...u> wrote: > hello. > > On 2011-12-28 16:21 , Paul Cohen wrote: >> Maybe some syntax convention could be found for combining >> representation format type *and* conceptual/business data type >> information in the content-type header, eg >> "application/json&purchase-order" or else a new HTTP header >> "Concept-Type" could be introduced that indicates the >> conceptual/business data type of the entity body, eg "Concept-Type: >> purchase-order". This would at least give HTTP clients a *hint* of how >> to interpret the entity body and "Concept-Type" names could also be >> used in hyperlinks to give clients hints about the interpretation of a >> resource before having to dereference the hyperlink to that resource. 
> > that would make a lot of sense, because only then could you cleanly > communicate that you can, for example, provide the same conceptual > information in two different serializations, such as XML and JSON. > lumping the concept type into the serialization format is a hack, which > occasionally may be useful, really mixes two issues which should be > treated separately (and in many cases, the concept type might not even > be necessary, such as for the web's HTML pages where "page concepts" > sometimes are inferred by crawlers/indexers, but are never tagged > explicitly). the "application/...+xml" convention really is nothing but > engineering around that problem, using a convention that is not even > consistent across media types and only creates the illusion that you're > actually separating the data model and the serialization format. > > cheers, > > dret. > > -- > erik wilde | mailto:dret@... - tel:+1-510-2061079 | > | UC Berkeley - School of Information (ISchool) | > | http://dret.net/netdret http://twitter.com/dret |
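dret's point about providing the same conceptual record in two serializations can be shown with a minimal sketch (the record shape and element names are invented for illustration):

```python
# Sketch: one conceptual record, two serializations (JSON and XML),
# using only the stdlib. Field names are invented for illustration.
import json
import xml.etree.ElementTree as ET

record = {"name": "THansen", "age": 17}

def as_json(rec):
    """Serialize the record as JSON, keys sorted for stable output."""
    return json.dumps(rec, sort_keys=True)

def as_xml(rec, root_tag="person"):
    """Serialize the same record as a flat XML document."""
    root = ET.Element(root_tag)
    for key, value in sorted(rec.items()):
        ET.SubElement(root, key).text = str(value)
    return ET.tostring(root, encoding="unicode")
```

Only a separate concept indicator (or an out-of-band agreement) tells a client that both outputs carry the same domain information.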
hello mike.
On 2011-12-28 18:45 , mike amundsen wrote:
> even in cases where i think it may be possible to create a "shared
> understanding" of the problem domain, i have yet to find a way to
> successfully map that understanding onto a media type in a way that
> is consistently consumable by generic clients. the idea of being able
> to map problem domain descriptions to *multiple* media types is
> something i've not seen yet anywhere (feel free to point me to
> examples anyone knows about).
skipping over the other interesting questions you were asking: just look
at mobile web sites and how web servers adapt to mapping the same
problem domain ("getting the web site contents to a browser") to
different media types, sometimes just as variations of text/html, but
sometimes also as text/vnd.wap.wml, making sure that WAP browsers can be
used as well. some web pages even map their problem domain to
application/pdf, allowing paginated paper-oriented clients to understand it.
another example from the m2m world is any service that provides JSON and
XML access. sometimes these may be functionally different, because of
different assumptions about the "typical client" for those media types,
but often they are really just different representations, and RDF often
is thrown into the mix as well (some services even throw in CSV for
spreadsheet aficionados, but that often loses some of the expressiveness
of the domain model because of its simplicity).
cheers,
dret.
--
erik wilde | mailto:dret@... - tel:+1-510-2061079 |
| UC Berkeley - School of Information (ISchool) |
| http://dret.net/netdret http://twitter.com/dret |
> > That's where interoperability is missing in REST! > > This has nothing to do with REST. REST is concerned with the > communication between components on the network. Exactly. REST gives you nothing in terms of what to do with the data once it has been communicated from the server to the client. Neither is that the purpose of REST - I am not saying that REST is wrong, I am not saying it should be changed. Roy is rather precise on what REST is - I am not questioning that! What I am looking for is a set of guidelines that would work on the level above REST, giving people best practices for making APIs that 1) are REST, and 2) agree upon very few ways to work with the data. As it is today, people (okay, me at least) turn to REST for all the goodness it is supposed to bring. Only to discover that once they have decided on their resources they are left without guidelines for how to encode them. That's not part of REST, and it should not be part of REST. But the end result is that we get a myriad of different encoding solutions - all trying to map their specific problem domain to a media type, and doing it their own way. So, again: maybe it's time for a REST-Interop guide that speaks about how to represent specific problem domains for M2M scenarios. /Jørn
> so my question to you is this: > what is it in Fielding's work that you claim results in no interoperability? Well, the stuff that is not there :-) Hopefully this post explains my issue a bit better: http://tech.groups.yahoo.com/group/rest-discuss/message/18214 > if, however, you want to change the > topic of discussion to talk about how > other elements might be causing > the "no interoperability" you believe exists (i.e. HTTP, MIME, > selected media types, specific implementations, particular programming > practices, tools, methodologies, etc.), feel free to do so. Okay. Done. /Jørn
Hiya,
"Jørn Wildt" <jw@...> wrote:
> As it is today, people (okay, me at least) turns to REST for all the
> goodness it is supposed to bring. Only to discover that once they have
> decided on their resources they are left without guidelines for how to
> encode them.
Ah, spotted a mistake. :) HATEOAS (the very thing even Roy is missing in
most of these debates) states that you don't choose your resources so much as
to decouple that finite state from your system. Again, try to decouple your
system from static URIs. Anyway, this is not the main concern you have here.
> That's not part of REST, and it should not be part of REST. But
> the end result is that we get a myriad of different encoding solutions -
> all trying to map their specific problem domain to a media type, and
doing
> it their own way.
I often point out that there are hundreds of levels of abstraction and
modelling in any given solution, REST or otherwise, and that it is we, the
architecture community, who need to get better at modelling our data,
and through this we will have fewer of these interoperability issues.
However, data modelling (and no, not in the RDBMS sense) is a mostly lost
art. *sigh*
> So, again: maybe it's time for a REST-Interop guide that speaks about how to
> represent specific problem domains for M2M scenarios.
There's no such thing as a REST interoperability level. What you seek is
some other layer which you can easily, and hopefully without ambiguity,
layer on top of REST, and to that effect I can sympathize with your
frustration.
Here's what I do. Basic XHTML forms for application interaction (forms with
QNames), and data as embedded RDFa in, yup, more XHTML. The thing is, a
deep dark secret of things like SOAP is that it's an envelope with a header
and body ... just like HTML.
So, I simply use XHTML as the carrier (because of all the extra free stuff
that comes with it, tooling, understanding, reuse, etc.) and application
interaction, and use specific ontologies for interoperability. Examples of
ontologies are FOAF, however this is a dreadful yet simple ontology. For
serious stuff I either a) make my own, or b) use some of the many out there
(look for RDF vocabularies). For my own framework I've made an ontology
called NUT, and it will do the stuff my framework does, and nothing more.
If I need something else on top, I can use a mixed model and bake other
ontologies into it.
<html>
  <head>
    <title>Ask for a quote</title>
  </head>
  <body>
    <div class="nut:quote">
      <input name="nut:quantity" value="20"/>
      <input name="nut:ref-id" value="ff495"/>
    </div>
  </body>
</html>
If your systems understand the NUT ontology, they can process, apply, and
respond with an XHTML snippet with more embedded NUT in it. It's all in
XHTML, so if you want to push that to a browser, it's easy peasy.
In other words, REST doesn't care about interoperability, only vocabularies
do. This is also known as the ontology layer.
Hope that helps.
Kind regards,
Alex
(sent from my lovely ASUS Transformer Tablet)
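To make the machine side of the NUT example concrete, here is a minimal sketch of a client extracting the NUT-style fields from such an XHTML snippet. The markup and the extraction logic are illustrative assumptions - there is no published NUT schema or namespace to rely on.

```python
# Hedged sketch: pulling NUT-style form fields out of an XHTML snippet
# like the one above. The markup below is illustrative, not normative.
import xml.etree.ElementTree as ET

doc = """\
<html>
  <head><title>Ask for a quote</title></head>
  <body>
    <div class="nut:quote">
      <input name="nut:quantity" value="20"/>
      <input name="nut:ref-id" value="ff495"/>
    </div>
  </body>
</html>"""

root = ET.fromstring(doc)

# Collect every nut:-prefixed input into a plain dict the application
# logic can act on (e.g. compute a quote and answer with more XHTML).
fields = {
    el.get("name").split(":", 1)[1]: el.get("value")
    for el in root.iter("input")
    if el.get("name", "").startswith("nut:")
}
print(fields)  # {'quantity': '20', 'ref-id': 'ff495'}
```

A browser would render the same document as a form, which is the "free stuff" Alex is pointing at: one representation, two kinds of consumers.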
> I often point out that there are hundreds of levels of abstraction and
> modelling in any given solution, REST or otherwise

Agreed. It is not really an issue with REST - but an issue with modelling on top of REST, and thus in the context of REST.

> There's no such thing as a REST interoperability level

Then let's call it something different. The OnTopOfREST-Interop guide. That's not important to me.

> What you seek is some other layer which you can easily,
> and hopefully without ambiguity, layer on top of REST,
> and to that effect I can sympathize with your frustration.

Thanks :-)

> Here's what I do. Basic XHTML forms ...

Seems like XHTML could be a good starting point for some kind of interop guideline (for all the good reasons you, Eric and others mention). There's quite a few people who swing that way.

/Jørn
On 29 December 2011 07:45, Jørn Wildt <jw@...> wrote:

> > > That's where interoperability is missing in REST!
> >
> > This has nothing to do with REST. REST is concerned with the
> > communication between components on the network.
>
> Exactly. REST gives you nothing in terms of what to do with the data once
> it has been communicated from the server to the client. Neither is that the
> purpose of REST - I am not saying that REST is wrong, I am not saying it
> should be changed. Roy is rather precise on what REST is - I am not
> questioning that! What I am looking for is a set of guidelines that would
> work on the level above REST, giving people best practices for making APIs
> that 1) are REST, and 2) agree upon very few ways to work with the data.
>
> As it is today, people (okay, me at least) turn to REST for all the
> goodness it is supposed to bring. Only to discover that once they have
> decided on their resources they are left without guidelines for how to
> encode them. That's not part of REST, and it should not be part of REST.
> But the end result is that we get a myriad of different encoding solutions -
> all trying to map their specific problem domain to a media type, and doing
> it their own way.
>
> So, again: maybe it's time for a REST-Interop guide that speaks about how
> to represent specific problem domains for M2M scenarios.
>
> /Jørn

Having followed the discussion so far, it is quite amazing no one has mentioned RDF (not RDFa embedded in HTML), especially since it gives a natural way to represent both links and content. Would anyone consider these ideas by IBM as a guideline: http://www.ibm.com/developerworks/rational/library/basic-profile-linked-data/index.html ?

Best regards,
Nina Jeliazkova
> it is my suspicion that the work of SemWeb and REST have the potential
> to meet in a way that comes close to this goal; the ability to enable M2M
> interactions to consistently share understanding about a problem
> domain independent of the format of the shared understanding of the
> message format.

Yup. +1

/Jørn
On Dec 29, 2011, at 1:21 AM, Paul Cohen wrote: > or else a new HTTP header > "Concept-Type" could be introduced that indicates the > conceptual/business data type of the entity body, eg "Concept-Type: > purchase-order". This would at least give HTTP clients a *hint* of how > to interpret the entity body That 'hint' already exists and is called "Content-Type". Jan
Just for the fun of it: IANA does in fact accept some rather specific media types ...

http://www.iana.org/assignments/media-types/application/vnd.chipnuts.karaoke-mmd - "This content type is used for enabling karaoke function in those mobile handsets where Chipnuts multimedia chipset is built in"

http://www.iana.org/assignments/media-types/application/vnd.cinderella - "This mime type shall be used to identify data files for the interactive Geometry software Cinderella."

http://www.iana.org/assignments/media-types/application/vnd.intu.qbo - "This type is intended for use only with QuickBooks 6.0 (Canada)."

Don't know if this is good or bad ...

/Jørn
Hi Mike,

On 12/29/2011 3:45 AM, mike amundsen wrote:
> the notion of expressing a problem domain independent of the message
> format itself (via vocabularies, ontologies, etc) in a way that
> enables M2M communication and collaboration is a compelling one but,
> IMO, not easy to accomplish. i've taken a couple stabs at it over the
> last year and have yet to find my efforts successful.
>
> even in cases where i think it may be possible to create a "shared
> understanding" of the problem domain, i have yet to find a way to
> successfully map that understanding onto a media type in a way that
> is consistently consumable by generic clients. the idea of being able
> to map problem domain descriptions to *multiple* media types is
> something i've not seen yet anywhere (feel free to point me to
> examples anyone knows about).
>
> it is my suspicion that the work of SemWeb and REST have the potential
> to meet in a way that comes close to this goal; the ability to enable M2M
> interactions to consistently share understanding about a problem
> domain independent of the format of the shared understanding of the
> message format.
>
> if anyone is working on such a project, or knows of one, please let me know.

From my point of view, the answer still is the knowledge representation language RDF (incl., optionally, RDFS, OWL, ...), which can be serialized into various representation formats, e.g., Turtle, RDFa, RDF/JSON, RDF/XML (Microdata is a "yet another" approach with not many improvements, from my POV). (Of course,) the Hypermedia/HATEOAS definitions must be part of the representation format media type. (As some of you may remember ... :) ) I tried to discuss and investigate the relation between Linked Data, Semantic Web technologies and REST for a while (see, e.g., [1] and [2]).
Cheers,
Bo

[1] http://answers.semanticweb.com/questions/2763/the-relation-of-linked-datasemantic-web-to-rest
[2] http://smiy.org/2011/02/17/a-generalisation-of-the-linked-data-publishing-guideline/
On Dec 29, 2011, at 8:19 AM, Jan Algermissen wrote: > > On Dec 29, 2011, at 1:21 AM, Paul Cohen wrote: > > > or else a new HTTP header > > "Concept-Type" could be introduced that indicates the > > conceptual/business data type of the entity body, eg "Concept-Type: > > purchase-order". This would at least give HTTP clients a *hint* of how > > to interpret the entity body > > That 'hint' already exists and is called "Content-Type". Consider, for example, application/atom+xml. This media type has two root level document types, feeds and entries. There is no need to distinguish between the two at the Content-Type header level because the user agent simply reacts on what it receives. If you were doing procurement, some application/procurement media type would do. There can be <offer>,<order>,<invoice>, <creditNote>,... in that media type without any need for further hints than Content-Type: application/procurement. Jan > > Jan > >
On Thu, Dec 29, 2011 at 12:11 AM, Jan Algermissen <jan.algermissen@...> wrote:

> On Dec 29, 2011, at 8:19 AM, Jan Algermissen wrote:
>
> > On Dec 29, 2011, at 1:21 AM, Paul Cohen wrote:
> >
> > > or else a new HTTP header "Concept-Type" could be introduced that
> > > indicates the conceptual/business data type of the entity body, eg
> > > "Concept-Type: purchase-order". This would at least give HTTP clients
> > > a *hint* of how to interpret the entity body
> >
> > That 'hint' already exists and is called "Content-Type".
>
> Consider, for example, application/atom+xml. This media type has two root
> level document types, feeds and entries. There is no need to distinguish
> between the two at the Content-Type header level because the user agent
> simply reacts on what it receives.

So, please explain for me again why application/atom+xml is better than application/procurement for this?

Yes, I can understand (at a syntactic level) that I might receive a "link" element with a "rel" value of "checkout" and a corresponding URL. But what the heck does a "rel" value of "checkout" *mean* in terms of what the client app should do next?

Not negotiating content types at the HTTP media type level means I just have to negotiate them at some lower level (after I understand the syntax). That's not an improvement in interop ... that's just sweeping an inconvenient problem under the carpet.

> If you were doing procurement, some application/procurement media type
> would do. There can be <offer>, <order>, <invoice>, <creditNote>, ... in that
> media type without any need for further hints than Content-Type:
> application/procurement.

Exactly the problem ... the client *still* needs to understand what <offer>, <order>, <invoice>, and <creditNote> *mean*. Why is it so cool to bury this fact in two layers of negotiation (say "text/xhtml" plus a particular microformat) rather than in one?

Craig
On Dec 29, 2011, at 9:39 AM, Craig McClanahan wrote: > On Thu, Dec 29, 2011 at 12:11 AM, Jan Algermissen <jan.algermissen@...> wrote: > > On Dec 29, 2011, at 8:19 AM, Jan Algermissen wrote: > > > > > On Dec 29, 2011, at 1:21 AM, Paul Cohen wrote: > > > > > or else a new HTTP header > > > "Concept-Type" could be introduced that indicates the > > > conceptual/business data type of the entity body, eg "Concept-Type: > > > purchase-order". This would at least give HTTP clients a *hint* of how > > > to interpret the entity body > > > > That 'hint' already exists and is called "Content-Type". > > > Consider, for example, application/atom+xml. This media type has two root level document types, feeds and entries. There is no need to distinguish between the two at the Content-Type header level because the user agent simply reacts on what it receives. > > So, please explain for me again why application/atom+xml is better than application/procurement for this? What is 'this'? Atom is only good for the specific kind of communication it is designed for - if your intention is to do procurement, define a specific media type for the communication of that 'domain'. > > Yes, I can understand (at a syntactic level) that I might receive a "link" element with a "rel" value of "checkout" and a corresponding URL. But what the heck does a "rel" value of "checkout" *mean* in terms of what the client app should do next? The user agent needs to be hard-coded (or configured) to understand what to do when it sees a rel="checkout". REST does not provide (and neither intends to do so) a magic means of removing inherent requirements of communication. > > Not negotiating content types at the HTTP media type level means I just have to negotiate them at some lower level (after I understand the syntax). That's not an improvement in interop ... that's just sweeping an inconvenient problem under the carpet. 
Can you tell me who brought up the idea that REST user agents would suddenly understand stuff they are not programmed to understand?

> > If you were doing procurement, some application/procurement media type would do. There can be <offer>, <order>, <invoice>, <creditNote>, ... in that media type without any need for further hints than Content-Type: application/procurement.
>
> Exactly the problem ... the client *still* needs to understand what <offer>, <order>, <invoice>, and <creditNote> *mean*. Why is it so cool to bury this fact in two layers of negotiation (say "text/xhtml" plus a particular microformat) than one?

It is not cool. It is wrong. The media type is the one layer that carries the necessary information. (And sometimes we find link relations that make sense to define orthogonally to media types (IANA link rels) in order to facilitate re-use.)

Jan
> > If you were doing procurement, some application/procurement media type
> > would do. There can be <offer>, <order>, <invoice>, <creditNote>, ... in that
> > media type without any need for further hints than Content-Type:
> > application/procurement.
>
> Exactly the problem ... the client *still* needs to understand what
> <offer>, <order>, <invoice>, and <creditNote> *mean*.
> Why is it so cool to bury this fact in two layers of negotiation
> (say "text/xhtml" plus a particular microformat) than one?

Because (quoting http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven) "REST is software design on the scale of decades" and: "What REST does is concentrate that need for prior knowledge into readily standardizable forms [...] It has value because it is far easier to standardize representation and relation types than it is to standardize objects and object-specific interfaces. In other words, there are fewer things to learn and they can be recombined in unanticipated ways while remaining understandable to the client."

I am quite sure that the description of offers, quotes and so on varies with market/geographical-region/politics/time and other factors, whereas XML and JSON won't. So XML/JSON are good candidates for media types whereas offers/quotes/orders are not. Unfortunately JSON/XML carry no hypermedia semantics, so XHTML and RDF[a] are probably better examples.

But having only these "stupid", non-specific formats leaves us with a need for specifying how to interpret them in a domain-specific manner - what they *mean*. This is something that must be added on top of REST. Thus we get "two layers of negotiation": one at the level of REST (the format of the data - XML/JSON/RDF/XHTML/CSV/etc.) and one at a level higher up (the meaning of the data - the ontology, or semantic interpretation of the data). At the lower REST level we get a long-lived application where we can expect the formats to be readable over the decades.
At the upper level we need to upgrade our agents over time as the problem domain evolves. (If you haven't noticed it then I am moving in the direction of the do-not-mint-new-media-types camp). /Jørn --- In rest-discuss@yahoogroups.com, Craig McClanahan <craigmcc@...> wrote: > > On Thu, Dec 29, 2011 at 12:11 AM, Jan Algermissen < > jan.algermissen@...> wrote: > > > > > On Dec 29, 2011, at 8:19 AM, Jan Algermissen wrote: > > > > > > > > On Dec 29, 2011, at 1:21 AM, Paul Cohen wrote: > > > > > > > or else a new HTTP header > > > > "Concept-Type" could be introduced that indicates the > > > > conceptual/business data type of the entity body, eg "Concept-Type: > > > > purchase-order". This would at least give HTTP clients a *hint* of how > > > > to interpret the entity body > > > > > > That 'hint' already exists and is called "Content-Type". > > > > > > Consider, for example, application/atom+xml. This media type has two root > > level document types, feeds and entries. There is no need to distinguish > > between the two at the Content-Type header level because the user agent > > simply reacts on what it receives. > > > > So, please explain for me again why application/atom+xml is better than > application/procurement for this? > > Yes, I can understand (at a syntactic level) that I might receive a "link" > element with a "rel" value of "checkout" and a corresponding URL. But what > the heck does a "rel" value of "checkout" *mean* in terms of what the > client app should do next? > > Not negotiating content types at the HTTP media type level means I just > have to negotiate them at some lower level (after I understand the syntax). > That's not an improvement in interop ... that's just sweeping an > inconvenient problem under the carpet. > > > > If you were doing procurement, some application/procurement media type > > would do. There can be <offer>,<order>,<invoice>, <creditNote>,... 
in that > > media type without any need for further hints than Content-Type: > > application/procurement. > > > > Exactly the problem ... the client *still* needs to understand what > <offer>, <order>, <invoice>, and <creditNote> *mean*. Why is it so cool to > bury this fact in two layers of negotiation (say "text/xhtml" plus a > particular microformat) than one? > > > > Jan > > > > Craig >
> Thus we get "two layers of negotiation": one at the level of REST (the format of the data - XML/JSON/RDF/XHTML/CSV/etc.) and one at a level higher up (the meaning of the data - the ontology, or semantic interpretation of the data).

Actually, we might even get three layers in some cases:

1: choice of format (XML/JSON/XHTML/etc.)
2: choice of encoding for some formats (XHTML + RDFa or HTML + microformat)
3: choice of semantics (XHTML + RDFa + Ontology)

/Jørn
"Jan Algermissen" <jan.algermissen@...> wrote:
> The media type is the one layer that carries the necessary information.
> (And sometimes we find link relations that make sense to define in an
> orthogonal way (IANA link rels) to media types in order to facilitate re-use).

Indeed. I think there is a lot of confusion about what layer of the cake does what. Here's the basic three things my own applications (client AND server) look for:

1. Content-Type: we know this specific type, so let's process that.
2. Content snooping in known generic formats ([X]HTML, Atom, JSON, etc.): we know this generic type, so let's see if we can find stuff inside it we also know about (RDFa, microformats, xsn, etc.).
3. Error 415.

The reason for wanting 1 the most is performance and optimization (including less code to process the data). Number 2 is second best, but requires more plumbing on the application's side (parsing data for namespaces and classes and such is just more accident-prone and a resource hog compared to 1). The advantage of 2 is the mixed content model if you don't have everything you need in 1 (which also isn't extendable).

Both of these are fine, of course, but do realize that this hasn't got anything to do with REST as such (except content type recognition) - but it might help with the confusion. :)

Regards,
Alex
(written on my lovely Asus Transformer tablet)
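The three-step lookup above can be sketched as a dispatcher. The media type sets and the "snooping" heuristic here are placeholders for illustration, not anyone's actual implementation.

```python
# Hedged sketch of the three-step lookup: specific type first, then
# content snooping in generic types, then 415. Type sets are examples.
SPECIFIC = {"application/atomsvc+xml"}                       # step 1 types
GENERIC = {"application/xhtml+xml", "text/html", "application/json"}

def dispatch(content_type, body):
    # 1. A specific type we fully implement: fast, dedicated code path.
    if content_type in SPECIFIC:
        return ("specific", content_type)
    # 2. A generic type: snoop the payload for embedded data we know
    #    about (RDFa, microformats, ...) -- more plumbing, more fragile.
    if content_type in GENERIC:
        embedded = "rdfa" if ("typeof=" in body or "property=" in body) else None
        return ("generic", embedded)
    # 3. Neither: refuse the representation.
    raise ValueError("415 Unsupported Media Type")

print(dispatch("text/html", '<div property="nut:quantity">20</div>'))
```

The cost difference between steps 1 and 2 is visible even in this toy: step 1 is a set lookup, step 2 has to inspect the body.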
On Thu, Dec 29, 2011 at 8:19 AM, Jan Algermissen <jan.algermissen@...> wrote:
>
> On Dec 29, 2011, at 1:21 AM, Paul Cohen wrote:
>
>> or else a new HTTP header "Concept-Type" could be introduced that
>> indicates the conceptual/business data type of the entity body, eg
>> "Concept-Type: purchase-order". This would at least give HTTP clients
>> a *hint* of how to interpret the entity body
>
> That 'hint' already exists and is called "Content-Type".

Yes. I agree that the Content-Type header *is* used for the mixed purpose of declaring both format types and conceptual types.

However my understanding of Media Type as defined in RFC 2046 (http://tools.ietf.org/html/rfc2046) is that it is to be used to identify media format types or representation format types, and not conceptual types. The Media Type spec defines the use of a two-level categorization hierarchy: "top level type" and "subtype". Both of these category levels are to be used for naming media format types.

The fact that a) the media type spec does not specify how conceptual types are to be identified, and b) people have the need to communicate information on conceptual types, led to the convention of defining media types with names like "application/*+xml", where * is replaced with the name of some conceptual type. The rationale for the "+xml" media type naming convention is found as an appendix to the XML Media Types spec, RFC 3023 (http://www.ietf.org/rfc/rfc3023.txt): "Appendix A. Why Use the '+xml' Suffix for XML-Based MIME Types?". To me it is a hack (as was also noted by Erik Wilde earlier in this thread). I can appreciate the rationale for interoperability and backwards compatibility that led to the "+xml" convention, and I'm not categorically against the convention of using the Content-Type header to specify both format *and* conceptual type information.
But I think this convention is one reason why people are uncertain about how and where to handle format and conceptual type information. For example there exists a media type "application/calendar+xml" that has an associated RFC specification, 6321 (http://www.rfc-editor.org/rfc/rfc6321.txt). To me that is mixing the format type (xml) with the conceptual type (calendar; or actually an iCalendar). Basically it is like saying "here is a resource representing a calendar in the representation format of xml".

What are they to do when the need for a different representation format of a calendar arises? Say for example JSON or PDF? Introduce new media types "application/calendar+json" and "application/calendar+pdf"? Or application/calendar+asn.1.BER? What if the need for an image-based or audio-based representation of the calendar arises?

An obvious risk with the convention is that it will lead to an over-proliferation of media types, since a new media type is needed for each combination of format type and conceptual type. There are already nearly 300 IANA-registered media types with "+xml" names. And we are only starting to work on more complex "business" or "conceptual" media types!

I think the web would be better served by support for a clear distinction between format types and conceptual types. I think it also would help people designing HTTP-based REST APIs. The concept that a resource represents and the actual representation formats that are available for that resource are two different things.

/Paul

--
Paul Cohen
www.seibostudios.se
mobile: +46 730 787 035
e-mail: paul.cohen@...
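The "+suffix" convention described above can be separated mechanically, which is roughly what a generic processor has to do to recover the format part. A small sketch (the helper name is mine):

```python
# Illustrative helper: split a "+suffix" media type name (the RFC 3023-
# style convention) into top-level type, conceptual part, and format part.
def split_media_type(media_type):
    top, _, sub = media_type.partition("/")
    concept, plus, fmt = sub.rpartition("+")
    if not plus:
        # No "+suffix": the subtype names the format itself.
        return top, None, sub
    return top, concept, fmt

print(split_media_type("application/calendar+xml"))  # ('application', 'calendar', 'xml')
print(split_media_type("application/json"))          # ('application', None, 'json')
```

The proliferation concern follows directly: every (concept, format) pair in this scheme needs its own registered name, rather than the two axes being negotiated independently.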
On Dec 29, 2011, at 3:43 PM, Paul Cohen wrote:
>
> However my understanding of Media Type as defined in RFC 2046
> (http://tools.ietf.org/html/rfc2046) is that it is to be used to
> identify media format types or representation format types, and not
> conceptual types.

There simply is no notion of 'conceptual type' in REST. All this does is confuse the matter.

Servers can pick the representation (which media type to use and how to maybe adjust the entity[1]) based on various request headers. If the existing request headers (Accept-*) are not sufficient then define a sufficient media type.

Let me say it again: the problem being solved here does not exist.

Jan

[1] E.g. send this HTML to Firefox and that HTML to IE
On Dec 29, 2011, at 3:43 PM, Paul Cohen wrote: > format types Media types not only specify syntax, they also specify intended processing. E.g. application/atom+xml does not only refer to the schema, it also refers to all the stuff written in that spec. application/atomsrv+xml not only defines the <service> schema, it also defines a whole bunch of expectations for clients and servers. If you say: Accept: application/atomsrv+xml you say a whole lot more than "I am able to parse <service> documents". You are saying: "I can carry on with my realized use case if you answer me in application/atomsrv+xml" Likewise, if you receive: 200 Ok Content-Type: application/atomsrv+xml <service>...</service> there is vastly more stuff you as a client can assume besides "This entity will come in that schema". You will, for example, know that any <collection href="/foo"> points to an AtomPub collection and that there is a bunch of stuff you likely can do with it. Syntax stuff, we could easily do with application/xml + a DTD or schema. But the media type is *much*, *much* more than syntax. Jan
when working w/ HTTP and MIME Media Types, i am essentially "Programming with Media Types." the media type is the "heart" of the implementations i build; the media type is not just a data payload, but is the primary way client and server "share understanding" about the problem domain. that includes not just data in the problem domain, but actions, too.

i write clients that recognize, parse, and (when appropriate) render and/or activate the hypermedia that appears in the response representation.

i write servers that accept requests, parse the address and/or body and then, based on the request details, access private objects, entities, storage, etc. that contain problem domain information, map that domain information onto the requested media type (including hypermedia to express possible options) and, finally, return that message as a response representation to the client.

mca
http://amundsen.com/blog/
http://twitter.com@mamund
http://mamund.com/foaf.rdf#me

On Thu, Dec 29, 2011 at 10:24, Jan Algermissen <jan.algermissen@nordsc.com> wrote:
>
> On Dec 29, 2011, at 3:43 PM, Paul Cohen wrote:
>
>> format types
>
> Media types not only specify syntax, they also specify intended processing. E.g. application/atom+xml does not only refer to the schema, it also refers to all the stuff written in that spec. application/atomsrv+xml not only defines the <service> schema, it also defines a whole bunch of expectations for clients and servers.
>
> If you say:
>
> Accept: application/atomsrv+xml
>
> you say a whole lot more than "I am able to parse <service> documents". You are saying: "I can carry on with my realized use case if you answer me in application/atomsrv+xml"
>
> Likewise, if you receive:
>
> 200 Ok
> Content-Type: application/atomsrv+xml
>
> <service>...</service>
>
> there is vastly more stuff you as a client can assume besides "This entity will come in that schema".
You will, for example, know that any <collection href="/foo"> points to an AtomPub collection and that there is a bunch of stuff you likely can do with it.
>
> Syntax stuff, we could easily do with application/xml + a DTD or schema. But the media type is *much*, *much* more than syntax.
>
> Jan
On Dec 29, 2011, at 4:51 PM, mike amundsen wrote: > when working w/ HTTP and MIME Media Types, i am essentially > "Programming with Media Types." the media type is the "heart" of the > implementations i build; the media type is not just a data payload, > but is the primary way client and server "share understanding" about > the problem domain. that includes not just data in the problem domain, > but actions, too. > i write clients that recognize, parse, and (when appropriate) render > and/or activate the hypermedia that appears in the response > representation. Yes, this is a nice description. > > i write servers that accept requests, parse the address and/or body > and then, based on the request details, access private objects, > entities, storage, etc. that contain problem domain information, map > that domain information onto the requested media type (including > hypermedia to express possible options) and, finally returns that > message as a response representation to the client. Yep. Jan > > mca > http://amundsen.com/blog/ > http://twitter.com@mamund > http://mamund.com/foaf.rdf#me > > > > > On Thu, Dec 29, 2011 at 10:24, Jan Algermissen > <jan.algermissen@nordsc.com> wrote: >> >> On Dec 29, 2011, at 3:43 PM, Paul Cohen wrote: >> >>> format types >> >> Media types not only specify syntax, they also specify intended processing. E.g. application/atom+xml does not only refer to the schema, it also refers to all the stuff written in that spec. application/atomsrv+xml not only defines the <service> schema, it also defines a whole bunch of expectations for clients and servers. >> >> If you say: >> >> Accept: application/atomsrv+xml >> >> you say a whole lot more than "I am able to parse <service> documents". 
You are saying: "I can carry on with my realized use case if you answer me in application/atomsrv+xml"
>>
>> Likewise, if you receive:
>>
>> 200 Ok
>> Content-Type: application/atomsrv+xml
>>
>> <service>...</service>
>>
>> there is vastly more stuff you as a client can assume besides "This entity will come in that schema". You will, for example, know that any <collection href="/foo"> points to an AtomPub collection and that there is a bunch of stuff you likely can do with it.
>>
>> Syntax stuff, we could easily do with application/xml + a DTD or schema. But the media type is *much*, *much* more than syntax.
>>
>> Jan
On Thu, Dec 29, 2011 at 3:24 PM, Jan Algermissen <jan.algermissen@... > wrote: > ** > > > > On Dec 29, 2011, at 3:43 PM, Paul Cohen wrote: > > > format types > > Media types not only specify syntax, they also specify intended > processing. E.g. application/atom+xml does not only refer to the schema, it > also refers to all the stuff written in that spec. application/atomsrv+xml > not only defines the <service> schema, it also defines a whole bunch of > expectations for clients and servers. > > If you say: > > Accept: application/atomsrv+xml > > you say a whole lot more than "I am able to parse <service> documents". > You are saying: "I can carry on with my realized use case if you answer me > in application/atomsrv+xml" > What does this buy you vs Accepting something generic like hal or even just plain xml? > > Likewise, if you receive: > > 200 Ok > Content-Type: application/atomsrv+xml > > <service>...</service> > > there is vastly more stuff you as a client can assume besides "This entity > will come in that schema". You will, for example, know that any <collection > href="/foo"> points to an AtomPub collection and that there is a bunch of > stuff you likely can do with it. > > Those additional assumptions can instead be made by understanding the link which led the client there, which should be the case for any resource except entry points. Exposing an app this way coaxes clients into traversing your application properly (out from entry points by following links), and it implies to consumers of your app that the representation's purpose and structure are impermanent - both of these are important implications if you want to foster a non-brittle client base which will better survive evolutionary changes in your application. Cheers, Mike
On Dec 29, 2011, at 6:13 PM, Mike Kelly wrote: > > > On Thu, Dec 29, 2011 at 3:24 PM, Jan Algermissen <jan.algermissen@...> wrote: > > > > On Dec 29, 2011, at 3:43 PM, Paul Cohen wrote: > > > format types > > Media types not only specify syntax, they also specify intended processing. E.g. application/atom+xml does not only refer to the schema, it also refers to all the stuff written in that spec. application/atomsrv+xml not only defines the <service> schema, it also defines a whole bunch of expectations for clients and servers. > > If you say: > > Accept: application/atomsrv+xml > > you say a whole lot more than "I am able to parse <service> documents". You are saying: "I can carry on with my realized use case if you answer me in application/atomsrv+xml" > > What does this buy you vs Accepting something generic like hal or even just plain xml? If I say: Content-Type: application/xml What did I tell you? Jan >
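Jan's point about what a specific media type buys can be sketched as a client-side dispatch table keyed on Content-Type. This is a minimal, hypothetical sketch: the media type names come from the thread, but the document shape and handler logic are invented for illustration and are not from any real AtomPub library.

```python
# Sketch: a specific media type carries a spec-defined processing model;
# application/xml promises only "this will parse as XML".
import xml.etree.ElementTree as ET

def handle_atomsrv(body):
    # Per the (hypothetical) atomsrv spec, the client may assume every
    # <collection href="..."> points to an AtomPub collection.
    root = ET.fromstring(body)
    return [c.get("href") for c in root.iter("collection")]

def handle_generic_xml(body):
    # Generic XML: the client can parse it, but the media type gives it
    # no processing model at all.
    return ET.fromstring(body).tag

HANDLERS = {
    "application/atomsrv+xml": handle_atomsrv,
    "application/xml": handle_generic_xml,
}

def dispatch(content_type, body):
    return HANDLERS[content_type](body)

doc = '<service><collection href="/foo"/><collection href="/bar"/></service>'
print(dispatch("application/atomsrv+xml", doc))  # ['/foo', '/bar']
print(dispatch("application/xml", doc))          # 'service'
```

The same bytes yield very different client capabilities depending on which type label they arrive under, which is exactly the asymmetry Jan is describing.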
On Thu, Dec 29, 2011 at 5:14 PM, Jan Algermissen <jan.algermissen@...> wrote: > If I say: > > Content-Type: application/xml > > What did I tell you? > > Jan This comes back to my original question: I already know I'm a service resource - what did you *not* tell me that I needed to know? Cheers, Mike
How does the client know how to handle the response body? Are you saying the link documentation tells them here is what you should expect? Sent from my Windows Phone ------------------------------ From: Mike Kelly Sent: 12/29/2011 9:13 AM To: Jan Algermissen Cc: Paul Cohen; Erik Mogensen; REST-Discuss Discussion Group; craigmcc@... Subject: Re: [rest-discuss] The "new media types are evil" meme Those additional assumptions can instead be made by understanding the link which led the client there, which should be the case for any resource except entry points. Exposing an app this way coaxes clients into traversing your application properly (out from entry points by following links), and it implies to consumers of your app that the representation's purpose and structure are impermanent - both of these are important implications if you want to foster a non-brittle client base which will better survive evolutionary changes in your application. Cheers, Mike
Yes, the link relation of the referring link should be documented so the client can interpret the targeted resource. Is that controversial? :) On Thu, Dec 29, 2011 at 7:19 PM, Glenn Block <glenn.block@...> wrote: > How does the client know how to handle the response body? Are you saying > the link documentation tells them here is what you should expect?
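Mike's approach can be sketched as link-driven traversal: the client selects its next request by link relation, and what it may assume about the targeted resource comes from the out-of-band documentation of that rel, not from a custom media type. Everything below (the document shape, the rel URLs, the doc strings) is invented for illustration.

```python
# Sketch: interpret a resource via the documented rel that led to it.
REL_DOCS = {
    # rel name -> what the (out-of-band) documentation says to expect
    "http://example.org/rels/orders": "a pageable list of orders",
    "http://example.org/rels/invoice": "a single invoice document",
}

def follow(representation, rel):
    """Return the href for `rel`, plus the documented expectation."""
    for link in representation["links"]:
        if link["rel"] == rel:
            return link["href"], REL_DOCS.get(link["rel"], "undocumented")
    raise LookupError(f"no link with rel {rel!r}")

entry_point = {
    "links": [
        {"rel": "http://example.org/rels/orders", "href": "/orders"},
    ]
}

print(follow(entry_point, "http://example.org/rels/orders"))
# ('/orders', 'a pageable list of orders')
```

Note the client never hardcodes `/orders`; only the rel name and the entry point URL are fixed, which is what lets the server move URLs without breaking consumers.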
Where are you documenting the links? Are you saying no new media type, but link relations are registered? Sent from my Windows Phone ------------------------------ From: Mike Kelly Sent: 12/29/2011 11:37 AM To: Glenn Block Cc: Jan Algermissen; Paul Cohen; Erik Mogensen; REST-Discuss Discussion Group; craigmcc@... Subject: Re: [rest-discuss] The "new media types are evil" meme Yes, the link relation of the referring link should be documented so the client can interpret the targeted resource. Is that controversial? :)
Are 1 and 2 really separate, or are they just handled via the Accept header? Wouldn't microformats really belong to 3? I discover microformats through parsing the data, and there could be multiple present. As to 3, Subbu and I had a bunch of chats on this a while back. Some sort of marker/element within the representation says 'I am more than just xml, I am a procurement request' - with xhtml, css classes/attributes can do that. There's no standard way to do this via other formats. The complication this adds is that it forces you to read ahead to know how to process. Today you lose visibility in that these additional semantics are not part of the uniform interface. Sent from my Windows Phone ------------------------------ From: Jorn Wildt Sent: 12/29/2011 1:36 AM To: rest-discuss@yahoogroups.com Subject: [rest-discuss] Re: The "new media types are evil" meme > Thus we get "two layers of negotiation": one at the level of REST (the format of the data - XML/JSON/RDF/XHTML/CSV/etc.) and one at a level higher up (the meaning of the data - the ontology, or semantical interpretation of the data). Actually, we might even get three layers in some cases: 1: choice of format (XML/JSON/XHTML/etc) 2: choice of encoding for some formats (XHTML + RDFa or HTML + microformat) 3: choice of semantics (XHTML + RDFa + Ontology) /Jørn
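Glenn's read-ahead problem can be made concrete: with microformats, Content-Type alone ('application/xhtml+xml' or 'text/html') says nothing about what is inside, so the client must parse the markup and sniff for marker class names. A minimal sketch - the class names follow the real hCard/hCalendar conventions, but the document is invented:

```python
# Sketch: microformat discovery requires reading the body (no header
# tells you), which is Glenn's loss-of-visibility point.
from html.parser import HTMLParser

MICROFORMAT_MARKERS = {"vcard": "hCard", "vevent": "hCalendar"}

class MicroformatSniffer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.found = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "class" and value:
                for cls in value.split():
                    if cls in MICROFORMAT_MARKERS:
                        self.found.add(MICROFORMAT_MARKERS[cls])

html = '<div class="vcard"><span class="fn">Jorn Wildt</span></div>'
sniffer = MicroformatSniffer()
sniffer.feed(html)
print(sniffer.found)  # {'hCard'}
```

Nothing in the HTTP envelope announced the hCard; an intermediary that only sees headers cannot know these semantics are present.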
Having read the other thread, I see now what you mean about the negotiation aspects for microformats, like 'give me xhtml with this microformat'. That is an interesting problem. My gut reaction is that it would be ideal if this could be expressed via the Accept header, as it really is part of the client's preferences. Sent from my Windows Phone
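Glenn's suggestion - carrying the "which microformat/ontology" preference in the Accept header - would amount to a media type parameter. A sketch of what that could look like, using stdlib MIME parsing; note that `profile` is not a registered parameter of `application/xhtml+xml`, and the URI is invented:

```python
# Sketch: a hypothetical `profile` parameter expressing layer 2/3
# preferences inside ordinary content negotiation.
from email.message import Message  # stdlib MIME header/parameter parsing

def parse_media_type(value):
    msg = Message()
    msg["Content-Type"] = value
    # get_params() returns the type itself first, then its parameters
    return msg.get_content_type(), dict(msg.get_params()[1:])

accept = 'application/xhtml+xml; profile="http://example.org/uf/hcard"'
mtype, params = parse_media_type(accept)
print(mtype)              # application/xhtml+xml
print(params["profile"])  # http://example.org/uf/hcard
```

Because parameters ride inside the existing Accept/Content-Type machinery, this keeps the extra semantics visible to intermediaries - the property Glenn notes is lost when the microformat is only discoverable by parsing the body.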
I use URLs for the relations which correspond to a web page containing the documentation. Yes, I'm saying no new media type - just define your application in terms of link relations and expose their documentation somewhere that is visible to your consumers. You could register them with IANA, create an internal registry.. personally I just create URLs as necessary on a server that I know I can control over time. On Thu, Dec 29, 2011 at 8:16 PM, Glenn Block <glenn.block@...> wrote: > Where are you documenting the links? Are you saying no new media type, > but link relations are registered?
What about the root url / entry point? How would you document that? It has no rel... I can see the value of this approach in that you are gaining visibility of the specific semantics via relations and still keeping to a predefined set of media types rather than minting new types. Sent from my Windows Phone ------------------------------ From: Mike Kelly Sent: 12/29/2011 12:45 PM To: Glenn Block Cc: Jan Algermissen; Paul Cohen; Erik Mogensen; REST-Discuss Discussion Group; craigmcc@... Subject: Re: [rest-discuss] The "new media types are evil" meme I use URLs for the relations which correspond to a web page containing the documentation. Yes, I'm saying no new media type - just define your application in terms of link relations and expose their documentation somewhere that is visible to your consumers. You could register them with IANA, create an internal registry.. personally I just create URLs as necessary on a server that I know I can control over time.
Well that's what makes an entry point an entry point, right? On Thu, Dec 29, 2011 at 8:53 PM, Glenn Block <glenn.block@...> wrote: > What about the root url / entry point? How would you document that, it has > no rel...... > > I can see the value of this approach in that you are gaining visibility of > the specific semantics via relations and still keeping to a predefined set > of media types rather than minting new types.
Well yeah, but the advantage of using the link rels is I can discover the url to find the docs. The root url docs are not discoverable in the same way / visible. Meaning it is not something present within the response headers / body - that is, unless you somehow annotate the response or have a standard mechanism to look up documentation for an associated root url. For example (and no, I am not saying this is a good idea), let's say you had a standard link relation similar to SELF which pointed to documentation about ME. Of course you could get that with a custom media type registered with IANA, but that is what this approach was avoiding. On Thu, Dec 29, 2011 at 1:02 PM, Mike Kelly <mike@...> wrote: > Well that's what makes an entry point an entry point, right?
"Maybe" even put something in a link header, which would be discoverable outside of parsing the content. On Thu, Dec 29, 2011 at 2:07 PM, Glenn Block <glenn.block@...> wrote: > Well yeah, but the advantage of using the link rels is I can discover the > url to find the docs. The root url docs are not discoverable in the same > way / visible. > > Meaning it is not something present within the response headers / body, > that is unless you somehow annotate the response or have a standard > mechanism to look up documentation for an associated root url. For example > (and no I am not saying this is a good idea), let's you had a standard link > relation similar to SELF which pointed to documentation about ME. Of course > you could get that with a custom media type registered in IANA, but that is > what this approach was avoiding. > > On Thu, Dec 29, 2011 at 1:02 PM, Mike Kelly <mike@mykanjo.co.uk> wrote: > >> Well that's what makes an entry point an entry point, right? >> >> On Thu, Dec 29, 2011 at 8:53 PM, Glenn Block <glenn.block@gmail.com>wrote: >> >>> ** >>> >>> >>> What about the root url / entry point? How would you document that, it >>> has no rel...... >>> >>> I can see the value of this approach in that you are gaining visibility >>> of the specific semantics via relations and still keeping to a predefined >>> set of media types rather than minting new types. >>> >>> >>> Sent from my Windows Phone >>> ------------------------------ >>> From: Mike Kelly >>> Sent: 12/29/2011 12:45 PM >>> >>> To: Glenn Block >>> Cc: Jan Algermissen; Paul Cohen; Erik Mogensen; REST-Discuss Discussion >>> Group; craigmcc@... >>> Subject: Re: [rest-discuss] The "new media types are evil" meme >>> >>> I use URLs for the relations which correspond to a web page containing >>> the documentation. >>> >>> Yes, I'm saying no new media type - just define your application in >>> terms of link relations and expose their documentation somewhere that is >>> visible to your consumers. 
You could register them with IANA, create an >>> internal registry.. personally I just create URLs as necessary on a server >>> that I know I can control over time. >>> >>> >>> On Thu, Dec 29, 2011 at 8:16 PM, Glenn Block <glenn.block@...>wrote: >>> >>>> ** >>>> >>>> >>>> Where are you documenting the links? Are you saying no new media type, >>>> but link relations are registered? >>>> >>>> Sent from my Windows Phone >>>> ------------------------------ >>>> From: Mike Kelly >>>> Sent: 12/29/2011 11:37 AM >>>> To: Glenn Block >>>> Cc: Jan Algermissen; Paul Cohen; Erik Mogensen; REST-Discuss >>>> Discussion Group; craigmcc@... >>>> >>>> Subject: Re: [rest-discuss] The "new media types are evil" meme >>>> >>>> Yes, the link relation of the referring link should be documented so >>>> the client can interpret the targeted resource. >>>> >>>> Is that controversial? :) >>>> >>>> On Thu, Dec 29, 2011 at 7:19 PM, Glenn Block <glenn.block@...>wrote: >>>> >>>>> ** >>>>> >>>>> >>>>> How does the client know how to handle the response body? Are you >>>>> saying the link documentation tells them here is what you should expect? >>>>> >>>>> Sent from my Windows Phone >>>>> ------------------------------ >>>>> From: Mike Kelly >>>>> Sent: 12/29/2011 9:13 AM >>>>> To: Jan Algermissen >>>>> Cc: Paul Cohen; Erik Mogensen; REST-Discuss Discussion Group; >>>>> craigmcc@... >>>>> Subject: Re: [rest-discuss] The "new media types are evil" meme >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Thu, Dec 29, 2011 at 3:24 PM, Jan Algermissen < >>>>> jan.algermissen@...> wrote: >>>>> >>>>>> ** >>>>>> >>>>>> >>>>>> >>>>>> On Dec 29, 2011, at 3:43 PM, Paul Cohen wrote: >>>>>> >>>>>> > format types >>>>>> >>>>>> Media types not only specify syntax, they also specify intended >>>>>> processing. E.g. application/atom+xml does not only refer to the schema, it >>>>>> also refers to all the stuff written in that spec. 
application/atomsrv+xml >>>>>> not only defines the <service> schema, it also defines a whole bunch of >>>>>> expectations for clients and servers. >>>>>> >>>>>> If you say: >>>>>> >>>>>> Accept: application/atomsrv+xml >>>>>> >>>>>> you say a whole lot more than "I am able to parse <service> >>>>>> documents". You are saying: "I can carry on with my realized use case if >>>>>> you answer me in application/atomsrv+xml" >>>>>> >>>>> >>>>> What does this buy you vs Accepting something generic like hal or even >>>>> just plain xml? >>>>> >>>>> >>>>>> >>>>>> Likewise, if you receive: >>>>>> >>>>>> 200 Ok >>>>>> Content-Type: application/atomsrv+xml >>>>>> >>>>>> <service>...</service> >>>>>> >>>>>> there is vastly more stuff you as a client can assume besides "This >>>>>> entity will come in that schema". You will, for example, know that any >>>>>> <collection href="/foo"> points to an AtomPub collection and that there is >>>>>> a bunch of stuff you likely can do with it. >>>>>> >>>>>> >>>>> Those additional assumptions can instead be made by understanding the >>>>> link which led the client there, which should be the case for any resource >>>>> except entry points. Exposing an app this way coaxes clients into >>>>> traversing your application properly (out from entry points by following >>>>> links), and it implies to consumers of your app that the representation's >>>>> purpose and structure are impermanent - both of these are important >>>>> implications if you want to foster a non-brittle client base which will >>>>> better survive evolutionary changes in your application. >>>>> >>>>> Cheers, >>>>> Mike >>>>> >>>>> >>>> >>> >>> >> >> >
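The Link-header idea floated at the top of this thread (advertising a docs URL outside the body) could look like the sketch below. The header value, URLs, and rel names are made up for illustration, and the parser handles only the simple comma-separated `<target>; rel="..."` form, not the full RFC 5988 grammar:

```python
import re

def parse_link_header(value):
    """Parse a Link header into a {rel: target} map.

    Minimal parser: handles comma-separated <target>; rel="..." entries
    only (no quoted commas, no extra parameters).
    """
    links = {}
    for match in re.finditer(r'<([^>]*)>\s*;\s*rel="([^"]*)"', value):
        target, rel = match.group(1), match.group(2)
        links[rel] = target
    return links

# A hypothetical header a server might send so clients can discover
# documentation without parsing the response body at all.
header = ('<https://api.example.org/orders>; '
          'rel="https://api.example.org/rels/orders", '
          '<https://api.example.org/docs>; rel="help"')

links = parse_link_header(header)
docs_url = links["help"]  # documentation URL discovered from the header alone
```
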
On Dec 29, 2011, at 8:37 PM, Mike Kelly wrote: > Yes, the link relation of the referring link should be documented so the client can interpret the targeted resource. > > Is that controversial? :) No - REST's constraints explicitly forbid this form of coupling. The client interprets the response based on the current response. NOT based on expectations about the resource. Expectations about resources (made due to discovered link semantics) drive the construction of the request. They do not apply to the response (aka 'self describing messages constraint'). Jan > > On Thu, Dec 29, 2011 at 7:19 PM, Glenn Block <glenn.block@...> wrote: > > > How does the client know how to handle the response body? Are you saying the link documentation tells them here is what you should expect? > > Sent from my Windows Phone > From: Mike Kelly > Sent: 12/29/2011 9:13 AM > To: Jan Algermissen > Cc: Paul Cohen; Erik Mogensen; REST-Discuss Discussion Group; craigmcc@gmail.com > Subject: Re: [rest-discuss] The "new media types are evil" meme > > > > > > On Thu, Dec 29, 2011 at 3:24 PM, Jan Algermissen <jan.algermissen@...> wrote: > > > > On Dec 29, 2011, at 3:43 PM, Paul Cohen wrote: > > > format types > > Media types not only specify syntax, they also specify intended processing. E.g. application/atom+xml does not only refer to the schema, it also refers to all the stuff written in that spec. application/atomsrv+xml not only defines the <service> schema, it also defines a whole bunch of expectations for clients and servers. > > If you say: > > Accept: application/atomsrv+xml > > you say a whole lot more than "I am able to parse <service> documents". You are saying: "I can carry on with my realized use case if you answer me in application/atomsrv+xml" > > What does this buy you vs Accepting something generic like hal or even just plain xml? 
> > > Likewise, if you receive: > > 200 Ok > Content-Type: application/atomsrv+xml > > <service>...</service> > > there is vastly more stuff you as a client can assume besides "This entity will come in that schema". You will, for example, know that any <collection href="/foo"> points to an AtomPub collection and that there is a bunch of stuff you likely can do with it. > > > Those additional assumptions can instead be made by understanding the link which led the client there, which should be the case for any resource except entry points. Exposing an app this way coaxes clients into traversing your application properly (out from entry points by following links), and it implies to consumers of your app that the representation's purpose and structure are impermanent - both of these are important implications if you want to foster a non-brittle client base which will better survive evolutionary changes in your application. > > Cheers, > Mike > > > >
Meaning response is determined via conneg, not fixed based on the uri? Glenn On Thu, Dec 29, 2011 at 3:44 PM, Jan Algermissen <jan.algermissen@nordsc.com > wrote: > > On Dec 29, 2011, at 8:37 PM, Mike Kelly wrote: > > > Yes, the link relation of the referring link should be documented so the > client can interpret the targeted resource. > > > > Is that controversial? :) > > No - REST's constraints explicitly forbid this form of coupling. The > client interprets the response based on the current response. NOT based on > expectations about the resource. > > Expectations about resources (made due to discovered link semantics) drive > the construction of the request. They do not apply to the response (aka > 'self describing messages constraint'). > > Jan > > > > > > On Thu, Dec 29, 2011 at 7:19 PM, Glenn Block <glenn.block@...> > wrote: > > > > > > How does the client know how to handle the response body? Are you saying > the link documentation tells them here is what you should expect? > > > > Sent from my Windows Phone > > From: Mike Kelly > > Sent: 12/29/2011 9:13 AM > > To: Jan Algermissen > > Cc: Paul Cohen; Erik Mogensen; REST-Discuss Discussion Group; > craigmcc@gmail.com > > Subject: Re: [rest-discuss] The "new media types are evil" meme > > > > > > > > > > > > On Thu, Dec 29, 2011 at 3:24 PM, Jan Algermissen < > jan.algermissen@...> wrote: > > > > > > > > On Dec 29, 2011, at 3:43 PM, Paul Cohen wrote: > > > > > format types > > > > Media types not only specify syntax, they also specify intended > processing. E.g. application/atom+xml does not only refer to the schema, it > also refers to all the stuff written in that spec. application/atomsrv+xml > not only defines the <service> schema, it also defines a whole bunch of > expectations for clients and servers. > > > > If you say: > > > > Accept: application/atomsrv+xml > > > > you say a whole lot more than "I am able to parse <service> documents". 
> You are saying: "I can carry on with my realized use case if you answer me > in application/atomsrv+xml" > > > > What does this buy you vs Accepting something generic like hal or even > just plain xml? > > > > > > Likewise, if you receive: > > > > 200 Ok > > Content-Type: application/atomsrv+xml > > > > <service>...</service> > > > > there is vastly more stuff you as a client can assume besides "This > entity will come in that schema". You will, for example, know that any > <collection href="/foo"> points to an AtomPub collection and that there is > a bunch of stuff you likely can do with it. > > > > > > Those additional assumptions can instead be made by understanding the > link which led the client there, which should be the case for any resource > except entry points. Exposing an app this way coaxes clients into > traversing your application properly (out from entry points by following > links), and it implies to consumers of your app that the representation's > purpose and structure are impermanent - both of these are important > implications if you want to foster a non-brittle client base which will > better survive evolutionary changes in your application. > > > > Cheers, > > Mike > > > > > > > > > >
On Thu, Dec 29, 2011 at 11:44 PM, Jan Algermissen <jan.algermissen@...> wrote: > > On Dec 29, 2011, at 8:37 PM, Mike Kelly wrote: > >> Yes, the link relation of the referring link should be documented so the client can interpret the targeted resource. >> >> Is that controversial? :) > > No - REST's constraints explicitly forbid this form of coupling. The client interprets the response based on the current response. NOT based on expectations about the resource. > I disagree - which constraint are you claiming is explicit about this? Are there any specific quotes from the dissertation you are thinking about? > > Expectations about resources (made due to discovered link semantics) drive the construction of the request. They do not apply to the response (aka 'self describing messages constraint'). > The purpose of self-descriptiveness is to enable intermediate processing, so why do you insist application semantics in the message body must be processable by intermediaries? In fact, the kind of semantics that should be self-descriptive in the message body are general mechanisms like ESI (not application semantics). Incidentally, a capability I'm looking to work into HAL is something equivalent to ESI - for that very reason. A couple of relevant quotes from the dissertation: "REST enables intermediate processing by constraining messages to be self-descriptive" "Within REST, intermediary components can actively transform the content of messages because the messages are self-descriptive and their semantics are visible to intermediaries" Cheers, Mike
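The "semantics visible to intermediaries" point can be sketched concretely: an intermediary acts purely on standardized, self-descriptive metadata, with no application knowledge. The rule below is a deliberately simplified illustration (real HTTP caching is considerably more involved):

```python
def intermediary_can_cache(headers):
    """Decide cacheability purely from self-descriptive metadata.

    The intermediary needs no application knowledge: the standardized
    Cache-Control header carries everything required. This is a
    simplified rule, not the full HTTP caching model.
    """
    cache_control = headers.get("Cache-Control", "")
    directives = [d.strip() for d in cache_control.split(",") if d.strip()]
    if "no-store" in directives or "private" in directives:
        return False
    return any(d.startswith("max-age=") for d in directives)

assert intermediary_can_cache({"Cache-Control": "max-age=3600"})
assert not intermediary_can_cache({"Cache-Control": "private, max-age=60"})
```
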
On Dec 30, 2011, at 1:09 AM, Glenn Block wrote: > Meaning response is determined via conneg, not fixed based on the uri? Neither. The meaning of the response is entirely in the response. Jan > > > Glenn > > On Thu, Dec 29, 2011 at 3:44 PM, Jan Algermissen <jan.algermissen@...> wrote: > > On Dec 29, 2011, at 8:37 PM, Mike Kelly wrote: > > > Yes, the link relation of the referring link should be documented so the client can interpret the targeted resource. > > > > Is that controversial? :) > > No - REST's constraints explicitly forbid this form of coupling. The client interprets the response based on the current response. NOT based on expectations about the resource. > > Expectations about resources (made due to discovered link semantics) drive the construction of the request. They do not apply to the response (aka 'self describing messages constraint'). > > Jan > > > > > > On Thu, Dec 29, 2011 at 7:19 PM, Glenn Block <glenn.block@...> wrote: > > > > > > How does the client know how to handle the response body? Are you saying the link documentation tells them here is what you should expect? > > > > Sent from my Windows Phone > > From: Mike Kelly > > Sent: 12/29/2011 9:13 AM > > To: Jan Algermissen > > Cc: Paul Cohen; Erik Mogensen; REST-Discuss Discussion Group; craigmcc@... > > Subject: Re: [rest-discuss] The "new media types are evil" meme > > > > > > > > > > > > On Thu, Dec 29, 2011 at 3:24 PM, Jan Algermissen <jan.algermissen@...> wrote: > > > > > > > > On Dec 29, 2011, at 3:43 PM, Paul Cohen wrote: > > > > > format types > > > > Media types not only specify syntax, they also specify intended processing. E.g. application/atom+xml does not only refer to the schema, it also refers to all the stuff written in that spec. application/atomsrv+xml not only defines the <service> schema, it also defines a whole bunch of expectations for clients and servers. 
> > > > If you say: > > > > Accept: application/atomsrv+xml > > > > you say a whole lot more than "I am able to parse <service> documents". You are saying: "I can carry on with my realized use case if you answer me in application/atomsrv+xml" > > > > What does this buy you vs Accepting something generic like hal or even just plain xml? > > > > > > Likewise, if you receive: > > > > 200 Ok > > Content-Type: application/atomsrv+xml > > > > <service>...</service> > > > > there is vastly more stuff you as a client can assume besides "This entity will come in that schema". You will, for example, know that any <collection href="/foo"> points to an AtomPub collection and that there is a bunch of stuff you likely can do with it. > > > > > > Those additional assumptions can instead be made by understanding the link which led the client there, which should be the case for any resource except entry points. Exposing an app this way coaxes clients into traversing your application properly (out from entry points by following links), and it implies to consumers of your app that the representation's purpose and structure are impermanent - both of these are important implications if you want to foster a non-brittle client base which will better survive evolutionary changes in your application. > > > > Cheers, > > Mike > > > > > > > > > > > >
Jan If the link relation is documented / registered, where is the harm if it says in the documentation these are supported media types (not necessarily being the exclusive set) and this is how the data will look for those types? On Thu, Dec 29, 2011 at 4:26 PM, Mike Kelly <mike@...> wrote: > On Thu, Dec 29, 2011 at 11:44 PM, Jan Algermissen > <jan.algermissen@...> wrote: > > > > On Dec 29, 2011, at 8:37 PM, Mike Kelly wrote: > > > >> Yes, the link relation of the referring link should be documented so > the client can interpret the targeted resource. > >> > >> Is that controversial? :) > > > > No - REST's constraints explicitly forbid this form of coupling. The > client interprets the response based on the current response. NOT based on > expectations about the resource. > > > > I disagree, which constraint are you claiming is explicit about this? > Are there any specific quotes from the dissertation you are thinking > about? > > > > > Expectations about resources (made due to discovered link semantics) > drive the construction of the request. They do not apply to the response > (aka 'self describing messages constraint'). > > > > The purpose of self-descriptiveness is to enable intermediate > processing, so why do you insist application semantics in the message > body must be processable by intermediaries? > > In fact, the kind of semantics that should be self-descriptive in the > message body are general mechanisms like ESI (not application > semantics). Incidentally, a capability I'm looking work into HAL is > something equivalent to ESI - for that very reason. > > couple of relevant quotes from the dissertation: > > "REST enables intermediate processing by constraining messages to be > self-descriptive" > > "Within REST, intermediary components can actively transform the > content of messages because the messages are self-descriptive and > their semantics are visible to intermediaries" > > Cheers, > Mike >
On Dec 30, 2011, at 3:35 AM, Glenn Block wrote: > Jan > > If the link relation is documented / registered, where is the harm if it says in the documentation these are supported media types (not necessarily being the exclusive set) and this is how the data will look for those types? Coupling. The server might change and send something different than you expect. In order to facilitate fragmented change of system components REST deliberately rules out such coupling. If clients and servers were coupled around such expectations, how could a server ever change without informing all clients? Conneg is the mechanism that allows for capability negotiation at runtime (instead of at design time with the inevitable coupling) and besides that the burden really is on the client to deal with *any* response in a sensible way. IOW, you cannot expect the server to behave in a certain, application specific way. HTTP *is* the application protocol already. There is no next layer. See my initial question and Roy's reply in this thread: http://www.imc.org/atom-protocol/mail-archive/msg11463.html http://www.imc.org/atom-protocol/mail-archive/msg11487.html Jan > > On Thu, Dec 29, 2011 at 4:26 PM, Mike Kelly <mike@...> wrote: > On Thu, Dec 29, 2011 at 11:44 PM, Jan Algermissen > <jan.algermissen@...> wrote: > > > > On Dec 29, 2011, at 8:37 PM, Mike Kelly wrote: > > > >> Yes, the link relation of the referring link should be documented so the client can interpret the targeted resource. > >> > >> Is that controversial? :) > > > > No - REST's constraints explicitly forbid this form of coupling. The client interprets the response based on the current response. NOT based on expectations about the resource. > > > > I disagree, which constraint are you claiming is explicit about this? > Are there any specific quotes from the dissertation you are thinking > about? > > > > > Expectations about resources (made due to discovered link semantics) drive the construction of the request. 
They do not apply to the response (aka 'self describing messages constraint'). > > > > The purpose of self-descriptiveness is to enable intermediate > processing, so why do you insist application semantics in the message > body must be processable by intermediaries? > > In fact, the kind of semantics that should be self-descriptive in the > message body are general mechanisms like ESI (not application > semantics). Incidentally, a capability I'm looking work into HAL is > something equivalent to ESI - for that very reason. > > couple of relevant quotes from the dissertation: > > "REST enables intermediate processing by constraining messages to be > self-descriptive" > > "Within REST, intermediary components can actively transform the > content of messages because the messages are self-descriptive and > their semantics are visible to intermediaries" > > Cheers, > Mike >
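Jan's point that conneg negotiates capabilities at runtime (rather than fixing them at design time) can be sketched as a server-side selection function. This is only an illustration of the idea: quality values and ranges like `text/*` are ignored, and the media types and bodies are hypothetical:

```python
def negotiate(accept_header, available):
    """Pick a representation for an Accept header (simplified conneg).

    `available` maps media type -> render function. The first offered
    type the server supports wins; q-values and partial ranges are
    ignored in this sketch.
    """
    offered = [m.strip().split(";")[0] for m in accept_header.split(",")]
    for media_type in offered:
        if media_type in available:
            return media_type, available[media_type]()
        if media_type == "*/*" and available:
            first = next(iter(available))
            return first, available[first]()
    return None, None  # caller would respond 406 Not Acceptable

available = {
    "application/xml": lambda: "<service>...</service>",
    "application/json": lambda: '{"service": "..."}',
}

media_type, body = negotiate("application/json, */*;q=0.1", available)
```
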
I would say that root urls/entry points are the elements that are actually documented "out of band". If I decide to use Amazon's webservice then they publish their root url and tell me what to expect there - and I would be rather surprised if it returned fish&chips instead of a bookshop (but, then again, it is Amazon ...:-)
If the root url returns HTML then that could contain a nice description of how to use exactly that webservice ... actually that's already happening, isn't it? I haven't looked for it, but Amazon surely has a public HTML page that documents the URLs of their webservice.
It requires a slightly different view on the solution - but you could in principle consider their documentation page as THE root url of their webservice. Right?
/Jørn
----- Original Message -----
From: Glenn Block
To: Mike Kelly
Cc: Jan Algermissen ; Paul Cohen ; Erik Mogensen ; REST-Discuss Discussion Group ; craigmcc@...
Sent: Thursday, December 29, 2011 11:07 PM
Subject: Re: [rest-discuss] The "new media types are evil" meme
Well yeah, but the advantage of using the link rels is I can discover the url to find the docs. The root url docs are not discoverable in the same way / visible.
Meaning it is not something present within the response headers / body, that is unless you somehow annotate the response or have a standard mechanism to look up documentation for an associated root url. For example (and no I am not saying this is a good idea), let's you had a standard link relation similar to SELF which pointed to documentation about ME. Of course you could get that with a custom media type registered in IANA, but that is what this approach was avoiding.
On Thu, Dec 29, 2011 at 1:02 PM, Mike Kelly <mike@...> wrote:
Well that's what makes an entry point an entry point, right?
On Thu, Dec 29, 2011 at 8:53 PM, Glenn Block <glenn.block@...> wrote:
What about the root url / entry point? How would you document that, it has no rel......
I can see the value of this approach in that you are gaining visibility of the specific semantics via relations and still keeping to a predefined set of media types rather than minting new types.
Sent from my Windows Phone
--------------------------------------------------------------------------
From: Mike Kelly
Sent: 12/29/2011 12:45 PM
To: Glenn Block
Cc: Jan Algermissen; Paul Cohen; Erik Mogensen; REST-Discuss Discussion Group; craigmcc@...
Subject: Re: [rest-discuss] The "new media types are evil" meme
I use URLs for the relations which correspond to a web page containing the documentation.
Yes, I'm saying no new media type - just define your application in terms of link relations and expose their documentation somewhere that is visible to your consumers. You could register them with IANA, create an internal registry.. personally I just create URLs as necessary on a server that I know I can control over time.
On Thu, Dec 29, 2011 at 8:16 PM, Glenn Block <glenn.block@...> wrote:
Where are you documenting the links? Are you saying no new media type, but link relations are registered?
Sent from my Windows Phone
------------------------------------------------------------------------
From: Mike Kelly
Sent: 12/29/2011 11:37 AM
To: Glenn Block
Cc: Jan Algermissen; Paul Cohen; Erik Mogensen; REST-Discuss Discussion Group; craigmcc@...
Subject: Re: [rest-discuss] The "new media types are evil" meme
Yes, the link relation of the referring link should be documented so the client can interpret the targeted resource.
Is that controversial? :)
On Thu, Dec 29, 2011 at 7:19 PM, Glenn Block <glenn.block@...> wrote:
How does the client know how to handle the response body? Are you saying the link documentation tells them here is what you should expect?
Sent from my Windows Phone
----------------------------------------------------------------------
From: Mike Kelly
Sent: 12/29/2011 9:13 AM
To: Jan Algermissen
Cc: Paul Cohen; Erik Mogensen; REST-Discuss Discussion Group; craigmcc@...
Subject: Re: [rest-discuss] The "new media types are evil" meme
On Thu, Dec 29, 2011 at 3:24 PM, Jan Algermissen <jan.algermissen@nordsc.com> wrote:
On Dec 29, 2011, at 3:43 PM, Paul Cohen wrote:
> format types
Media types not only specify syntax, they also specify intended processing. E.g. application/atom+xml does not only refer to the schema, it also refers to all the stuff written in that spec. application/atomsrv+xml not only defines the <service> schema, it also defines a whole bunch of expectations for clients and servers.
If you say:
Accept: application/atomsrv+xml
you say a whole lot more than "I am able to parse <service> documents". You are saying: "I can carry on with my realized use case if you answer me in application/atomsrv+xml"
What does this buy you vs Accepting something generic like hal or even just plain xml?
Likewise, if you receive:
200 Ok
Content-Type: application/atomsrv+xml
<service>...</service>
there is vastly more stuff you as a client can assume besides "This entity will come in that schema". You will, for example, know that any <collection href="/foo"> points to an AtomPub collection and that there is a bunch of stuff you likely can do with it.
Those additional assumptions can instead be made by understanding the link which led the client there, which should be the case for any resource except entry points. Exposing an app this way coaxes clients into traversing your application properly (out from entry points by following links), and it implies to consumers of your app that the representation's purpose and structure are impermanent - both of these are important implications if you want to foster a non-brittle client base which will better survive evolutionary changes in your application.
Cheers,
Mike
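Mike's approach above (extension rels that are themselves URLs of their own documentation, inside a generic media type) might look like the following HAL-style sketch. The representation, rel URL, and hrefs are all hypothetical:

```python
import json

# A HAL-style representation (hypothetical) in which the extension link
# relation is itself a URL pointing at its human-readable documentation,
# so no new media type has to be minted.
doc = json.loads("""
{
  "_links": {
    "self": {"href": "/orders/123"},
    "https://docs.example.org/rels/payment": {"href": "/orders/123/payment"}
  },
  "total": 30.0
}
""")

def follow(representation, rel):
    """Return the href for a link relation, or None if absent."""
    link = representation.get("_links", {}).get(rel)
    return link["href"] if link else None

# The rel string doubles as the location of its documentation.
payment_url = follow(doc, "https://docs.example.org/rels/payment")
```
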
On Dec 30, 2011, at 10:28 AM, Jørn Wildt wrote: > > I would say that root urls/entry points are the elements that are actually documented "out of band". If I decide to use Amazon's webservice then they publish their root url and tell me what to expect there - and I would be rather suprised if it returned fish&chips instead of a bookshop (but, then again, it is Amazon ...:-) > > If the root url returns HTML then that could contain a nice description of how to use exactly that webservce ... actually that's already happening isn't it: I haven't looked for it, but Amazon sure has a public HTML page that documents the URLs of their webservice. > > It requires a slightly different view on the solution - but you could in principle consider thier documentation page as THE root url of their webservice. Right? If you use such a description as design time information, you are creating coupling. If you process such information at run time (aka 'forms') that's fine. (Since you cannot process Amazon's HTML API descriptions automatically at runtime (they are just not designed for that purpose) you will inevitably create a non-RESTful application when interacting with Amazon. IOW: Amazon's Web services are not RESTful. Not a bit.) (Using a specific media type instead of an HTML API description would have fixed that.) Jan > > /Jørn > > > ----- Original Message ----- > From: Glenn Block > To: Mike Kelly > Cc: Jan Algermissen ; Paul Cohen ; Erik Mogensen ; REST-Discuss Discussion Group ; craigmcc@... > Sent: Thursday, December 29, 2011 11:07 PM > Subject: Re: [rest-discuss] The "new media types are evil" meme > > > Well yeah, but the advantage of using the link rels is I can discover the url to find the docs. The root url docs are not discoverable in the same way / visible. > > > Meaning it is not something present within the response headers / body, that is unless you somehow annotate the response or have a standard mechanism to look up documentation for an associated root url. 
For example (and no I am not saying this is a good idea), let's you had a standard link relation similar to SELF which pointed to documentation about ME. Of course you could get that with a custom media type registered in IANA, but that is what this approach was avoiding. > > On Thu, Dec 29, 2011 at 1:02 PM, Mike Kelly <mike@...> wrote: > Well that's what makes an entry point an entry point, right? > > On Thu, Dec 29, 2011 at 8:53 PM, Glenn Block <glenn.block@...> wrote: > > > What about the root url / entry point? How would you document that, it has no rel...... > > I can see the value of this approach in that you are gaining visibility of the specific semantics via relations and still keeping to a predefined set of media types rather than minting new types. > > > Sent from my Windows Phone > From: Mike Kelly > Sent: 12/29/2011 12:45 PM > > To: Glenn Block > Cc: Jan Algermissen; Paul Cohen; Erik Mogensen; REST-Discuss Discussion Group; craigmcc@... > Subject: Re: [rest-discuss] The "new media types are evil" meme > > I use URLs for the relations which correspond to a web page containing the documentation. > > Yes, I'm saying no new media type - just define your application in terms of link relations and expose their documentation somewhere that is visible to your consumers. You could register them with IANA, create an internal registry.. personally I just create URLs as necessary on a server that I know I can control over time. > > > On Thu, Dec 29, 2011 at 8:16 PM, Glenn Block <glenn.block@...> wrote: > > > Where are you documenting the links? Are you saying no new media type, but link relations are registered? > > Sent from my Windows Phone > From: Mike Kelly > Sent: 12/29/2011 11:37 AM > To: Glenn Block > Cc: Jan Algermissen; Paul Cohen; Erik Mogensen; REST-Discuss Discussion Group; craigmcc@... 
> > Subject: Re: [rest-discuss] The "new media types are evil" meme > > Yes, the link relation of the referring link should be documented so the client can interpret the targeted resource. > > Is that controversial? :) > > On Thu, Dec 29, 2011 at 7:19 PM, Glenn Block <glenn.block@gmail.com> wrote: > > > How does the client know how to handle the response body? Are you saying the link documentation tells them here is what you should expect? > > Sent from my Windows Phone > From: Mike Kelly > Sent: 12/29/2011 9:13 AM > To: Jan Algermissen > Cc: Paul Cohen; Erik Mogensen; REST-Discuss Discussion Group; craigmcc@... > Subject: Re: [rest-discuss] The "new media types are evil" meme > > > > > > On Thu, Dec 29, 2011 at 3:24 PM, Jan Algermissen <jan.algermissen@...> wrote: > > > > On Dec 29, 2011, at 3:43 PM, Paul Cohen wrote: > > > format types > > Media types not only specify syntax, they also specify intended processing. E.g. application/atom+xml does not only refer to the schema, it also refers to all the stuff written in that spec. application/atomsrv+xml not only defines the <service> schema, it also defines a whole bunch of expectations for clients and servers. > > If you say: > > Accept: application/atomsrv+xml > > you say a whole lot more than "I am able to parse <service> documents". You are saying: "I can carry on with my realized use case if you answer me in application/atomsrv+xml" > > What does this buy you vs Accepting something generic like hal or even just plain xml? > > > Likewise, if you receive: > > 200 Ok > Content-Type: application/atomsrv+xml > > <service>...</service> > > there is vastly more stuff you as a client can assume besides "This entity will come in that schema". You will, for example, know that any <collection href="/foo"> points to an AtomPub collection and that there is a bunch of stuff you likely can do with it. 
> > > Those additional assumptions can instead be made by understanding the link which led the client there, which should be the case for any resource except entry points. Exposing an app this way coaxes clients into traversing your application properly (out from entry points by following links), and it implies to consumers of your app that the representation's purpose and structure are impermanent - both of these are important implications if you want to foster a non-brittle client base which will better survive evolutionary changes in your application. > > Cheers, > Mike > > > > > > > > > > >
On Fri, Dec 30, 2011 at 9:03 AM, Jan Algermissen <jan.algermissen@...> wrote: > ** > > > > On Dec 30, 2011, at 3:35 AM, Glenn Block wrote: > > > Jan > > > > If the link relation is documented / registered, where is the harm if it > says in the documentation these are supported media types (not necessarily > being the exclusive set) and this is how the data will look for those types? > > Coupling. The server might change and send something different than you > expect. In order to facilitate fragmented change of system components REST > deliberately rules out such coupling. If clients and servers were coupled > around such expectations, how could a server ever change without informing > all clients? > > Provided those expectations are still met by the change (i.e. it is a non-breaking change) then this is exactly the behaviour you want from your system. That is how applications should grow and evolve. If the change is breaking then a new set of link relations should be introduced that can run in parallel (aka 'versioning') and, if necessary, eventually supersede the previous ones (aka 'system wide upgrade'). Cheers, Mike
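Mike's parallel-relations versioning idea could be sketched like this: the server advertises old and new rels side by side, and each client follows whichever relations it understands. The rel URLs and hrefs are illustrative only:

```python
# Server advertises both generations of a link relation in parallel.
links = {
    "https://docs.example.org/rels/search": "/search",          # original
    "https://docs.example.org/rels/search-v2": "/v2/search",    # successor
}

def choose(links, understood):
    """Return the first advertised link whose rel the client knows.

    `understood` is the client's preference order; an old client simply
    never asks for the v2 rel and keeps working unchanged.
    """
    for rel in understood:
        if rel in links:
            return rel, links[rel]
    return None, None

old_rel, _ = choose(links, ["https://docs.example.org/rels/search"])
new_rel, _ = choose(links, ["https://docs.example.org/rels/search-v2",
                            "https://docs.example.org/rels/search"])
```
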
On Fri, Dec 30, 2011 at 9:35 AM, Jan Algermissen <jan.algermissen@...> wrote: > > On Dec 30, 2011, at 10:28 AM, Jørn Wildt wrote: > >> >> I would say that root urls/entry points are the elements that are actually documented "out of band". If I decide to use Amazon's webservice then they publish their root url and tell me what to expect there - and I would be rather suprised if it returned fish&chips instead of a bookshop (but, then again, it is Amazon ...:-) > > >> >> If the root url returns HTML then that could contain a nice description of how to use exactly that webservce ... actually that's already happening isn't it: I haven't looked for it, but Amazon sure has a public HTML page that documents the URLs of their webservice. >> >> It requires a slightly different view on the solution - but you could in principle consider thier documentation page as THE root url of their webservice. Right? > > If you use such a description as deign time information, you are creating coupling. If you process such information at run time (aka 'forms') that's fine. > > (Since you cannot process Amazon's HTML API descriptions automatically at runtime (they are just not design for that purpose) you will inevitably create a non RESTful application when interacting with Amazon. IOW: Amazon's Web services are not RESTful. Not a bit. > > (Using a specific media type instead of an HTML Api description would have fixed that) Are you sure about that? "A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience." -- Roy Fielding
On Thu, Dec 29, 2011 at 4:15 PM, Jan Algermissen <jan.algermissen@...> wrote: > On Dec 29, 2011, at 3:43 PM, Paul Cohen wrote: > > However my understanding of Media Type as defined in RFC 2046 > > (http://tools.ietf.org/html/rfc2046) is that it is to be used to > > identify media format types or representation format types, and not > > conceptual types. > > There simply is no notion of 'conceptual type' in REST. Errmh, no. I didn't say that either. I'm saying there are design and implementation aspects of system interfaces that are not covered by HTTP. At some level developers (human beings) need to communicate and reason about the software they write. Every web service provides information about something. Apart from the practical integration of a client with a server we need to be able to discuss the "something" rationale behind a given service. Otherwise I as a developer won't know if the information of a given service is of interest to me. This reasoning between developers (human beings) is at a conceptual level. Furthermore, software is not only meant for computers, it's also meant for humans to read and understand and reason about. Maybe the term "conceptual type" was unfortunate. My point in the discussion was that it may be of interest to talk about the concepts and information a service is meant to provide in order to then be able to reason about what media types to use or invent for a given service. > Let me say it again: the problem that is being tried to solve does not exist. Is this your way of saying there is nothing to discuss? Or are you saying there is no problem in deciding whether to define new media types or not? My understanding of the discussion was that we were discussing heuristics for inventing (or not inventing) new media types. /Paul -- Paul Cohen www.seibostudios.se mobile: +46 730 787 035 e-mail: paul.cohen@...
hmm, I don't see how it is any different than describing a media type. A media type specification will describe the elements/components that can be present in the content. New elements can be added which do not break existing clients. In the same way this documentation is describing the response. As the system evolves that format can evolve without breaking existing clients. Sent from my Windows Phone From: Jan Algermissen Sent: 12/30/2011 1:03 AM To: Glenn Block Cc: Mike Kelly; Paul Cohen; Erik Mogensen; REST-Discuss Discussion Group; craigmcc@... Subject: Re: [rest-discuss] The "new media types are evil" meme On Dec 30, 2011, at 3:35 AM, Glenn Block wrote: > Jan > > If the link relation is documented / registered, where is the harm if it says in the documentation these are supported media types (not necessarily being the exclusive set) and this is how the data will look for those types? Coupling. The server might change and send something different than you expect. In order to facilitate fragmented change of system components REST deliberately rules out such coupling. If clients and servers were coupled around such expectations, how could a server ever change without informing all clients? Conneg is the mechanism that allows for capability negotiation at runtime (instead of at design time with the inevitable coupling) and besides that the burden really is on the client to deal with *any* response in a sensible way. IOW, you cannot expect the server to behave in a certain, application specific way. HTTP *is* the application protocol already. There is no next layer. 
See my initial question and Roy's reply in this thread: http://www.imc.org/atom-protocol/mail-archive/msg11463.html http://www.imc.org/atom-protocol/mail-archive/msg11487.html Jan > > On Thu, Dec 29, 2011 at 4:26 PM, Mike Kelly <mike@...> wrote: > On Thu, Dec 29, 2011 at 11:44 PM, Jan Algermissen > <jan.algermissen@...> wrote: > > > > On Dec 29, 2011, at 8:37 PM, Mike Kelly wrote: > > > >> Yes, the link relation of the referring link should be documented so the client can interpret the targeted resource. > >> > >> Is that controversial? :) > > > > No - REST's constraints explicitly forbid this form of coupling. The client interprets the response based on the current response. NOT based on expectations about the resource. > > > > I disagree, which constraint are you claiming is explicit about this? > Are there any specific quotes from the dissertation you are thinking > about? > > > > > Expectations about resources (made due to discovered link semantics) drive the construction of the request. They do not apply to the response (aka 'self describing messages constraint'). > > > > The purpose of self-descriptiveness is to enable intermediate > processing, so why do you insist application semantics in the message > body must be processable by intermediaries? > > In fact, the kind of semantics that should be self-descriptive in the > message body are general mechanisms like ESI (not application > semantics). Incidentally, a capability I'm looking to work into HAL is > something equivalent to ESI - for that very reason. > > couple of relevant quotes from the dissertation: > > "REST enables intermediate processing by constraining messages to be > self-descriptive" > > "Within REST, intermediary components can actively transform the > content of messages because the messages are self-descriptive and > their semantics are visible to intermediaries" > > Cheers, > Mike >
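Glenn's point above, that a media type specification can gain new elements without breaking existing clients, rests on clients reading only the elements they know and ignoring the rest. A minimal sketch of that must-ignore behaviour, with invented element names:

```python
# Sketch of "new elements can be added which do not break existing
# clients": this client extracts only the fields it knows and silently
# ignores anything added later. Element names are invented.
import xml.etree.ElementTree as ET

def read_order(xml_text):
    root = ET.fromstring(xml_text)
    known = {}
    for child in root:
        if child.tag in ("total", "currency"):  # the fields this client knows
            known[child.tag] = child.text
        # anything else (e.g. a later-added <discount>) is ignored
    return known

v1 = "<order><total>42</total><currency>EUR</currency></order>"
v2 = "<order><total>42</total><currency>EUR</currency><discount>5</discount></order>"
# An evolved document yields the same result for an old client:
assert read_order(v1) == read_order(v2)
```

The same format can therefore evolve (a non-breaking change in Mike's sense) without any client being informed in advance.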
Sorry it is not describing the response, it is describing the elements/components for a particular representation. Sent from my Windows Phone From: Glenn Block Sent: 12/30/2011 1:58 AM To: Jan Algermissen Cc: Mike Kelly; Paul Cohen; Erik Mogensen; REST-Discuss Discussion Group; craigmcc@... Subject: RE: [rest-discuss] The "new media types are evil" meme
Jan, assuming I have followed a link with the rel-type "at-this-url-you-will-find-a-sales-order". Are you saying that the server could return *anything*, like for instance the representation of a car or an address book? Or are you allowing the client to assume it is a sales order - without any assumption of the actual format it is represented in? /Jørn --- In rest-discuss@yahoogroups.com, Jan Algermissen <jan.algermissen@...> wrote: > > > On Dec 30, 2011, at 3:35 AM, Glenn Block wrote: > > > Jan > > > > If the link relation is documented / registered, where is the harm if it says in the documentation these are supported media types (not necessarily being the exclusive set) and this is how the data will look for those types? > > Coupling. The server might change and send something different than you expect. In order to facilitate fragmented change of system components REST deliberately rules out such coupling. If clients and servers were coupled around such expectations, how could a server ever change without informing all clients? > > Conneg is the mechanism that allows for capability negotiation at runtime (instead of at design time with the inevitable coupling) and besides that the burden really is on the client to deal with *any* response in a sensible way. IOW, you cannot expect the server to behave in a certain, application specific way. HTTP *is* the application protocol already. There is no next layer. > > > See my initial question and Roy's reply in this thread: > > http://www.imc.org/atom-protocol/mail-archive/msg11463.html > > http://www.imc.org/atom-protocol/mail-archive/msg11487.html > > Jan > > > > > > On Thu, Dec 29, 2011 at 4:26 PM, Mike Kelly <mike@...> wrote: > > On Thu, Dec 29, 2011 at 11:44 PM, Jan Algermissen > > <jan.algermissen@...> wrote: > > > > > > On Dec 29, 2011, at 8:37 PM, Mike Kelly wrote: > > > > > >> Yes, the link relation of the referring link should be documented so the client can interpret the targeted resource. 
> > >> > > >> Is that controversial? :) > > > > > > No - REST's constraints explicitly forbid this form of coupling. The client interprets the response based on the current response. NOT based on expectations about the resource. > > > > > > > I disagree, which constraint are you claiming is explicit about this? > > Are there any specific quotes from the dissertation you are thinking > > about? > > > > > > > > Expectations about resources (made due to discovered link semantics) drive the construction of the request. They do not apply to the response (aka 'self describing messages constraint'). > > > > > > > The purpose of self-descriptiveness is to enable intermediate > > processing, so why do you insist application semantics in the message > > body must be processable by intermediaries? > > > > In fact, the kind of semantics that should be self-descriptive in the > > message body are general mechanisms like ESI (not application > > semantics). Incidentally, a capability I'm looking work into HAL is > > something equivalent to ESI - for that very reason. > > > > couple of relevant quotes from the dissertation: > > > > "REST enables intermediate processing by constraining messages to be > > self-descriptive" > > > > "Within REST, intermediary components can actively transform the > > content of messages because the messages are self-descriptive and > > their semantics are visible to intermediaries" > > > > Cheers, > > Mike > > >
> If I say: > > Content-Type: application/xml > > What did I tell you? Depending on your point of view on rel-types, then, if I followed a link with rel-type = "you-can-get-the-sales-order-here" I would know that I got a sales order encoded in XML. Then I could look into the XML and check whatever root-element/namespaces it used and switch on that. At least this behavior is encouraged by Roy's comment in http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven#comment-753 (last sentence): "What I look for are requirements on processing behavior that are defined outside of the media type specification. One of the easiest ways to see that is when a protocol calls for the use of a generic media type (like application/xml or application/json) and then requires that it be processed in a way that is special to the protocol/API. If they are keying off of something unique within the content (like an XML namespace declaration that extends the semantics of a generic type), then it's okay." In the end it all boils down to the interpretation of link-rel-types and media types - which one tells the world how to interpret the semantic meaning of a resource? /Jørn
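Jørn's "look into the XML and switch on the root element/namespace" idea can be sketched as follows. The namespace URIs and handler names here are hypothetical, invented purely to show the dispatch mechanism:

```python
# Sketch of dispatching a generic application/xml response on the root
# element's namespace. Namespace URIs and handler names are invented.
import xml.etree.ElementTree as ET

HANDLERS = {
    "http://example.org/ns/sales-order": "handle_sales_order",
    "http://example.org/ns/invoice": "handle_invoice",
}

def dispatch(xml_text):
    root = ET.fromstring(xml_text)
    # ElementTree renders a namespaced tag as "{uri}localname"
    if root.tag.startswith("{"):
        ns = root.tag[1:].split("}")[0]
    else:
        ns = None
    return HANDLERS.get(ns, "unknown")

doc = '<order xmlns="http://example.org/ns/sales-order"><total>42</total></order>'
print(dispatch(doc))  # handle_sales_order
```

This is exactly the "keying off of something unique within the content" that Roy's quoted comment permits for a generic type; the open question in the thread is whether that knowledge belongs in the namespace or in a specific media type.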
On Dec 30, 2011, at 11:28 AM, Jorn Wildt wrote:
> Jan, assuming I have followed a link with the rel-type "at-this-url-you-will-find-a-sales-order". Are you saying that the server could return *anything*, like for instance the representation of a car or an address book? Or are you allowing the client to assume it is a sales order - without any assumption of the actual format it is represented in?
Ha, Jorn! You are right on.
Yes, the latter. There is a (somewhat vague) obligation for services not to change their nature and also, IMHO, to keep the abstract kind of resources stable. E.g. if I bookmark a product link at Amazon and send that link to my friend, I assume it will remain that product (the conceptual mapping should be stable over time). ("Cool URIs don't change"). Otherwise, the whole idea of bookmarking would be moot.
But the representation can vary (and thus evolve) and conneg (late binding of capabilities) will solve this at runtime.
BTW, this is a very interesting aspect of REST: it is constraining over time - no other arch. style does that. And you can say that REST does not achieve evolvability by being lax - indeed REST achieves evolvability by being particularly constraining - but in the right spots.
Jan
> > Jan, assuming I have followed a link with the rel-type "at-this-url-you-will-find-a-sales-order". Are you saying that the server could return *anything*, like for instance the representation of a car or an address book? Or are you allowing the client to assume it is a sales order - without any assumption of the actual format it is represented in? > > Ha, Jorn! You are right on. So, I would say, it is fair to argue that: 1) The client follows a link-rel "this-is-the-sales-order" and assumes it will get, well, a sales order - in some yet unknown format. 2) The client does not need to specify "I want a sales order" in the accept headers (it does so by the URL it has selected). 3) The client should specify what general domain-independent format it expects using accept headers. For instance "Accept: application/xml". 4) The server replies with a sales order formatted as application/xml. 5) The client looks into the XML and switches on something relevant in the XML - for instance the root namespace. 6) Based on the XML decision in (5) the client decides how to decode the XML-formatted sales order. All in all there is no need for a domain specific media type like application/vnd.salesorder+xml. Combine this with statements like "REST is software design on the scale of decades" and "Many of the constraints are directly opposed to short-term efficiency". I would say that domain specific media types are short term solutions - a thing like the sales order varies on the market/geographical region/politics and ... time, meaning it probably won't be stable over the decades. Furthermore - it may feel like bending over and backwards not to use domain specific media types. But, well "Many of the constraints are directly opposed to short-term efficiency", so, yes, it hurts now, but you will be a happy camper in ten years :-) /Jørn
Having said this doesn't necessarily mean that I agree 100% with it myself. What if I do not expect my application to live for decades? Then I wouldn't mind relaxing the constraints a bit. For instance by using a domain specific media type like application/vnd.salesorder+xml. Or if it is inside an enterprise or similar where we have a bit more control over who is using the API. Then again we can relax the constraints a bit and use domain specific media types (as has already been agreed upon in this list some time ago). And so on. By relaxing the constraints we make it easier for ourselves in the short term - but harder in the long term. Whether this is a problem or not must depend on the intended use of the application. I think the biggest problem from here is branding of our APIs. I want to call my API a REST API because people will then know what I am talking about. But strictly speaking I cannot do that since I coded it for the time frame of a few years - not decades. What do I then do? Maybe Facebook got it right: pick a different name, "Graph API" or "Linked Data API", and be big enough to brand that as something better or equal to REST. Happy new year to all of you :-) /Jørn
Jorn: interesting summary, i will volunteer my own POV here, too. first, see my previous post in this thread about the reasons i use when deciding to design a new media type[1]. note that, for me "design" includes the possibility of defining a coherent set of domain-specific decorators for an existing domain-agnostic media type (i.e. @id, @name, @class, @rel for XHTML, etc.). second, when i design my own media types my work covers not just data details but also 1) workflow identifiers to successfully express application flow options within a message, and 2) in some cases new protocol-level affordances to offer "better" maps between domain-specific actions and the target protocol (HTTP). finally, when i work on media type designs, my goal is not to improve one's ability to express selected "objects" (sales-order, customer, etc.) but, instead, the goal is to improve one's ability to express the _entire_ problem domain (sales management, accounting, banking, etc.). these are remarks based on my own explorations and recent experiences. YMMV and others may have very different perspectives on how they architect and implement solutions over HTTP with or without following Fielding's single example. [1] http://tech.groups.yahoo.com/group/rest-discuss/message/18190 mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Fri, Dec 30, 2011 at 17:04, Jorn Wildt <jw@fjeldgruppen.dk> wrote: >> > Jan, assuming I have followed a link with the rel-type "at-this-url-you-will-find-a-sales-order". Are you saying that the server could return *anything*, like for instance the representation of a car or an address book? Or are you allowing the client to assume it is a sales order - without any assumption of the actual format it is represented in? >> >> Ha, Jorn! You are right on.
Jorn: Yes, it is important to each of us to 1 - Find the set of architecture properties of key interest[1] 2 - Formulate an active approach to defining an architecture that induces these properties[2] 3 - Then, using the collective knowledge of already existing styles [3] 4 - Develop a style that meets the needs of the identified problem domain This process is the actual topic of Fielding's dissertation. "REST" is just his example "step 4" from above. Whether you are working at Facebook, Google, Yahoo, or "MySoftwareConsultancy.com" these steps are, IMO, the key to a successful implementation that meets both your immediate and long-term operational needs. Fielding provides us w/ a roadmap and a single _example_ derived from that particular roadmap. It is up to each of us to either follow his map or create our own and, in the end, accept responsibility for the architectures we implement. "[C]onsider how often we see software projects begin with adoption of the latest fad in architectural design, and only later discover whether or not the system requirements call for such an architecture. Design-by-buzzword is a common occurrence. At least some of this behavior within the software industry is due to a lack of understanding of why a given set of architectural constraints is useful. In other words, the reasoning behind good software architectures is not apparent to designers when those architectures are selected for reuse."[4] Best of luck in 2012 [1] http://www.ics.uci.edu/~fielding/pubs/dissertation/net_app_arch.htm#sec_2_3 [2] http://www.ics.uci.edu/~fielding/pubs/dissertation/web_arch_domain.htm#sec_4_3 [3] http://www.ics.uci.edu/~fielding/pubs/dissertation/net_arch_styles.htm#sec_3_1 [4] http://www.ics.uci.edu/~fielding/pubs/dissertation/introduction.htm On Fri, Dec 30, 2011 at 17:22, Jorn Wildt <jw@...> wrote: > Having said this doesn't necessarily mean that I agree 100% with it myself. > > What if I do not expect my application to live for decades?
On Dec 30, 2011, at 11:04 PM, Jorn Wildt wrote: > > > Jan, assuming I have followed a link with the rel-type "at-this-url-you-will-find-a-sales-order". Are you saying that the server could return *anything*, like for instance the representation of a car or an address book? Or are you allowing the client to assume it is a sales order - without any assumption of the actual format it is represented in? > > > > Ha, Jorn! You are right on. > > So, I would say, it is fair to argue that: > > 1) The client follows a link-rel "this-is-the-sales-order" and assumes it will get, well, a sales order - in some yet unknown format. Yes > > 2) The client does not need to specify "I want a sales order" in the accept headers (it does so by the URL it has selected). Well - the user agent has capabilities and some next stuff to do (reflecting the intent of the user; e.g. display the content of a feed or embed an image into the currently rendered page or check the price of an article in a catalog). The piece of code that does this relies on some format (e.g. an Atom feed document or some catalog document). This has to go into the Accept header: "I can do what I am up to, if you send me foo/bar". (And I *cannot* do what I am up to if you send me anything else) > 3) The client should specify what general domain-independent format it expects using accept headers. For instance "Accept: application/xml". No! Because Accept: application/xml means "I can do what I want to do if you send me anything that an XML parser will process". How can capability negotiation happen correctly if you say: give me any angle brackets? > > 4) The server replies with a sales order formatted as application/xml. > > 5) The client looks into the XML and switches on something relevant in the XML - for instance the root namespace. And if it cannot handle that XML? You tunneled application semantics over HTTP, using HTTP as transport, not transfer. Leading to a 200 instead of 406 at the protocol level.
tricking caches into treating it as a good response. Etc. > > 6) Based on the XML decision in (5) the client decides how to decode the XML-formatted sales order. > > All in all there is no need for a domain specific media type like application/vnd.salesorder+xml. Then, please, tell me why we have text/html, application/atom+xml, application/vcard+xml, application/calendar+xml, etc. in the first place? Is that all unnecessary, wasted effort? Could the same be achieved by just saying application/xml? BTW, tell me whether <html/> is text/html or application/xslt+xml - you can't. > > Combine this with statements like "REST is software design on the scale of decades" and "Many of the constraints are directly opposed to short-term efficiency". > > I would say that domain specific media types are short term solutions - a thing like the sales order varies on the market/geographical region/politics and ... time, meaning it probably won't be stable over the decades. > > Furthermore - it may feel like bending over and backwards not to use domain specific media types. But, well "Many of the constraints are directly opposed to short-term efficiency", so, yes, it hurts now, but you will be a happy camper in ten years :-) > Sorry, but this is again pretending there is a problem that needs to be solved - but there isn't. Jan > /Jørn > >
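Jan's position, that the Accept header must name the concrete formats the client can actually process and that anything else should surface as a protocol-level failure (406) rather than a guess over generic XML, can be sketched like this. The vnd media type and the handler table are invented for illustration:

```python
# Sketch of conneg as Jan describes it: advertise exactly what this
# client's code can handle, and treat any other response as a failure.
# The vnd media type name below is invented for illustration.
HANDLERS = {
    "application/atom+xml": lambda body: ("feed", body),
    "application/vnd.example.order+xml": lambda body: ("order", body),
}

def accept_header():
    # Sent as: Accept: application/atom+xml, application/vnd.example.order+xml
    return ", ".join(HANDLERS)

def handle_response(content_type, status, body):
    if status == 406 or content_type not in HANDLERS:
        # protocol-level failure, not a 200 to be picked apart by guesswork
        raise ValueError("server cannot satisfy this client's capabilities")
    return HANDLERS[content_type](body)

kind, _ = handle_response("application/atom+xml", 200, "<feed/>")
print(kind)  # feed
```

Compare this with the `Accept: application/xml` approach: there, a response the client cannot handle still arrives as a 200 and the failure is discovered only inside the payload, which is Jan's objection.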
On Dec 30, 2011, at 10:43 AM, Mike Kelly wrote: >> (Using a specific media type instead of an HTML Api description would have fixed that) > > Are you sure about that? > > "A REST API should be entered with no prior knowledge beyond the > initial URI (bookmark) and set of standardized media types that are > appropriate for the intended audience." -- Roy Fielding Yes, very much so. Your quote essentially says just that, eh? Jan
On Fri, Dec 30, 2011 at 2:44 PM, Jan Algermissen <jan.algermissen@... > wrote: > > Sorry, but this is again pretending there is a problem that needs to be > solved - but there isn't. > > Oh, there's a problem to be solved all right (mapping from specified media type to the semantic meaning). But the "new media types are evil" meme just declares that problem to be an "exercise left for the reader" from the perspective of the REST architectural pattern. Which is too bad, because the existing machinery could be used to solve that problem as well, without giving up the long term advantages. > Jan > Craig
On Dec 30, 2011, at 10:46 AM, Paul Cohen wrote:
> On Thu, Dec 29, 2011 at 4:15 PM, Jan Algermissen
> <jan.algermissen@...> wrote:
> > On Dec 29, 2011, at 3:43 PM, Paul Cohen wrote:
> > > However my understanding of Media Type as defined in RFC 2046
> > > (http://tools.ietf.org/html/rfc2046) is that it is to be used to
> > > identify media format types or representation format types, and not
> > > conceptual types.
> >
> > There simply is no notion of 'conceptual type' in REST.
>
> Errmh, no. I didn't say that either. I'm saying there are design and
> implementation aspects of system interfaces that are not covered by
> HTTP. At some level developers (human beings) need to communicate and
> reason about the software they write. Every web service provides
> information about something. Apart from the practical integration of a
> client with a server we need to be able to discuss the "something"
> rationale behind a given service.
No. You discuss the rationale behind service 'kinds' and that is part of setting up the media type. The kind 'feed server' is (implicitly I guess) defined in the media type spec. You do not talk about the AtomPub service X of organization Y. All you need to know is that it is 'a feed server' and then you say Accept: application/atom+xml, application/rss+xml and off you go. There is no need to talk about *that* service any further. The description of the service *is* in the media type. There are *no* (that is: none whatsoever) service-specific descriptions in RESTful systems.
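The negotiation described above can be sketched in a few lines of client-side code: announce the feed formats you understand via Accept, then route the response purely on the media type the server declares. The handler names here are hypothetical:

```python
# A sketch of the client-side negotiation described above: the client sends
# an Accept header listing the feed formats it supports, then dispatches on
# the Content-Type the server actually returned. Handler names are
# placeholders for illustration.

ACCEPT_HEADER = "application/atom+xml, application/rss+xml"

HANDLERS = {
    "application/atom+xml": "parse_atom",
    "application/rss+xml": "parse_rss",
}

def pick_handler(content_type):
    """Select a parser from the response's Content-Type alone."""
    # Strip parameters such as "; charset=utf-8" and normalize case.
    media_type = content_type.split(";")[0].strip().lower()
    if media_type not in HANDLERS:
        raise ValueError("unsupported media type: " + media_type)
    return HANDLERS[media_type]
```

No knowledge of the particular service is needed; any 'feed server' that speaks one of the two types can be consumed this way.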
> Otherwise I as a developer won't
> know if the information of a given service is of interest to me. This
> reasoning between developers (human beings) is at a conceptual level.
> > Furthermore, software is not only meant for computers; it's also meant
> for humans to read and understand and reason about.
Right - but this is intent ("I want to buy a book, therefore I direct my browser to http://amazon.de and not http://weather.info"). No architectural style will help to ensure that Amazon is still selling books tomorrow.
>
> Maybe the term "conceptual type" was unfortunate. My point in the
> discussion was that it may be of interest to talk about the concepts
> and information a service is meant to provide in order to then be able
> to reason about what media types to use or invent for a given service.
Well, 'entities' such as feeds, feed-entries, images, products, orders, contact-info (vcard), events (icalendar) etc. surely are part of designing media types.
>
> > Let me say it again: the problem that is being tried to solve does not exist.
>
> Is this your way of saying there is nothing to discuss?
Yes :-) The whole strict point of 'specific media types are a bad idea' is simply confusing people trying to understand REST. Maybe the discussion is useful after all, though.
> Or are you
> saying there is no problem in deciding whether to define new media
> types or not? My understanding of the discussion was that we were
> discussing heuristics for inventing (or not inventing) new media
> types.
The thing is that it is actually pretty clear that generic media types + embedded specific stuff + a new means for negotiating that stuff is silly, because the mechanism for negotiating such stuff is the media type identifier in the first place. It is the mechanism built into HTTP for that purpose.
(Not to question the usefulness of standard general link relations orthogonal to media types, of course)
Specific media types are what one should do, and there is no problem with them. Yet, some people make it sound as if there is a problem - and this, I find, is adding confusion for others who try to learn REST.
Jan
>
> /Paul
>
> --
> Paul Cohen
> www.seibostudios.se
> mobile: +46 730 787 035
> e-mail: paul.cohen@...
>
Craig: <snip> Oh, there's a problem to be solved all right (mapping from specified media type to the semantic meaning). But the "new media types are evil" meme just declares that problem to be an "exercise left for the reader" from the perspective of the REST architectural pattern. </snip> [with self-promotion hat on] Building Hypermedia APIs with HTML5 and Node "This book’s primary focus is on designing hypermedia APIs. That may seem a bit strange to some readers. There are many books on programming languages, data storage systems, web frameworks, etc. This is not one of those books. Instead, this book covers the nature of the messages passed between client and server, and how to improve the content and value of those messages. ... "This book is an attempt to improve the chances that new APIs added to the WWW will be easier to use and maintain over time, and that they will take their cue from those who were responsible for the discovery of the value of hypermedia linking; the codification of the HTTP protocol; and the implementation of HTML, Atom/AtomPub, and other native hypermedia formats that still drive the growth of the web today." http://my.safaribooksonline.com/book/-/9781449309497 mca http://amundsen.com/blog/ http://twitter.com@mamund http://mamund.com/foaf.rdf#me On Fri, Dec 30, 2011 at 18:00, Craig McClanahan <craigmcc@...> wrote: > <snip>
Really good book. On Fri, Dec 30, 2011 at 3:20 PM, mike amundsen <mamund@yahoo.com> wrote: > <snip>
Personally, if there was no concern over minting new types, I would opt for
the minting model. It allows a very simple model for clients and servers to
negotiate what they want without adding other complexities.
It seems, however, that there is a big concern over the minting of new types,
which is why this conversation is even happening.
On Fri, Dec 30, 2011 at 3:09 PM, Jan Algermissen <jan.algermissen@...
> wrote:
> <snip>
On Dec 31, 2011, at 12:22 AM, Glenn Block wrote:
> Personally, if there was no concern over minting new types, I would opt for the minting model. It allows a very simple model for clients and servers to negotiate what they want without adding other complexities.
>
> It seems, however, that there is a big concern over the minting of new types, which is why this conversation is even happening.
Fair enough - but what exactly is that concern?
My impression is that the concern originates in a lack of understanding of REST, and I try to help people understand in order to resolve it. That is always better, IMHO, than changing the architecture to match a lack of understanding (which is the very reason SOAP exists, for example :-).
Jan
> <snip>
Well I don't really have one :-)
I think it's proliferation of media types. I can see real value in not
continually reinventing the wheel and in minting at least some standard
domain-specific types, like one for procurement. I always go back to the
vcard example which, although it has no real hypermedia, is very useful as
a general way for systems to exchange information (if it happens to be
contacts).
On Fri, Dec 30, 2011 at 3:32 PM, Jan Algermissen <jan.algermissen@...
> wrote:
>
> <snip>
Jan, I am quite sure I understand what you are saying :-) You could even argue against me, saying that, well, Jørn, since you have decided to inspect the payload before acting on the response (checking the XML, right) - then you might as well drop the concept of a media type completely and always depend on data-inspection. Something that certainly won't work! I totally agree that switching on domain specific information / capabilities in the media type feels intuitive, makes life easier, and makes HTTP work with you instead of against you. But I am having a seriously hard time deciding on whether or not to do it - what do I gain and, more interesting, what do I lose? So, if we mint new media types - when and for what purpose should we do that? Let's talk about e-procurement again. In this domain we have orders and bills. What media types should we have? Should we have only one media type (like Webber's application/vnd.restbucks+xml in http://iansrobinson.com/2010/09/02/using-typed-links-to-forms/): application/e-procurement+xml or many media types: application/e-procurement.order+xml application/e-procurement.bill+xml application/e-procurement.person+xml As we extend the application we are probably going to need stuff like inventory, reservation, bank account, money transfer and so on. If each of these is going to get its own media type then we end up with the known "over-proliferation of media types" as opposed to sticking to a few well known media formats. But, to argue against my previous post - this is an e-procurement REST API - meaning "this is one specific instance of the REST architecture". It happens to work with HTTP which has many more media types, but for this specific REST instance (e-procurement) we actually don't have that many media types.
If this is a valid interpretation of REST as an architecture, and e-procurement as an instance of it, then the "over-proliferation of media types" is a non-problem - there won't be that many media types per API / instance of REST. Do I sound schizophrenic? Probably. I feel so. Like running around in circles. Need sleep ... /Jørn
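The "one type vs. many types" trade-off above can be sketched as server-side negotiation over the candidate e-procurement types. The type names come from the post; the matching logic is deliberately simplified (no q-values, no wildcard ranges such as "application/*"):

```python
# A deliberately simplified sketch of server-side content negotiation over
# the candidate e-procurement media types named in the post. A real server
# would also honour q-values and wildcard media ranges.

OFFERED = (
    "application/e-procurement.order+xml",
    "application/e-procurement.bill+xml",
    "application/e-procurement.person+xml",
)

def negotiate(accept_header):
    """Return the first offered type the client lists, else None (-> 406)."""
    for part in accept_header.split(","):
        # Drop parameters such as ";q=0.9" and normalize case.
        media_type = part.split(";")[0].strip().lower()
        if media_type in OFFERED:
            return media_type
    return None  # the server would respond 406 Not Acceptable
```

With many narrow types, this one HTTP mechanism does all the selection; with a single application/e-procurement+xml type, the same decision would have to move inside the payload.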
> > Fair enough - but what exactly is that concern?
> I think it's proliferation of media types.
See http://tech.groups.yahoo.com/group/rest-discuss/message/18276. If my
argumentation is valid then "proliferation of media types" is a non-problem.
That leaves the concerns of 1) lack of tooling for new media types, and 2)
re-inventing the wheel. Can we get rid of them? I think so, but now it's too
late for more mails :-)
/Jørn
----- Original Message -----
From: Glenn Block
To: Jan Algermissen
Cc: Paul Cohen ; Erik Mogensen ; Mike Kelly ; REST-Discuss Discussion Group
; craigmcc@...
Sent: Saturday, December 31, 2011 12:38 AM
Subject: Re: [rest-discuss] The "new media types are evil" meme
<snip>
1. What do you mean by "tooling"? I don't see that as necessarily a problem.
2. Re-inventing the wheel - that I do see potentially as a problem, but maybe
that is what needs to get addressed. If folks take a generic media type
like, say, "application/json" and then layer other domain-specific
semantics on top, that may reduce proliferation in IANA, but it doesn't
remove the reinventing of the wheel, as every person can just apply their
own specific semantics on top of JSON. You'll still end up with potentially
hundreds/thousands of procurement "types", for example.
So then how do you remove the duplication while still minting new media
types? It probably needs some type of standards body which reviews media
type submissions and prevents duplication by encouraging folks to use the
existing procurement media type rather than creating their own. Their
analysis would expose gaps in the existing one to help it evolve and meet
the broader needs. Now of course IANA does exist, but I am not sure they
are/would take on that responsibility.
On Fri, Dec 30, 2011 at 4:20 PM, Jørn Wildt <jw@...> wrote:
> > Fair enough - but what exactly is that concern?
>> I think it's proliferation of media types.
>>
>
> See http://tech.groups.yahoo.com/group/rest-discuss/message/18276.
> If my argumentation is valid then "proliferation of media types" is a
> non-problem.
>
> That leaves the concerns of 1) lack of tooling for new media types, and 2)
> re-inventing the wheel. Can we get rid of them? I think so, but now it's too
> late for more mails :-)
>
>
> /Jørn
>
> ----- Original Message ----- From: Glenn Block
> To: Jan Algermissen
> Cc: Paul Cohen ; Erik Mogensen ; Mike Kelly ; REST-Discuss Discussion
> Group ; craigmcc@...
> Sent: Saturday, December 31, 2011 12:38 AM
> Subject: Re: [rest-discuss] The "new media types are evil" meme
>
>
>
>
> Well I don't really have one :-)
>
>
> I think it's proliferation of media types. I can see real value in not
> continually reinventing the wheel and at least minting some standard domain
> specific types, like for procurement. I always go back to the vcard example
> which although it has no real hypermedia, is very useful as a general way
> to have systems exchange information (if it happens to be contacts).
>
>
> On Fri, Dec 30, 2011 at 3:32 PM, Jan Algermissen <
> jan.algermissen@...> wrote:
>
>
> On Dec 31, 2011, at 12:22 AM, Glenn Block wrote:
>
> Personally, if there were no concern over minting new types, I would opt for
>> the minting model. It allows a very simple model for clients and servers to
>> negotiate what they want without adding other complexities.
>>
>> It seems like however there is a big concern over the minting of new
>> types which is why this conversation is even happening.
>>
>
>
> Fair enough - but what exactly is that concern?
>
>
> My impression is that the origin (of the concern) is a lack of
> understanding of REST and I try to help make people understand in order to
> solve the concern. That is always better IMHO than to change the
> architecture to match a lack of understanding (which is the very reason why
> SOAP exists, for example :-).
>
>
>
> Jan
>
>
>
>
>>
>>
>> On Fri, Dec 30, 2011 at 3:09 PM, Jan Algermissen <
>> jan.algermissen@...> wrote:
>>
>> On Dec 30, 2011, at 10:46 AM, Paul Cohen wrote:
>>
>> > On Thu, Dec 29, 2011 at 4:15 PM, Jan Algermissen
>> > <jan.algermissen@...> wrote:
>> > > On Dec 29, 2011, at 3:43 PM, Paul Cohen wrote:
>> > > > However my understanding of Media Type as defined in RFC 2046
>> > > > (http://tools.ietf.org/html/rfc2046)
>> is that it is to be used to
>> > > > identify media format types or representation format types, and not
>> > > > conceptual types.
>> > >
>> > > There simply is no notion of 'conceptual type' in REST.
>> >
>> > Errmh, no. I didn't say that either. I'm saying there are design and
>> > implementation aspects of system interfaces that are not covered by
>> > HTTP. At some level developers (human beings) need to communicate and
>> > reason about the software they write. Every web service provides
>> > information about something. Apart from the practical integration of a
>> > client with a server we need to be able to discuss the "something"
>> > rationale behind a given service.
>>
>> No. You discuss the rationale behind service 'kinds' and that is part of
>> setting up the media type. The kind 'feed server' is (implicitly I guess)
>> defined in the media type spec. You do not talk about the AtomPub service X
>> of organization Y. All you need to know is that it is 'a feed server' and
>> then you say Accept: application/atom+xml, application/rss+xml and off you
>> go. There is no need to talk about *that* service any further. The
>> description of the service *is* in the media type. There are *no* (that is:
>> none whatsoever) service-specific descriptions in RESTful systems.
>>
>>
>> > Otherwise I as a developer won't
>> > know if the information of a given service is of interest to me. This
>> > reasoning between developers (human beings) is at a conceptual level.
>> > Furthermore software is not only meant for computers, it's also meant
>> > for humans to read and understand and reason about.
>>
>> Right - but this is intent ("I want to buy a book, therefore I direct my
>> browser to http://amazon.de and not http://weather.info"). No
>> architectural style will help to ensure that Amazon is still selling books
>> tomorrow.
>>
>> >
>> > Maybe the term "conceptual type" was unfortunate. My point in the
>> > discussion was that it may be of interest to talk about the concepts
>> > and information a service is meant to provide in order to then be able
>> > to reason about what media types to use or invent for a given service.
>>
>> Well, 'entities' such as feeds, feed-entries, images, products, orders,
>> contact-info (vcard), events (icalendar) etc. surely are part of designing
>> media types.
>>
>> >
>> > > Let me say it again: the problem that is being tried to solve does
>> > > not exist.
>> >
>> > Is this your way of saying there is nothing to discuss?
>>
>> Yes :-) The whole strict point of 'specific media types are a bad idea'
>> is simply confusing people trying to understand REST. Maybe the discussion
>> is useful after all, though.
>>
>> > Or are you
>> > saying there is no problem in deciding whether to define new media
>> > types or not? My understanding of the discussion was that we were
>> > discussing heuristics for inventing (or not inventing) new media
>> > types.
>>
>> The thing is that it is actually pretty clear that generic media types +
>> embedded specific stuff + a new means for negotiating that stuff is silly,
>> because the mechanism for negotiating such stuff is the media type
>> identifier in the first place. It is the mechanism built into HTTP for that
>> purpose.
>> (Not to question the usefulness of standard general link relations
>> orthogonal to media types, of course)
>>
>> Specific media types are what one should do, and there is no problem with
>> them. Yet, some people make it sound as if there is a problem - and this I
>> find is adding confusion for others that try to learn REST.
>>
>> Jan
>>
>> >
>> > /Paul
>> >
>> > --
>> > Paul Cohen
>> > www.seibostudios.se
>> > mobile: +46 730 787 035
>> > e-mail: paul.cohen@seibostudios.se
>> >
>>
>>
>>
>>
>
>
>
>
>
Jørn,

On Dec 31, 2011, at 1:10 AM, Jørn Wildt wrote:

> Jan, I am quite sure I understand what you are saying :-) You could even
> argue against me,

No, definitely not - I admire the depth of understanding you attempt!

> saying that, well, Jorn, since you have decided to inspect
> the payload before acting on the response (checking the XML, right) - then
> you might as well drop the concept of a media type completely and always
> depend on data-inspection. Something that certainly won't work!
>
> I totally agree that switching on domain specific information / capabilities
> in the media type feels intuitive, makes life easier, and makes HTTP work
> with you instead of against you. But I am having a seriously hard time
> deciding on whether or not to do it - what do I gain and, more interesting,
> what do I lose?

My approach has always been to radically stick with pure REST (and HTTP as it is), even if I had a hard time seeing the reasons. I used that "REST is correct and sufficient" position to challenge me to work my way backwards and adjust my POV. So far, I have always found that my POV was wrong, and once I went through the necessary mind shift everything made sense. Hence, I never asked "what would I lose?" because I took the success of the Web through over a decade of evolution as proof enough. But then, it is a sensible thing to ask, of course.

> So, if we mint new media types - when and for what purpose should we do
> that?

Every time you need to enable communication that cannot be enabled without adding out-of-band knowledge to an existing media type. (With the caveat that some stuff can really usefully be achieved with a bunch of link relations and/or profiling mechanisms - Mike Amundsen being the expert on this, IMHO.)

Procurement is in my opinion clearly something that merits minting a new media type (or a bunch of types; see below). RESTifying ITIL, too.
Financial reporting, news management / publishing, controlling, project management and stuff like that (large business domains) IMHO make good candidates for media types.

These types need not be global (registered with IANA) if you use them inside an organization. It is not likely that there will ever be a common model for such things suitable for all organizations. That is no problem though; you get the benefits of REST anyhow, even if you install only-your-enterprise-global media type registries (== owners).

Likewise, if you expose services to business partners and customers, it will be better to give them media types you own than to use RPC APIs (which the typical HTML-described, so-called Web APIs essentially are). And maybe your type is so useful or your user base so large (Google, Amazon - are you listening?) that you can establish a de-facto standard.

> Lets talk about e-procurement again. In this domain we have orders and
> bills. What media types should we have?
>
> Should we have only one media type (like Webber's
> application/vnd.restbucks+xml in
> http://iansrobinson.com/2010/09/02/using-typed-links-to-forms/):
>
> application/e-procurement+xml
>
> or many media types:
>
> application/e-procurement.order+xml
> application/e-procurement.bill+xml
> application/e-procurement.person+xml

Good question. I favor larger types, comprising a set of 'business documents', because they allow describing the 'domain' in a single, connected document. So I'd have application/procurement, or maybe split it into sub-domains like sourcing, ordering, billing, fulfillment, transport, to allow for the implementation of clients for subdomains without having to implement all of procurement. E.g. there would be great value if all online shops used some application/sourcing media type for presenting lists of articles and prices and allowing client-driven requests for quotations etc.
If one had to implement all the stuff related to transport initiation, we'd never see any adoption whatsoever, let alone a standard in the first place :-)

So... it depends :-)

> As we extend the application we are probably going to need stuff like
> inventory, reservation, bank account, money transfer and so on. If each of
> these are going to get their own media type then we end up with the known
> "over proliferation of media types"

Is that so? Why would it be an "over-proliferation" to have a couple of dozen types for a domain that covers 50% (80%? 90%?) of IT-enabled interactions? Why let the perceived fact of this being an "over-proliferation" drive a design decision? (And why is it less of a problem to have this over-proliferation of languages elsewhere? - You cannot make it go away anyhow!)

> as opposed to sticking to a few well
> known media formats.
>
> But, to argue against my previous post - this is an e-procurement REST API -
> meaning "this is one specific instance of the REST architecture".

HTTP (the Web) is an instance of REST. What you are talking about is a set of types of data elements - enabling a set of possible applications.

> It happens
> to work with HTTP which has many more media types, but for this specific
> REST instance (e-procurement) we actually don't have that many media types.

Wrt terminology: e-procurement refers to a set of perceived/intended applications, not an instance of REST.

> If this is a valid interpretation of REST as an architecture,

REST is an architectural style, not an architecture. HTTP (the Web) is an architecture.

> and
> e-procurement as an instance of it, then the "over proliferation of media
> types" is a non-problem - there won't be that many media types per API /
> instance of REST.
>
> Do I sound schizophrenic?

No, you sound like someone who is really trying to apply REST in a context that is not yet covered by existing media types. You are definitely asking the right questions.

> Probably. I feel so.
> Like running around in
> circles. Need sleep ...

Jan

> /Jørn
On Dec 31, 2011, at 1:33 AM, Glenn Block wrote:

> It probably needs some type of standards body which reviews media type submissions and prevents duplication by encouraging folks to use the existing procurement media type rather than creating their own. Their analysis would expose gaps in the existing one to help it evolve and meet the broader needs

http://tools.ietf.org/html/rfc4288#section-5.4

Jan
Duh! It is the examples that are wrong. Media types should not describe objects like "sales order" or "bill". They should be more general, like "e-procurement". The fact that it's a sales order can be derived from the rel-type (as argued earlier on).

So my previous little step-by-step list becomes:

1) The client follows a link-rel "this-is-the-sales-order" and assumes it will get, well, a sales order - in some yet unknown format.

2) The client does not need to specify "I want a sales order" in the accept header (it does so by the URL it has selected).

3) The client must specify its capabilities (or the context in which it will process the sales order). So it sends Accept: application/e-procurement+xml (not application/salesorder+xml).

4) The server replies with a sales order and content-type application/e-procurement+xml.

5) The client knows that a sales order in the context of application/e-procurement+xml should be decoded in a certain way and does so.

Repeat steps 1-5 with "sales order" replaced with "bill/quote/reservation/etc" - but do not change the media type.

Better now?

/Jørn

----- Original Message -----
From: "Jørn Wildt" <jw@...>
To: "Jan Algermissen" <jan.algermissen@...>
Cc: "REST Discuss" <rest-discuss@yahoogroups.com>
Sent: Saturday, December 31, 2011 1:10 AM
Subject: Re: [rest-discuss] Re: The "new media types are evil" meme

> Jan, I am quite sure I understand what you are saying :-) You could even
> argue against me, saying that, well, Jorn, since you have decided to
> inspect
> the payload before acting on the response (checking the XML, right) - then
> you might as well drop the concept of a media type completely and always
> depend on data-inspection. Something that certainly won't work!
>
> I totally agree that switching on domain specific information /
> capabilities
> in the media type feels intuitive, makes life easier, and makes HTTP work
> with you instead of against you.
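[Editor's sketch] The five steps above can be sketched as code. A minimal, hypothetical Python client - the link relation name, URL, fake server, and XML payload are all illustrative assumptions, not part of any spec:

```python
# Hypothetical sketch of steps 1-5: pick the URL via @rel, negotiate only
# the broad media type (application/e-procurement+xml), and let the rel -
# not the media type - tell the client it is looking at a sales order.

PROCUREMENT = "application/e-procurement+xml"

def fake_server(url, accept):
    """Stand-in for an HTTP GET; a real client would use an HTTP library."""
    if PROCUREMENT not in accept:
        return 406, None, None  # Not Acceptable (step 4 may fail!)
    body = '<order id="10"><line-item sku="A1" qty="2"/></order>'
    return 200, PROCUREMENT, body

def follow(links, rel, accept=PROCUREMENT):
    """Steps 1+2: select the URL by @rel; step 3: send the Accept header."""
    url = links[rel]
    status, ctype, body = fake_server(url, accept)
    if status != 200 or ctype != PROCUREMENT:  # step 4: check the reply
        raise ValueError("cannot process %s (%s)" % (ctype, status))
    return body  # step 5: decode according to the media type spec

links = {"this-is-the-sales-order": "/orders/10"}
print(follow(links, "this-is-the-sales-order"))
```

Note the Accept header never changes when the rel changes to "bill" or "quote" - only the `links` entry the client follows.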
> But I am having a seriously hard time
> deciding on whether or not to do it - what do I gain and, more
> interesting,
> what do I lose?
>
> So, if we mint new media types - when and for what purpose should we do
> that? Lets talk about e-procurement again. In this domain we have orders
> and
> bills. What media types should we have?
>
> Should we have only one media type (like Webber's
> application/vnd.restbucks+xml in
> http://iansrobinson.com/2010/09/02/using-typed-links-to-forms/):
>
> application/e-procurement+xml
>
> or many media types:
>
> application/e-procurement.order+xml
> application/e-procurement.bill+xml
> application/e-procurement.person+xml
>
> As we extend the application we are probably going to need stuff like
> inventory, reservation, bank account, money transfer and so on. If each of
> these are going to get their own media type then we end up with the known
> "over proliferation of media types" as opposed to sticking to a few well
> known media formats.
>
> But, to argue against my previous post - this is an e-procurement REST
> API -
> meaning "this is one specific instance of the REST architecture". It
> happens
> to work with HTTP which has many more media types, but for this specific
> REST instance (e-procurement) we actually don't have that many media
> types.
>
> If this is a valid interpretation of REST as an architecture, and
> e-procurement as an instance of it, then the "over proliferation of media
> types" is a non-problem - there won't be that many media types per API /
> instance of REST.
>
> Do I sound schizophrenic? Probably. I feel so. Like running around in
> circles. Need sleep ...
>
> /Jørn
On Dec 31, 2011, at 2:08 AM, Jørn Wildt wrote:

> Duh! It is the examples that are wrong. Media types should not describe objects like "sales order" or "bill". They should be more general, like "e-procurement". The fact that it's a sales order can be derived from the rel-type (as argued earlier on).
>
> So my previous little step-by-step list becomes:
>
> 1) The client follows a link-rel "this-is-the-sales-order" and assumes it will get, well, a sales order - in some yet unknown format.

yes

> 2) The client does not need to specify "I want a sales order" in the accept header (it does so by the URL it has selected).

yes - in the sense that it knows that /foo/bar is the URI of the resource it wants to send the request to. The response is self-describing, but the client (of course) has a context that drives how it deals with the response.

> 3) The client must specify its capabilities (or the context in which it will process the sales order). So it sends Accept: application/e-procurement+xml (not application/salesorder+xml).

Yes (except that it does not specify its context. The context is the client's business and irrelevant to HTTP). The core developer question being: What do I put in the Accept header, and why exactly? (See the two atom-protocol links I sent previously, where my driver was sth like "Why exactly (on what grounds) does a browser say Accept: image/* when following an HTML <img src=""/> element, or why does an AtomPub client say Accept: application/atom+xml when following a <collection href=""/> element?")

> 4) The server replies with a sales order and content-type application/e-procurement+xml.

yes (and also: maybe it doesn't (aka 406 Not Acceptable) and you have to deal with that in a meaningful way, too - remembering that the body of the 406 response also constitutes a useful application state.
And IMHO there is a lot to leverage in M2M systems wrt such error response bodies.)

> 5) The client knows that a sales order in the context of application/e-procurement+xml should be decoded in a certain way and does so.

yes - it knows in what context it is and what it wants to do with the response. Maybe it only wants to index the response to build up a search index, maybe it wants to spell-check the thing, maybe it wants to build a report, maybe it wants to check inventory. It depends on its own position in its own application.

> Repeat step 1-5 with "sales order" replaced with "bill/quote/reservation/etc" - but do not change the media type.

yes - though the same would be true if you indeed chose to have application/order, application/invoice, ... But it certainly helps to think in terms of larger media types because it de-emphasizes the 'entities' in the thinking.

> Better now?

Sounds good.

Jan

> /Jørn
>
> ----- Original Message -----
> From: "Jørn Wildt" <jw@...>
> To: "Jan Algermissen" <jan.algermissen@...>
> Cc: "REST Discuss" <rest-discuss@yahoogroups.com>
> Sent: Saturday, December 31, 2011 1:10 AM
> Subject: Re: [rest-discuss] Re: The "new media types are evil" meme
>
>> Jan, I am quite sure I understand what you are saying :-) You could even
>> argue against me, saying that, well, Jorn, since you have decided to inspect
>> the payload before acting on the response (checking the XML, right) - then
>> you might as well drop the concept of a media type completely and always
>> depend on data-inspection. Something that certainly won't work!
>>
>> I totally agree that switching on domain specific information / capabilities
>> in the media type feels intuitive, makes life easier, and makes HTTP work
>> with you instead of against you. But I am having a seriously hard time
>> deciding on whether or not to do it - what do I gain and, more interesting,
>> what do I lose?
>>
>> So, if we mint new media types - when and for what purpose should we do
>> that?
>> Lets talk about e-procurement again. In this domain we have orders and
>> bills. What media types should we have?
>>
>> Should we have only one media type (like Webber's
>> application/vnd.restbucks+xml in
>> http://iansrobinson.com/2010/09/02/using-typed-links-to-forms/):
>>
>> application/e-procurement+xml
>>
>> or many media types:
>>
>> application/e-procurement.order+xml
>> application/e-procurement.bill+xml
>> application/e-procurement.person+xml
>>
>> As we extend the application we are probably going to need stuff like
>> inventory, reservation, bank account, money transfer and so on. If each of
>> these are going to get their own media type then we end up with the known
>> "over proliferation of media types" as opposed to sticking to a few well
>> known media formats.
>>
>> But, to argue against my previous post - this is an e-procurement REST API -
>> meaning "this is one specific instance of the REST architecture". It happens
>> to work with HTTP which has many more media types, but for this specific
>> REST instance (e-procurement) we actually don't have that many media types.
>>
>> If this is a valid interpretation of REST as an architecture, and
>> e-procurement as an instance of it, then the "over proliferation of media
>> types" is a non-problem - there won't be that many media types per API /
>> instance of REST.
>>
>> Do I sound schizophrenic? Probably. I feel so. Like running around in
>> circles. Need sleep ...
>>
>> /Jørn
If it's actually doing the job I described, then why is everyone so worried?

On Fri, Dec 30, 2011 at 5:00 PM, Jan Algermissen <jan.algermissen@...> wrote:

> On Dec 31, 2011, at 1:33 AM, Glenn Block wrote:
>
> > It probably needs some type of standards body which reviews media type
> submissions and prevents duplication by encouraging folks to use the
> existing procurement media type rather than creating their own. Their
> analysis would expose gaps in the existing one to help it evolve and meet
> the broader needs
>
> http://tools.ietf.org/html/rfc4288#section-5.4
>
> Jan
Yes, media types should be broad, covering a domain - not specific, covering particular entities or objects.

Glenn

On Fri, Dec 30, 2011 at 5:08 PM, Jørn Wildt <jw@...> wrote:

> Duh! It is the examples that are wrong. Media types should not describe
> objects like "sales order" or "bill". They should be more general like
> "e-procurement". The fact that it's a sales order can be derived from the
> rel-type (as argued earlier on).
>
> So my previous little step-by-step list becomes:
>
> 1) The client follows a link-rel "this-is-the-sales-order" and assumes it
> will get, well, a sales order - in some yet unknown format.
>
> 2) The client does not need to specify "I want a sales order" in the accept
> header (it does so by the URL it has selected).
>
> 3) The client must specify its capabilities (or the context in which it will
> process the sales order). So it sends Accept: application/e-procurement+xml
> (not application/salesorder+xml).
>
> 4) The server replies with a sales order and content-type
> application/e-procurement+xml.
>
> 5) The client knows that a sales order in the context of
> application/e-procurement+xml should be decoded in a certain way and does
> so.
>
> Repeat step 1-5 with "sales order" replaced with
> "bill/quote/reservation/etc" - but do not change the media type.
>
> Better now?
>
> /Jørn
>
> ----- Original Message -----
> From: "Jørn Wildt" <jw@fjeldgruppen.dk>
> To: "Jan Algermissen" <jan.algermissen@...>
> Cc: "REST Discuss" <rest-discuss@yahoogroups.com>
> Sent: Saturday, December 31, 2011 1:10 AM
> Subject: Re: [rest-discuss] Re: The "new media types are evil" meme
>
> > Jan, I am quite sure I understand what you are saying :-) You could even
> > argue against me, saying that, well, Jorn, since you have decided to
> > inspect
> > the payload before acting on the response (checking the XML, right) - then
> > you might as well drop the concept of a media type completely and always
> > depend on data-inspection.
> > Something that certainly won't work!
> >
> > I totally agree that switching on domain specific information /
> > capabilities
> > in the media type feels intuitive, makes life easier, and makes HTTP work
> > with you instead of against you. But I am having a seriously hard time
> > deciding on whether or not to do it - what do I gain and, more
> > interesting,
> > what do I lose?
> >
> > So, if we mint new media types - when and for what purpose should we do
> > that? Lets talk about e-procurement again. In this domain we have orders
> > and
> > bills. What media types should we have?
> >
> > Should we have only one media type (like Webber's
> > application/vnd.restbucks+xml in
> > http://iansrobinson.com/2010/09/02/using-typed-links-to-forms/):
> >
> > application/e-procurement+xml
> >
> > or many media types:
> >
> > application/e-procurement.order+xml
> > application/e-procurement.bill+xml
> > application/e-procurement.person+xml
> >
> > As we extend the application we are probably going to need stuff like
> > inventory, reservation, bank account, money transfer and so on. If each of
> > these are going to get their own media type then we end up with the known
> > "over proliferation of media types" as opposed to sticking to a few well
> > known media formats.
> >
> > But, to argue against my previous post - this is an e-procurement REST
> > API -
> > meaning "this is one specific instance of the REST architecture". It
> > happens
> > to work with HTTP which has many more media types, but for this specific
> > REST instance (e-procurement) we actually don't have that many media
> > types.
> >
> > If this is a valid interpretation of REST as an architecture, and
> > e-procurement as an instance of it, then the "over proliferation of media
> > types" is a non-problem - there won't be that many media types per API /
> > instance of REST.
> >
> > Do I sound schizophrenic? Probably. I feel so. Like running around in
> > circles. Need sleep ...
> >
> > /Jørn
On Dec 31, 2011, at 1:20 AM, Jørn Wildt wrote:
> 1) lack of tooling for new media types
You can re-use existing syntaxes (e.g. HTML) and their tooling and define new media types around them (defining a particular subset of the syntax). E.g. you can define that your order looks like this:
<html>
<body>
<div class="order">
<ul class="line-items">
<li>...</li>
</ul>
</div>
</body>
</html>
and then do:
GET /orders/10
Accept: application/procurement
200 OK
Content-Type: application/procurement
<html>
<body>
<div class="order">
<ul class="line-items">
<li>...</li>
</ul>
</div>
</body>
</html>
or (!)
GET /orders/10
Accept: text/html
200 OK
Content-Type: text/html
<html>
<body>
<div class="order">
<ul class="line-items">
<li>...</li>
</ul>
</div>
</body>
</html>
IOW, minting new media types (which are just names for a set of compatible syntaxes and associated processing rules) does not prevent you from using old syntaxes.
Jan
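[Editor's sketch] Jan's exchange above can be sketched end to end. A minimal, hypothetical Python handler (the name serve_order and the fallback logic are illustrative assumptions) that serves the same HTML syntax under either media type name, depending on the Accept header:

```python
# Hypothetical sketch of Jan's point: one HTML-based syntax, two media type
# names. The client's Accept header decides which name labels the response.

ORDER_HTML = """<html>
<body>
<div class="order">
<ul class="line-items">
<li>...</li>
</ul>
</div>
</body>
</html>"""

def serve_order(accept):
    """Return (status, content_type, body) for GET /orders/10."""
    # Offer the specific type first; fall back to plain HTML.
    for offered in ("application/procurement", "text/html"):
        if offered in accept or "*/*" in accept:
            return 200, offered, ORDER_HTML
    return 406, None, None  # neither name is acceptable

print(serve_order("application/procurement")[1])  # application/procurement
print(serve_order("text/html")[1])                # text/html
```

Either way the body is identical; only the label (and thus the processing rules the client may apply) differs.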
Exactly. Even if there is growth in media types, the likelihood is they will use a limited set of wire formats like xml/xhtml and json, or csv (smile Mike), which existing parsers will support.

On Fri, Dec 30, 2011 at 5:51 PM, Jan Algermissen <jan.algermissen@...> wrote:

> On Dec 31, 2011, at 1:20 AM, Jørn Wildt wrote:
>
> > 1) lack of tooling for new media types
>
> You can re-use existing syntaxes (e.g. HTML) and their tooling and define
> new media types around them (defining a particular subset of the syntax).
> E.g. you can define that your order looks like this:
>
> <html>
> <body>
> <div class="order">
> <ul class="line-items">
> <li>...</li>
> </ul>
> </div>
> </body>
> </html>
>
> and then do:
>
> GET /orders/10
> Accept: application/procurement
>
> 200 OK
> Content-Type: application/procurement
>
> <html>
> <body>
> <div class="order">
> <ul class="line-items">
> <li>...</li>
> </ul>
> </div>
> </body>
> </html>
>
> or (!)
>
> GET /orders/10
> Accept: text/html
>
> 200 OK
> Content-Type: text/html
>
> <html>
> <body>
> <div class="order">
> <ul class="line-items">
> <li>...</li>
> </ul>
> </div>
> </body>
> </html>
>
> IOW, minting new media types (which are just names for a set of compatible
> syntaxes and associated processing rules) does not prevent you from using
> old syntaxes.
>
> Jan
Okay, having settled this, I will go back to Eric's arguments (and the poor cats[1]) that I believe started the "new media types are evil" meme.

- Why should we mint a new media type when existing ones 1) can do the same, 2) have better visibility, and 3) do not require their consumers to learn a new media type?

The best example is using XHTML (application/xhtml+xml). The upside of this is:

1) It can do just about the same as most of the suggested XML derivations. It has hypermedia controls, and using RDFa you can embed any data in it.

2) Visibility: XHTML allows you to document your API using HTML exactly where it is used - in band - in the API - thus making it more visible.

3) It's a well-known media type. No need to learn a new media type.

4) You can do content/capabilities negotiation using the "profile" parameter of the media type. For instance "Accept: application/xhtml+xml;profile=e-procurement". See section 8 of http://www.ietf.org/rfc/rfc3236.txt. This should be equivalent to application/e-procurement+xml, albeit with XHTML instead of XML.

The downside is:

1) It cannot do everything - like, for instance, telling the client to use PUT/DELETE in forms or to use URL templates. So it *does* have a more restricted set of capabilities than a custom-made media type.

2) The client must look into the returned XHTML for something similar to the profile parameter (for instance the doctype). http://www.ietf.org/rfc/rfc3236.txt says of the profile parameter: "It is intended to be used only during content negotiation. It is not expected that it be used to deliver content". So it requires a bit of tunneling, as you call it.

3) Visibility: the API documentation may be visible, but the capabilities are not visible in the returned data (see above).

4) Well-known media type: yes, it is well known, but my client still needs to learn how to extract the M2M payload embedded in the HTML. So I don't gain that much from this.

Hopefully I didn't get the original arguments completely wrong.
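[Editor's sketch] The "profile" parameter negotiation in upside 4) can be sketched as follows; the parsing helper and function names are illustrative assumptions, not a proposed API:

```python
# Hypothetical sketch: negotiate via the RFC 3236 "profile" media type
# parameter instead of minting application/e-procurement+xml.

def parse_media_type(value):
    """Split 'type/subtype;k=v;...' into (type, {params})."""
    parts = [p.strip() for p in value.split(";")]
    params = dict(p.split("=", 1) for p in parts[1:] if "=" in p)
    return parts[0], params

def acceptable(accept, profile):
    """True if the Accept value asks for XHTML with the given profile."""
    mtype, params = parse_media_type(accept)
    return mtype == "application/xhtml+xml" and params.get("profile") == profile

print(acceptable("application/xhtml+xml;profile=e-procurement", "e-procurement"))
print(acceptable("application/xhtml+xml", "e-procurement"))
```

This keeps the negotiation inside one registered type - at the cost, as noted below, of the response body no longer carrying the profile information itself.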
So there is a lot that can be done with existing media types like XHTML (and RDF). Personally I like the benefits you get in terms of a self-documenting API with HTML directly embedded in it. On the flip side I really dislike the tunneling aspect of it. But if I want documentation directly in the API I could as well use link headers to embed links to the API documentation - though it would be a bit hidden. I have also had success with providing an XSLT link in the XML such that browsers will render the raw XML as nicely formatted HTML.

All in all, speaking in favor of new media types.

/Jørn

[1] http://tech.groups.yahoo.com/group/rest-discuss/message/18126 : "each time a dev spits out custom XML or JSON serializations of internal objects, a "new media type" is born (and i think someone kicks a cat, too)".
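[Editor's sketch] The "my client still needs to learn how to extract the M2M payload embedded in the HTML" downside can be made concrete. A minimal, hypothetical parser - the class names order/line-items come from Jan's earlier example; the extractor itself is an assumption, not a standard:

```python
# Hypothetical sketch: even with well-known XHTML, an M2M client still needs
# code that knows the document's conventions (here: class="line-items").
from html.parser import HTMLParser

class LineItemExtractor(HTMLParser):
    """Collects the text of <li> elements inside <ul class="line-items">."""
    def __init__(self):
        super().__init__()
        self.in_items = False
        self.in_li = False
        self.items = []

    def handle_starttag(self, tag, attrs):
        if tag == "ul" and dict(attrs).get("class") == "line-items":
            self.in_items = True
        elif tag == "li" and self.in_items:
            self.in_li = True
            self.items.append("")

    def handle_endtag(self, tag):
        if tag == "ul":
            self.in_items = False
        elif tag == "li":
            self.in_li = False

    def handle_data(self, data):
        if self.in_li:
            self.items[-1] += data

doc = """<html><body><div class="order">
<ul class="line-items"><li>2 x widget</li><li>1 x gadget</li></ul>
</div></body></html>"""

p = LineItemExtractor()
p.feed(doc)
print(p.items)  # ['2 x widget', '1 x gadget']
```

The parsing itself is cheap; the real cost is that these class-name conventions are exactly the out-of-band knowledge the thread is arguing about.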
On Dec 31, 2011, at 9:17 AM, Jørn Wildt wrote:

> So there is a lot that can be done with existing media types like XHTML (and RDF).

Huh? Back to square one or what? You cannot send an order in application/xhtml+xml without moving the negotiation issue to a next layer.

> Personally I like the benefits you get in terms of a self documenting API with HTML directly embedded in it.

What do you mean? The API is already documented in the HTTP spec.

> On the flip-side I really dislike the tunneling aspect of it.
>
> But if I want documentation directly in the API I could as well use link headers to embed links to the API documentation -

What is this documentation you are talking about? There is nothing to document about a certain server.

Jan

> it would though be a bit hidden. I have also had success with providing an XSLT link in the XML such that browsers will render the raw XML as nicely formatted HTML. All in all speaking in favor of new media types.
>
> /Jørn
>
> [1] http://tech.groups.yahoo.com/group/rest-discuss/message/18126 : "each time a dev spits out custom XML or JSON serializations of internal objects, a "new media type" is born (and i think someone kicks a cat, too)".
On Dec 31, 2011, at 2:53 AM, Glenn Block wrote:

> Exactly. Even if there is growth in media types the likelihood is they will use a limited set of wire formats like xml/xhtml and json, or csv (smile Mike) which existing parsers will support.

And even if they do not re-use existing schemas, the task of writing a serializer/deserializer is the least of the challenges you have when realizing a networked system ;-)

Jan

> On Fri, Dec 30, 2011 at 5:51 PM, Jan Algermissen <jan.algermissen@...> wrote:
>
> > On Dec 31, 2011, at 1:20 AM, Jørn Wildt wrote:
> >
> > > 1) lack of tooling for new media types
> >
> > You can re-use existing syntaxes (e.g. HTML) and their tooling and define new media types around them (defining a particular subset of the syntax). E.g. you can define that your order looks like this:
> >
> >   <html>
> >     <body>
> >       <div class="order">
> >         <ul class="line-items">
> >           <li>...</li>
> >         </ul>
> >       </div>
> >     </body>
> >   </html>
> >
> > and then do:
> >
> >   GET /orders/10
> >   Accept: application/procurement
> >
> >   200 Ok
> >   Content-Type: application/procurement
> >
> >   <html>
> >     <body>
> >       <div class="order">
> >         <ul class="line-items">
> >           <li>...</li>
> >         </ul>
> >       </div>
> >     </body>
> >   </html>
> >
> > or (!)
> >
> >   GET /orders/10
> >   Accept: text/html
> >
> >   200 Ok
> >   Content-Type: text/html
> >
> >   <html>
> >     <body>
> >       <div class="order">
> >         <ul class="line-items">
> >           <li>...</li>
> >         </ul>
> >       </div>
> >     </body>
> >   </html>
> >
> > IOW, minting new media types (which are just names for a set of compatible syntaxes and associated processing rules) does not prevent you from using old syntaxes.
> >
> > Jan
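[Ed.: The two exchanges quoted above can be condensed into a tiny server-side negotiation sketch: one HTML body, two possible media-type labels chosen from the Accept header. Illustrative Python only; "application/procurement" is the example name from the thread, not a registered media type, and the Accept matching is deliberately naive (no q-values).]

```python
# Sketch of the negotiation above: the same HTML syntax served as either
# the specific media type or generic text/html, depending on Accept.
# "application/procurement" is the thread's example name, not a
# registered type.
ORDER_HTML = """<html>
<body>
<div class="order">
  <ul class="line-items">
    <li>...</li>
  </ul>
</div>
</body>
</html>"""

def respond(accept_header):
    # Naive negotiation: honor the specific type if the client asks for
    # it, otherwise fall back to the generic one. Same body either way.
    if "application/procurement" in accept_header:
        return "application/procurement", ORDER_HTML
    return "text/html", ORDER_HTML

content_type, body = respond("application/procurement")
```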
> Huh? Back to square one or what?

Not really. I was trying to finish what this thread started with - summing up the arguments for and against new media types. It was an attempt at a summary, with a conclusion in favor of minting new media types. Sorry if that was not clear.

/Jørn

----- Original Message -----
From: Jan Algermissen
To: Jørn Wildt
Cc: REST Discuss
Sent: Saturday, December 31, 2011 9:44 AM
Subject: Re: [rest-discuss] Re: The "new media types are evil" meme

On Dec 31, 2011, at 9:17 AM, Jørn Wildt wrote:

> So there is a lot that can be done with existing media types like XHTML (and RDF).

Huh? Back to square one or what? You cannot send an order in application/xhtml+xml without moving the negotiation issue to a next layer.

> Personally I like the benefits you get in terms of a self documenting API with HTML directly embedded in it.

What do you mean? The API is already documented in the HTTP spec.

> On the flip-side I really dislike the tunneling aspect of it.
>
> But if I want documentation directly in the API I could as well use link headers to embed links to the API documentation -

What is this documentation you are talking about? There is nothing to document about a certain server.

Jan

> it would though be a bit hidden. I have also had success with providing an XSLT link in the XML such that browsers will render the raw XML as nicely formatted HTML. All in all speaking in favor of new media types.
>
> /Jørn
>
> [1] http://tech.groups.yahoo.com/group/rest-discuss/message/18126 : "each time a dev spits out custom XML or JSON serializations of internal objects, a "new media type" is born (and i think someone kicks a cat, too)".
On Fri, Dec 30, 2011 at 11:32 PM, Jan Algermissen <jan.algermissen@...> wrote:

> On Dec 31, 2011, at 12:22 AM, Glenn Block wrote:
>
> > Personally, if there was no concern over minting new types, I would opt for the minting model. It allows a very simple model for clients and servers to negotiate what they want without adding other complexities.
> >
> > It seems like, however, there is a big concern over the minting of new types, which is why this conversation is even happening.
>
> Fair enough - but what exactly is that concern?
>
> My impression is that the origin (of the concern) is a lack of understanding of REST, and I try to help make people understand in order to solve the concern. That is always better IMHO than to change the architecture to match a lack of understanding (which is the very reason why SOAP exists, for example :-).

This discussion was meant to be about the *design considerations* related to media type strategy, not about whether those strategies count as "Doing REST". They are all valid approaches from a REST point of view, and unless you can find some quote from the dissertation that says otherwise, please stop asserting otherwise - it detracts from the actual conversation we should be having.

You haven't responded to a few of my posts; could you please respond to them, as it should help us establish exactly where you are coming from and prevent us from going round in circles:

http://tech.dir.groups.yahoo.com/group/rest-discuss/message/18236
http://tech.dir.groups.yahoo.com/group/rest-discuss/message/18255
http://tech.dir.groups.yahoo.com/group/rest-discuss/message/18249

Cheers,
Mike