The events coming in from the webhook interface do not seem to have the consistency one would like. Some of it is a bit comical in its own way, with events taking place that theoretically couldn't yet have taken place…
Here's a very simple example of a scheduled run with four zones in it. I've stripped away the date portion of the timestamps just to make it a bit easier to view:
| timestamp | event type | startTime | endTime | zone |
|---|---|---|---|---|
| 18:00:00Z | SCHEDULE_STARTED_EVENT | 18:00:02.169Z | 18:02:05.169Z | |
| 18:00:01Z | DEVICE_ZONE_RUN_STARTED_EVENT | 18:00:01.216Z | 18:00:03.216Z | zone 1 |
| 18:00:03Z | DEVICE_ZONE_RUN_STARTED_EVENT | 18:00:03.309Z | 18:00:25.309Z | zone 2 |
| 18:00:13Z | DEVICE_ZONE_RUN_COMPLETED_EVENT | 18:00:01.808Z | 18:00:03.808Z | zone 1 |
| 18:00:16Z | DEVICE_ZONE_RUN_STARTED_EVENT | 18:00:14.907Z | 18:01:00.907Z | zone 3 |
| 18:00:25Z | DEVICE_ZONE_RUN_COMPLETED_EVENT | 18:00:04.405Z | 18:00:26.405Z | zone 2 |
| 18:00:51Z | DEVICE_ZONE_RUN_STARTED_EVENT | 18:00:51.712Z | 18:02:04.712Z | zone 4 |
| 18:01:01Z | DEVICE_ZONE_RUN_COMPLETED_EVENT | 18:00:16.210Z | 18:01:02.210Z | zone 3 |
| 18:02:04Z | DEVICE_ZONE_RUN_COMPLETED_EVENT | 18:00:52.224Z | 18:02:05.224Z | zone 4 |
| 18:02:05Z | SCHEDULE_COMPLETED_EVENT | 18:00:01.236Z | 18:02:04.236Z | |
There is no property-by-property documentation available for the various webhook events, so one is left attempting to reverse-engineer the intent of each individual who worked on the event set. It does come off as though more than one person did the work.
It may not seem hugely important, but it's a bit amusing that the start time of the first zone precedes the schedule starting. It's perhaps equally amusing to find that start events have endTime values on them.
The real question at this point is: what is the logic by which each of these timestamps (even the event timestamps) is generated and put into the data? What is the rationale behind how each start and end time is determined? Are the dateTime values wishful thinking, best guesses, or is there an actual methodology being employed? A schedule that runs sequentially really shouldn't behave this way, and while the event timestamps do follow a sequence, the rest leaves a lot of guesswork.
It would be good for there to be:

- a documented methodology for the event data;
- consistency at the property/field level for each event, as far as source and meaning;
- event sequences that make sense, as to why events fall in the order they do.
The timestamps you show are all Zulu time (that is what the terminating Z means). As in… UTC. Depending on your location and time of year, your local offset will be one of two values. Currently EDT on the east coast is 4 hours behind, with PDT (Pacific Daylight Time) on the west coast 7 hours behind Zulu/UTC.
That changes during winter, when it goes to 5 hours behind for EST (Eastern Standard Time) and 8 hours for PST.
For more info: Zulu is the military name for UTC. Most software systems use UTC because it does not suffer from daylight saving shifts twice a year (once a year, 1:30am exists twice in the same day; sorting those is hell) and is a good common ground for working across multiple timezones. Whatever integration you are working with, it may provide timezone translation functions.
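In Python, for example, the translation is a couple of standard-library calls. A minimal sketch (the date below is made up, since the table above strips it off):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A Zulu/UTC timestamp as the webhooks deliver it; the date portion is
# made up here since the table above strips it off
ts = "2024-06-17T18:00:01.216Z"

# Normalize the trailing 'Z' for older Pythons (3.11+ parses it directly)
dt_utc = datetime.fromisoformat(ts.replace("Z", "+00:00"))

# Translate to a local zone; zoneinfo handles the EDT/EST shift automatically
dt_local = dt_utc.astimezone(ZoneInfo("America/New_York"))
print(dt_local.isoformat())  # 2024-06-17T14:00:01.216000-04:00
```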
Oh and a wild guess on why SCHEDULE_STARTED_EVENT is AFTER the DEVICE_ZONE_RUN_STARTED_EVENT…
I would assume the DEVICE event is produced by the controller itself and sent to the cloud, with the cloud marking the schedule started based on it at a slightly later time due to network lag. That, or time-sync issues, where the controller's clock runs slightly differently from the cloud's. But I'd put my money on the former. In either case, my educated guess is that the DEVICE-prefixed events are generated and timestamped by the controller, while the others are generated and timestamped by a cloud service.
I've also noticed discontinuity in timestamp formats throughout the API. Some are pre-formatted into human-readable time and some are in Unix milliseconds with a timezone offset in milliseconds. I would rather see everything in Unix epoch for uniformity and let me (a developer) format the datestamp myself with struct_time.
To reformat a Zulu-time timestamp into the format I want requires a split, a replace, or even a regex. All of that could be avoided if it were in Unix time like the rest of the API.
Thankfully the API includes my timezone preference and the correct offset, allowing me to combine them to create my correct local time. Whoever works on the API: thank you for including the TZ offset, that definitely helps.
Here's a good example. The response header is in human-readable time, but not in a format that is easy to rearrange, and it's the only response that returns the time of the request.
Because the response is in GMT, there's no easy way to reformat it for a timezone offset, which is a bit frustrating. I would have to deconstruct the date output, slice it up, and reformat it: an arduous and unnecessary process when all the other timestamps in the main data response are in Unix time.
Here are some examples from data responses that are all in Unix time (there are many per zone in scheduleRules):
```
'startDate': 1718596800000
'lastWateredDate': 1719039601000
'createDate': 1718677323000
```
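A minimal sketch in Python of joining one of those epoch values with the offset the API reports. The idea that the offset comes in milliseconds is from the description above, but the exact value and the field it comes from are assumptions here:

```python
from datetime import datetime, timezone, timedelta

last_watered_ms = 1719039601000      # 'lastWateredDate' from above
utc_offset_ms = -4 * 60 * 60 * 1000  # hypothetical offset field (-4h, EDT), in ms

# Build a fixed-offset timezone from the reported offset, then localize
tz = timezone(timedelta(milliseconds=utc_offset_ms))
local_dt = datetime.fromtimestamp(last_watered_ms / 1000, tz)
print(local_dt.strftime("%a %b %d %I:%M:%S %p %Y"))  # any format I choose
```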
I would much rather have the Date in Unix time, or another response header specifically for Unix time. That way I can display the time of the latest API polling request on the touchscreen display, in the struct_time format that I choose.
I mean even within the response headers there’s discontinuity of timestamp formatting between Date: and Limit Reset:.
Well, the Date header is following a standard set in RFC 2616 section 3.3, which calls for that exact format and GMT.
> HTTP applications have historically allowed three different formats for the representation of date/time stamps:
>
>     Sun, 06 Nov 1994 08:49:37 GMT  ; RFC 822, updated by RFC 1123
>     Sunday, 06-Nov-94 08:49:37 GMT ; RFC 850, obsoleted by RFC 1036
>     Sun Nov  6 08:49:37 1994       ; ANSI C's asctime() format
>
> The first format is preferred as an Internet standard and represents a fixed-length subset of that defined by RFC 1123 (an update to RFC 822). The second format is in common use, but is based on the obsolete RFC 850 date format and lacks a four-digit year. HTTP/1.1 clients and servers that parse the date value MUST accept all three formats (for compatibility with HTTP/1.0), though they MUST only generate the RFC 1123 format for representing HTTP-date values in header fields. See section 19.3 for further information.
>
> Note: Recipients of date values are encouraged to be robust in accepting date values that may have been sent by non-HTTP applications, as is sometimes the case when retrieving or posting messages via proxies/gateways to SMTP or NNTP.
>
> All HTTP date/time stamps MUST be represented in Greenwich Mean Time (GMT), without exception. For the purposes of HTTP, GMT is exactly equal to UTC (Coordinated Universal Time). This is indicated in the first two formats by the inclusion of "GMT" as the three-letter abbreviation for time zone, and MUST be assumed when reading the asctime format. HTTP-date is case sensitive and MUST NOT include additional LWS beyond that specifically included as SP in the grammar.
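The upside of it being a standard is that most standard libraries parse it directly, no slicing needed. A minimal Python sketch, using the RFC's own example value:

```python
from email.utils import parsedate_to_datetime
from zoneinfo import ZoneInfo

# The RFC's own example value, in the mandated RFC 1123 format
date_header = "Sun, 06 Nov 1994 08:49:37 GMT"

dt = parsedate_to_datetime(date_header)      # timezone-aware UTC datetime
local = dt.astimezone(ZoneInfo("America/New_York"))
unix_seconds = int(dt.timestamp())           # or straight to Unix time
print(local.isoformat(), unix_seconds)
```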
I suspect that if someone were to really sit down and try to document the entire API, they’d start coming across the inconsistencies and realize there’s work to be done to make this whole thing cohesive and consumable.
I suspect even further that if they were to try to write something meaningful against it on their own, they'd notice these little bits of cumbersomeness that really don't need to be there. There is something to be said for taking someone from another team, asking them to write something useful against your new API, and seeing the issues through a fresh set of eyes.
I know they had an intent to add the Hose Timer devices to the webhooks API. If they were to do so, my hope would be that they clean all this up in advance or at the same time, rather than just make it worse and ultimately unfixable.
It wasn't terribly hard to address the event-order issues; it just meant taking a two-pass approach to things. That's the kind of thing, however, that makes you realize this wasn't fully planned so much as some of it just happened. I'm sure there were more important things to do, but things like APIs tend to live on beyond what was anticipated. It takes real effort to design an API and an event model that retain a level of elegance and convey being well thought out.
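For what it's worth, here's a minimal sketch of the two-pass idea: buffer a run's events as they arrive, then re-sort on the embedded startTime once the run completes. The key names (eventType, startTime) are my guess at the JSON fields based on the table above:

```python
from datetime import datetime

def parse_iso(ts: str) -> datetime:
    # Normalize the trailing 'Z' for older Pythons (3.11+ parses it directly)
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

class RunAssembler:
    """Pass 1 buffers a run's events; pass 2 emits them in startTime order."""

    def __init__(self):
        self.buffer = []

    def on_event(self, event: dict):
        # Pass 1: hold events rather than acting on them in arrival order
        self.buffer.append(event)
        if event["eventType"] != "SCHEDULE_COMPLETED_EVENT":
            return None
        # Pass 2: the run is complete, so impose an order using the embedded
        # startTime, which reflects when the work began rather than when the
        # event was delivered
        ordered = sorted(self.buffer, key=lambda e: parse_iso(e["startTime"]))
        self.buffer = []
        return ordered
```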
The problem with fixing these is that they are breaking changes. They call for an API version bump, which is not impossible; it just adds to the effort, especially if the API was not designed as versioned from the get-go.
I am all for consistent patterns across everything, API, code, test frameworks, etc. Consistent is good. Consistent is the best.
I figured the Date header was done for a reason like that. They could still add a Unix date to the headers to make things easier. The rate-limiting headers are very useful; I just wish there were a Unix date in there too. They don't have to break the existing date format, just add a new Unix one to go with it.
I'm quite new here, so I don't know how long the API has needed work, or about the hose timer API, since I only have the 4-zone controller. It sounds like they need to revisit the API, especially after rolling out the new valve monitoring feature.