Geo-Aware Processing Flags: Toggling Heavy GIS Workloads Between Edge, Cloud, and PaaS

Daniel Mercer
2026-04-12
24 min read

A deep dive into geo-aware flags that route GIS workloads across edge, cloud, and PaaS based on latency, cost, and compliance.


Modern cloud GIS platforms are no longer just storage and map rendering layers. They are distributed compute systems that must decide, job by job, where spatial processing should run: on an edge node near the sensor, in a hyperscale cloud, or inside a managed PaaS environment. That decision has real consequences for latency, cost, regulatory exposure, and operational reliability. In practice, the best architecture is often not “edge vs cloud” as a religious choice, but a policy-driven routing model powered by geo-aware flags that can move workloads to the right processing tier in real time. If you are designing for satellite imagery, real-time geoprocessing, or compliance-sensitive spatial workflows, this guide shows how to build that system deliberately instead of guessing under pressure.

The opportunity is growing quickly. Cloud GIS adoption is expanding because organizations want scalable, real-time spatial analytics, lower operational overhead, and collaboration across teams, while edge capabilities are becoming more viable due to 5G, better local compute, and regulatory pressure for data locality. This mirrors the broader shift in modern systems toward policy-based routing and control planes, a theme also seen in our guide on regulatory readiness for CDS and the operational lessons from data governance in marketing. In GIS, the same pattern applies: route the job based on its data characteristics, not just where the team happens to be sitting.

1. Why Geo-Aware Flags Matter for GIS Architecture

From static deployment decisions to per-job routing

Traditional GIS architecture assumes a fixed home for compute. Batch raster processing goes to the cloud, telemetry preprocessing runs on-prem, and product teams live with whatever latency and bill arrive at the end of the month. That model breaks when your workloads vary widely by geography, freshness, and regulatory zone. A single architecture may need to serve wildfire imagery, ship-tracking analytics, roadside sensor fusion, and cross-border land-use mapping in the same day. Geo-aware flags let you treat processing location as a runtime decision rather than a permanent deployment constraint.

That runtime decision is especially useful when jobs differ in value density. A drone image tile covering a wildfire perimeter may need rapid edge triage to flag active hotspots before it is worth sending to the cloud for deeper model inference. A nationwide land-cover classification run, by contrast, may be cheaper and easier to scale in cloud GPUs. This is similar to how teams use adaptive routing in other systems, like the lifecycle control patterns described in fleet telematics forecasting or the routing choices discussed in local regulation scheduling. The best system is not rigid; it is decision-aware.

The hidden cost of placing every workload in one place

Sending all GIS jobs to the cloud looks simple until the first bill arrives or the first compliance review lands. Satellite imagery can be enormous, and repeatedly moving raw tiles across regions often costs more than the compute itself. Edge processing can reduce bandwidth and latency, but it also increases the complexity of fleet management, version control, and observability. PaaS can accelerate delivery by abstracting infrastructure, but managed platforms still need policy hooks so your team can decide when a job should stay local versus when it should burst to shared infrastructure.

The tradeoff is analogous to the “centralization vs specialization” problem seen in other technical workflows. Our guide on local AI processing for home security illustrates why local compute is often chosen when latency and privacy dominate, while the article on internal AI agent triage shows how governed automation is needed when sensitive data moves through multiple layers. GIS teams need the same discipline, but with geospatial constraints such as coordinate systems, tile pyramids, and jurisdictional boundaries.

Market forces are pushing GIS toward distributed compute

The cloud GIS market is growing rapidly, driven by demand for scalable spatial analytics and the ingestion of large volumes of satellite imagery, IoT streams, and crowd-sourced geo data. That growth is not just a vendor story; it reflects an architectural reality. Geospatial workloads are increasingly event-driven and real-time, which means the compute location should align with where the data appears and where the decision must be made. As cloud-native geospatial stacks mature, the winning teams will be the ones that can direct each workload to edge, cloud, or PaaS with measurable rules.

Pro Tip: Do not design “edge-first” or “cloud-first” GIS. Design “policy-first” GIS. The policy should choose the environment based on latency, cost, compliance, and data volume, then route the job to the best execution tier.

2. The Three Execution Tiers: Edge, Cloud, and PaaS

Edge processing: best for immediacy and data minimization

Edge processing shines when the job must happen near the source. Think roadside cameras detecting flooding, oil-and-gas sensors detecting leakage, or agricultural drones flagging anomalies before upload. In these cases, sending raw data to the cloud may be too slow, too expensive, or too risky from a privacy perspective. Edge nodes can run lightweight inference, tile clipping, coordinate transformation, and first-pass filtering so only valuable outputs travel upstream.

But edge is not free. It introduces distributed operations, patching complexity, hardware variability, and the need for resilient offline behavior. If your edge fleet spans thousands of devices, you must plan for staged rollout, observability, and rollback. The same release-management ideas that help software teams in compatibility testing matrices and GIS freelance workflows are useful here: define supported versions, define capability tiers, and avoid assuming every node is identical.

Cloud processing: best for elasticity, heavy analytics, and shared governance

Cloud execution is ideal for jobs that need burst capacity, large memory footprints, or expensive GPU/CPU scaling. Examples include orthomosaic generation, large-area raster analysis, multi-temporal change detection, and model retraining over archived satellite imagery. Cloud GIS also simplifies cross-team collaboration because data, logs, and outputs are centrally available. When a job is compute-heavy but not ultra-latency-sensitive, the cloud is usually the right default.

Still, cloud is not always the cheapest choice. If you are repeatedly moving raw data into a cloud region to process only a small subset, the transport and storage costs can dominate. And if the data must remain within a jurisdiction, cloud routing must be constrained by policy. That is why a cloud router should understand more than urgency; it should understand data class, payload size, and geography. Teams in regulated domains should read AI regulation guidance for developers and compliance checklist patterns to design guardrails before scaling too fast.

PaaS: best for fast delivery with opinionated controls

PaaS sits between raw infrastructure and fully custom cloud deployments. For GIS teams, this can mean managed geospatial APIs, managed event streaming, serverless geoprocessing functions, or hosted workflow engines. PaaS reduces operational burden and accelerates iteration, especially for standard tasks like geocoding, route optimization, or spatial joins at moderate scale. It is often the easiest way to standardize job submission while still allowing policy-driven placement behind the scenes.

The tradeoff is that PaaS often imposes boundaries. You may have less direct control over kernel tuning, ephemeral storage, custom drivers, or GPU scheduling. That is acceptable if your goal is product velocity and governance, but you should be explicit about what can and cannot be placed there. For teams evaluating managed systems versus custom stacks, the logic resembles the choices discussed in managed scaling strategies and governed AI visibility.

3. The Geo-Aware Flag Model: How to Route Work by Policy

What a geo-aware flag actually controls

A geo-aware flag is not simply a boolean like use_edge=true. In a mature system, it is a policy object that can encode processing location, fallback order, jurisdiction rules, priority, and job-level thresholds. For example, the flag can say: “If the source is within EU boundaries and the payload contains personal location data, process locally or in an EU region; if tile size exceeds 2 GB and the job is not latency critical, route to cloud batch; if sensor freshness is under 5 seconds, run edge inference first.” This makes the flag a control-plane input, not a presentation-layer toggle.
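That policy object can be sketched as data plus a small evaluation function. The class name, field names, and tier labels below are invented for illustration; the thresholds mirror the example rules above, and hard constraints are checked first:

```python
from dataclasses import dataclass, field

@dataclass
class GeoFlagPolicy:
    name: str
    residency_regions: list = field(default_factory=list)
    cloud_batch_min_bytes: int = 2 * 1024**3   # "tile size exceeds 2 GB"
    edge_freshness_max_s: float = 5.0          # "freshness is under 5 seconds"

    def route(self, source_region, has_personal_data, payload_bytes,
              latency_critical, freshness_s=None):
        # Residency rule first: personal location data sourced inside a
        # covered region must be processed locally or inside that region.
        if has_personal_data and source_region in self.residency_regions:
            return "local_or_eu_region"
        # Very fresh sensor data runs edge inference first.
        if freshness_s is not None and freshness_s < self.edge_freshness_max_s:
            return "edge_first"
        # Large, non-urgent payloads go to cloud batch.
        if payload_bytes > self.cloud_batch_min_bytes and not latency_critical:
            return "cloud_batch"
        return "regional_cloud"

policy = GeoFlagPolicy(name="eu_imagery",
                       residency_regions=["eu-west-1", "eu-central-1"])
print(policy.route("eu-west-1", True, 100, False))           # local_or_eu_region
print(policy.route("us-east-1", False, 3 * 1024**3, False))  # cloud_batch
```

The point of the structure is that the flag is data the control plane evaluates, not a branch buried in application code.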

That distinction matters because GIS jobs are often heterogeneous. A single pipeline can include preprocessing, detection, vectorization, enrichment, and export. Each stage may have a different optimal location. In practice, geo-aware flags should be job-scoped and stage-scoped. The route chosen for “ingest” may differ from the route chosen for “analysis.” This is the same kind of decomposition that makes product and content systems easier to manage, as shown in roadmap planning and digital asset thinking for documents.

Policy inputs: latency, cost, compliance, and data gravity

The routing engine should evaluate at least four inputs. First is latency: how quickly does the job need to finish, and how much data can tolerate a round trip? Second is cost: what does compute, egress, storage, and orchestration cost across edge, cloud, and PaaS? Third is compliance: does the data contain sensitive coordinates, critical infrastructure data, or regulated imagery that must remain in a specific region? Fourth is data gravity: is the dataset already local to a site, or would moving it create unnecessary overhead?

These inputs can be combined into a weighted score or a set of hard constraints. Hard constraints should always win. If law or contract says data must stay local, cost optimization cannot override it. If a sensor workflow requires sub-second response, cloud batch is disqualified no matter how cheap it is. For teams building rules engines, the governance mindset from regulated workflows and the operational caution in temporary regulatory changes can prevent expensive mistakes.
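Sticking to the rule that hard constraints always win, a routing engine can filter out non-compliant tiers first and only then score the survivors. A minimal sketch with invented tier and job fields:

```python
def choose_tier(candidates, job, weights):
    """Hard constraints filter first; soft weighted score second."""
    # Hard constraints: drop tiers the job may not legally or physically use.
    allowed = [t for t in candidates
               if t["region"] in job["allowed_regions"]
               and t["max_latency_s"] <= job["latency_budget_s"]]
    if not allowed:
        raise RuntimeError("no compliant tier available")
    # Soft preferences: lowest weighted cost + latency wins.
    def score(t):
        return (weights["cost"] * t["cost_per_job"]
                + weights["latency"] * t["max_latency_s"])
    return min(allowed, key=score)["name"]

tiers = [
    {"name": "edge", "region": "eu-west-1", "max_latency_s": 0.2, "cost_per_job": 0.05},
    {"name": "cloud_batch", "region": "us-east-1", "max_latency_s": 600.0, "cost_per_job": 0.01},
    {"name": "paas_eu", "region": "eu-west-1", "max_latency_s": 5.0, "cost_per_job": 0.03},
]
job = {"allowed_regions": ["eu-west-1"], "latency_budget_s": 1.0}
print(choose_tier(tiers, job, {"cost": 1.0, "latency": 0.1}))  # edge
```

Here cloud batch is the cheapest tier, but it never enters the scoring step because the job's residency and latency constraints disqualify it.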

A practical decision matrix for GIS job routing

| Job Type | Best Default Location | Why | Watch Outs | Flag Hint |
| --- | --- | --- | --- | --- |
| Real-time flood sensor alerting | Edge | Sub-second response and local resilience | Hardware drift, offline sync | route=edge |
| Satellite imagery tile classification | Cloud | GPU scale and large batch throughput | Egress cost, queue delays | route=cloud |
| Cross-border parcel analytics | PaaS / EU region | Managed control with residency constraints | Region lock-in | region=eu_only |
| Drone emergency triage | Edge first, cloud second | Immediate local filtering, then deep analysis | Duplicate pipelines | route=edge_then_cloud |
| Urban mobility heatmaps | Cloud | Aggregation and collaboration across teams | Latency not critical, but freshness matters | freshness=hourly |

This kind of matrix becomes much more useful when paired with operational telemetry. If you can measure queue depth, CPU saturation, egress volume, and job latency by region, the flag system can adapt as conditions change. The right routing decision depends on making location-specific differences visible.

4. Satellite Imagery: Where Geo-Aware Routing Pays Off Fastest

Preprocessing at the edge to reduce transfer volume

Satellite imagery pipelines often begin with massive raw scenes, but not every pixel deserves cloud transit. A smart edge step can perform cloud masking, AOI clipping, basic geo-referencing checks, and simple anomaly scoring before the dataset is uploaded. This can cut transfer costs dramatically when only a small portion of the scene is actionable. For example, an agriculture platform may only need tiles intersecting fields of interest, while the rest of the image is irrelevant for the current operation.
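The AOI-clipping step reduces, at its core, to a spatial filter: keep only tiles whose footprints touch a field of interest. A simplified sketch using bounding boxes (a real pipeline would use a spatial library and a proper CRS, but the routing logic is the same):

```python
def bbox_intersects(a, b):
    # a and b are (min_x, min_y, max_x, max_y) in the same CRS.
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def tiles_to_upload(tile_bboxes, fields_of_interest):
    """Return indices of tiles that touch at least one field AOI;
    everything else stays at the edge and never pays for transit."""
    return [i for i, t in enumerate(tile_bboxes)
            if any(bbox_intersects(t, f) for f in fields_of_interest)]

tiles = [(0, 0, 10, 10), (20, 20, 30, 30), (5, 5, 15, 15)]
fields = [(8, 8, 12, 12)]
print(tiles_to_upload(tiles, fields))  # [0, 2]
```

In this toy scene, one of three tiles is discarded before upload; on real satellite scenes the discarded fraction is often far larger.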

This pattern is especially powerful in remote areas where bandwidth is constrained. If a satellite downlink feeds a regional edge hub, the hub can compress, deduplicate, and prioritize imagery before forwarding it. That approach improves time to insight and makes the downstream cloud job smaller and more predictable. It is a similar optimization mindset to the one used in privacy-first local AI, where initial processing reduces unnecessary upstream data movement.

Heavy model inference in cloud or PaaS

After edge filtering, the cloud is often the right place for heavy inference. Deep segmentation, change detection, building footprint extraction, and land-cover classification may require GPUs or memory-intensive workloads that are difficult to maintain locally. PaaS can be attractive here if it provides managed model serving, queueing, and region control. A geo-aware flag can send low-risk jobs to a managed inference endpoint while keeping sensitive or unusually large jobs in a private cloud VPC.

A practical architecture is a two-stage pipeline: edge performs the first pass, then a policy engine routes the surviving workload to a cloud ML service or PaaS job runner. If the image contains regulated assets, the flag can force the job into a restricted region. If the image is non-sensitive and the queue is long, the flag can route to cheaper batch capacity. That flexibility is the operational difference between “using cloud GIS” and “operating a geospatial control plane.”

Example: wildfire response imagery

Consider a wildfire response system ingesting drone and satellite imagery every few minutes. Edge nodes near the incident command center run immediate hotspot detection and generate simplified alert polygons. Those polygons are pushed to cloud GIS for broader map visualization, historical comparison, and coordination with neighboring jurisdictions. If a later batch includes high-resolution thermal imagery, the flag can route it to a more expensive cloud GPU cluster because the deeper analysis is worth the cost. The key is that the routing decision follows mission priority, not a hardcoded deployment assumption.

Pro Tip: In satellite workflows, route by artifact, not by source. Raw scenes, clipped tiles, polygons, previews, and training data often belong in different execution tiers.

5. Real-Time Sensor Streams: Geo-Aware Flags for Live Geoprocessing

Why streaming GIS is different from batch GIS

Real-time geoprocessing changes the rules. A sensor stream from traffic intersections, utility meters, or environmental stations is less about one big job and more about a continuous decision loop. The system must ingest, normalize, detect anomalies, enrich location context, and sometimes alert immediately. Edge processing helps because it reduces round-trip delay and can keep operating when the network degrades. Cloud processing still matters for aggregation, trend analysis, and model retraining, but it should not be the only place where decision-making happens.

This architecture is especially useful when each sensor has different legal or operational constraints. For example, critical infrastructure telemetry may need local processing before any data leaves the site. Public air-quality streams may be safe to send directly to cloud analytics. Geo-aware flags let you apply different paths based on sensor category, location, and alert severity. The pattern resembles the controlled response workflows in cyber defense triage and the compliance-aware scheduling logic in business scheduling.

Latency routing for operational alerts

Latency routing should be explicit. If an alert must reach operators in less than one second, the processing chain cannot depend on remote batch jobs or slow cold starts. A geo-aware flag can route the stream through a lightweight edge function that validates coordinates, applies thresholds, and emits the alert. The cloud can then perform enrichment and storage after the critical decision has already been made. This ensures that operators do not wait for pretty dashboards when a fast warning is what matters.

For example, a flood sensor on a rural bridge can report rising water levels to a local edge gateway. The gateway checks the rate of change and immediately triggers a municipal alert if thresholds are crossed. A parallel cloud process accumulates the time series, runs seasonal models, and updates historical risk maps. This dual-path design is common in resilient systems: the fast local loop makes the critical decision, while the slower cloud loop enriches, stores, and learns.
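The edge-side check in that example can be sketched as a small rolling-window rate test. The rise threshold and window size here are illustrative, not calibrated values:

```python
from collections import deque

class FloodGate:
    """Edge gateway check: alert when water level rises too fast."""
    def __init__(self, rise_cm_per_min=5.0, window=5):
        self.rise_cm_per_min = rise_cm_per_min
        self.readings = deque(maxlen=window)  # (minute, level_cm)

    def observe(self, minute, level_cm):
        self.readings.append((minute, level_cm))
        if len(self.readings) < 2:
            return False
        t0, l0 = self.readings[0]
        t1, l1 = self.readings[-1]
        rate = (l1 - l0) / max(t1 - t0, 1e-9)  # cm per minute over the window
        return rate >= self.rise_cm_per_min    # True -> trigger local alert

gate = FloodGate()
print(gate.observe(0, 100))  # False (not enough data)
print(gate.observe(1, 103))  # False (3 cm/min)
print(gate.observe(2, 112))  # True  (6 cm/min over the window)
```

Everything else (the time series, seasonal models, risk maps) happens in the cloud after this local decision has already fired.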

Example: city sensor mesh and pollution alerts

Imagine a city-wide pollution monitoring mesh with hundreds of low-cost sensors. The edge gateway bins readings, removes obvious outliers, and detects localized spikes near traffic corridors. If readings stay normal, only hourly aggregates are sent to cloud GIS. If a spike crosses a health threshold, the geo-aware flag upgrades the path and sends raw high-frequency data plus a geofence to cloud PaaS for incident analysis. That means the system pays for expensive processing only when the value of the investigation justifies it.
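That escalation rule is easy to express as a function: cheap aggregate path by default, raw-data path with a geofence only when a health threshold is crossed. The threshold and geofence size below are placeholders:

```python
def escalate(sensor, reading_ugm3, health_threshold=150.0, fence_deg=0.01):
    """Cost-sensitive escalation for a single pollution reading."""
    if reading_ugm3 < health_threshold:
        # Normal conditions: only hourly aggregates leave the edge.
        return {"route": "edge_aggregate", "payload": "hourly_summary"}
    # Spike: upgrade the path and ship raw data plus a simple
    # bounding-box geofence to the managed incident-analysis tier.
    lon, lat = sensor["lon"], sensor["lat"]
    return {
        "route": "paas_incident",
        "payload": "raw_high_frequency",
        "geofence": (lon - fence_deg, lat - fence_deg,
                     lon + fence_deg, lat + fence_deg),
    }

sensor = {"id": "s-17", "lon": 2.35, "lat": 48.86}
print(escalate(sensor, 80.0)["route"])   # edge_aggregate
print(escalate(sensor, 210.0)["route"])  # paas_incident
```

The expensive route exists in the policy, but it is only paid for when a reading justifies it.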

This cost-sensitive escalation pattern is powerful because it prevents unnecessary cloud spend without sacrificing responsiveness. It is also easier to audit. Operators can later inspect why a job was routed to edge or cloud, which is essential when multiple departments share a geospatial platform. The auditability lesson is similar to what compliance teams face in regulatory readiness and what data teams need for trustworthy governance in AI visibility.

6. Cost Tradeoffs: Build a Routing Policy That Actually Saves Money

Model compute, egress, storage, and orchestration together

Most GIS cost mistakes happen because teams only compare compute prices. In reality, spatial workloads include egress, object storage, queue time, orchestration overhead, and repeated transformation costs. A job that appears cheap in cloud compute may become expensive when raw imagery is uploaded, normalized, re-tiled, and replicated across regions. Edge processing often reduces egress and storage, but may increase the cost of maintenance and rollout. PaaS can cut operations overhead but may be more expensive per unit of compute than self-managed instances.

To make geo-aware flags financially useful, define a routing formula that estimates end-to-end cost. At minimum, include expected data volume, expected output volume, compute duration, region-specific egress, and retry probability. Then compare the score against latency and compliance constraints. Teams that want to go deeper can borrow the cost-thinking approach from cost reduction without compromise and forecasting failure analysis, where incomplete models lead to bad decisions.
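A minimal version of such a routing formula might look like the sketch below. The rates and field names are illustrative placeholders, not real pricing:

```python
def expected_cost(tier, job):
    """End-to-end estimate: compute + egress + data movement + storage,
    inflated by expected retries. All rates are placeholders."""
    compute = tier["cpu_hour_usd"] * job["compute_hours"]
    egress = tier["egress_usd_per_gb"] * job["output_gb"]
    data_move = tier["transfer_usd_per_gb"] * job["input_gb"]
    storage = tier["storage_usd_per_gb_mo"] * job["output_gb"]
    one_run = compute + egress + data_move + storage
    # A retry repeats the run; weight by its expected probability.
    return one_run * (1 + job["retry_probability"])

cloud = {"cpu_hour_usd": 0.40, "egress_usd_per_gb": 0.09,
         "transfer_usd_per_gb": 0.02, "storage_usd_per_gb_mo": 0.023}
job = {"compute_hours": 2.0, "input_gb": 50, "output_gb": 5,
       "retry_probability": 0.1}
print(round(expected_cost(cloud, job), 4))
```

Notice that in this example the cost of moving 50 GB of input dwarfs the compute itself, which is exactly the pattern that makes edge preprocessing attractive.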

Use thresholds, not vague preferences

A routing policy is much easier to maintain when it uses thresholds. For example: if input size is under 200 MB and latency is under 2 seconds, use edge; if input size exceeds 2 GB or GPU inference is required, use cloud; if the job is standardized and region-bound, use PaaS; if jurisdiction prohibits cross-border transfer, use local or regional-only execution. These rules can be encoded in a config service and versioned like code. That way, business stakeholders can approve policy changes without modifying runtime code paths.
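Encoded as a versioned rule list, those thresholds might look like the sketch below, where rules are evaluated top to bottom and the first match wins (field names are invented for illustration):

```python
RULES_V3 = [  # versioned like code; first matching rule wins
    {"when": lambda j: j["cross_border_prohibited"], "route": "local_only"},
    {"when": lambda j: j["input_mb"] < 200 and j["latency_s"] < 2, "route": "edge"},
    {"when": lambda j: j["input_mb"] > 2048 or j["needs_gpu"], "route": "cloud"},
    {"when": lambda j: j["standardized"] and j["region_bound"], "route": "paas"},
]

def route(job, rules=RULES_V3, default="cloud"):
    for rule in rules:
        if rule["when"](job):
            return rule["route"]
    return default

job = {"cross_border_prohibited": False, "input_mb": 50, "latency_s": 1.0,
       "needs_gpu": False, "standardized": False, "region_bound": False}
print(route(job))  # edge
```

Because the jurisdiction rule sits first, no cost or latency preference can ever override it, which is the "hard constraints win" property made structural.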

Thresholds also make financial reviews easier. Finance teams can see why a job moved tiers and what impact it had. Ops teams can track whether edge usage is justified by reduced transfer fees. Product teams can understand when a feature toggle is spending too much on convenience. The same discipline is useful in other decision-heavy systems, such as cloud gaming tradeoffs and battery and power tradeoffs, where performance and economics must be balanced.

Watch for toggle debt in your routing layer

Geo-aware flags can create their own debt if they multiply without ownership. Every new region, sensor class, or imagery product line can spawn another rule, and suddenly no one knows which path a job will take. To avoid this, give each flag an owner, expiration criteria, and a review cadence. Separate permanent policy from temporary experiments. If a routing rule exists only to test an edge inference path, it should not become a forever flag.

That governance pattern is consistent with best practices in controlled rollout systems and is closely related to the content operations lessons in roadmap management and asset lifecycle management. The principle is simple: every flag should have a reason to exist, a measurable effect, and a planned retirement.

7. Regulatory and Data Residency Constraints

For many geospatial applications, processing location is constrained by law, contract, or policy. Government mapping data, critical infrastructure telemetry, medical-adjacent location trails, and sensitive border or defense imagery may need residency guarantees. Geo-aware flags are useful because they let you encode those constraints as routing rules rather than leaving them to developer memory. The goal is not merely to be fast; it is to be compliant by design.

When regulation is temporary or region-specific, flags are especially valuable. A policy may change for a limited period, require special handling in a specific country, or prohibit some classes of data movement. If your routing logic is centralized and versioned, you can update it quickly and preserve an audit trail. For a deeper model of how temporary rules affect workflows, see temporary regulatory change handling and compliance checklists for data systems.

Auditability and explainability matter to GIS leaders

Every routing decision should be explainable after the fact. If a job went to cloud instead of edge, the system should record whether it was because of a size threshold, lack of local capacity, a residency requirement, or a failover event. This is essential for security reviews, financial audits, and customer trust. It also helps during incident response, when operators need to know whether a job was rerouted because an edge node failed or because the policy intentionally changed.

Good audit trails do more than satisfy compliance. They help teams optimize. If you can see that 80% of jobs routed to cloud because the edge nodes were under-provisioned, you can invest in the right bottleneck instead of guessing. If you can see that a region-bound PaaS path consistently costs more but cuts latency in half, you can make an informed decision about whether the premium is worth it. This is the same logic that makes visibility so valuable in data governance.

Design for policy drift and regional exceptions

As geospatial systems expand, policy drift becomes inevitable. A new region launches, a new sensor category appears, or a customer negotiates special residency terms. Rather than hardcoding exceptions, maintain a policy registry with versioned rule sets and metadata about who approved them. That structure allows your GIS platform to remain flexible without becoming unmanageable. It also reduces the risk that one-off exceptions silently become system defaults.

Teams that ignore this eventually create brittle systems where only one engineer understands the routing logic. To prevent that, keep policy definitions close to the platform, but separate from service code. This is a disciplined operating model similar to the governance patterns in digital asset thinking and the operational controls in regulatory readiness.

8. Reference Architecture and Implementation Pattern

Use a central policy engine with distributed execution points

The cleanest pattern is a central policy engine paired with distributed execution points. The policy engine evaluates job metadata: source location, payload size, SLA, data classification, and required transformations. It returns a route decision such as edge, cloud, regional PaaS, or hybrid. Execution points in each environment consume the same job descriptor and emit standardized telemetry. That keeps the system consistent even when the actual compute layer differs.

In practice, you can implement this with a lightweight service that tags jobs before they are enqueued. The tag travels with the job through the pipeline and determines where subsequent stages run. This is especially useful in satellite workflows where preprocessing, inference, and archival may live in different tiers. It also maps well to event-driven systems, where the same message may trigger different handlers depending on geography. If you are designing a broader automation stack, the lessons from pipeline orchestration and scaling managed platforms are directly relevant.
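The tagging step itself can be tiny: attach the decision to the job descriptor before it is enqueued, so downstream stages read the tag instead of re-deciding. A sketch using an in-process queue as a stand-in for a real message broker:

```python
import json
import queue

def tag_job(job, decide):
    """Attach the routing decision to the descriptor; the tag then
    travels with the job through every pipeline stage."""
    tagged = dict(job)
    tagged["route"] = decide(job)
    return tagged

work_queue = queue.Queue()  # stand-in for a real broker
decide = lambda j: "edge" if j["size_mb"] < 200 else "cloud"

work_queue.put(json.dumps(tag_job({"id": "tile-42", "size_mb": 30}, decide)))
print(json.loads(work_queue.get())["route"])  # edge
```

Because the decision is serialized with the job, the executors in each tier stay dumb: they run what they are handed and emit telemetry, while policy lives in one place.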

Standardize job contracts and telemetry

Every geo-aware job should carry a contract with the same fields: job type, data class, source region, destination region, latency target, estimated size, and fallback path. Standardized contracts make it possible to route uniformly across tools and teams. They also make it easier to measure whether your flags are actually saving money or reducing latency. Without that common schema, each team will invent its own labels and the routing layer becomes impossible to analyze.
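One way to pin that schema down is a frozen dataclass with exactly the fields listed above. Field names follow the text; the values are illustrative:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass(frozen=True)
class JobContract:
    """Uniform envelope carried by every geo-aware job."""
    job_type: str
    data_class: str          # e.g. "public", "personal", "restricted"
    source_region: str
    destination_region: str
    latency_target_s: float
    estimated_size_mb: float
    fallback_path: Optional[str] = None

contract = JobContract(
    job_type="tile_classification", data_class="public",
    source_region="us-west-2", destination_region="us-west-2",
    latency_target_s=600.0, estimated_size_mb=1500.0,
    fallback_path="cloud_batch",
)
print(asdict(contract)["fallback_path"])  # cloud_batch
```

Freezing the dataclass makes the contract immutable once issued, which keeps the policy decision and the telemetry describing it from drifting apart mid-pipeline.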

Telemetry should include the original policy decision and the actual runtime outcome. If a job was supposed to execute at the edge but fell back to cloud because the edge node was unavailable, that should be visible. If a cloud job exceeded budget because of retries, that should be visible too. Operational transparency is what turns geo-aware flags from a clever idea into a durable system.

Adopt gradual rollout with safety nets

Do not switch all GIS jobs to geo-aware routing at once. Start with one workload class, such as satellite tile clipping or sensor anomaly detection, and route a small percentage through the new policy engine. Compare latency, cost, and error rates against the baseline. Keep a rollback path that can force jobs back to their previous location if quality degrades. The same risk-managed approach shows up in compatibility automation and backup planning: safe change requires tested reversibility.
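Percentage-based routing works best when it is deterministic, so the same job always takes the same path for the duration of the canary. A hash-bucket sketch:

```python
import hashlib

def use_new_policy(job_id, percent):
    """Deterministic canary: the same job id always buckets the same way."""
    bucket = int(hashlib.sha256(job_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

# Roughly `percent` of jobs hit the new policy engine; the rest keep
# their baseline route for comparison.
ids = [f"job-{i}" for i in range(1000)]
share = sum(use_new_policy(i, 10) for i in ids) / len(ids)
print(0.05 < share < 0.15)  # True: close to the 10% target
```

Rolling back is then a one-line policy change: set the percentage to zero and every job deterministically returns to its previous location.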

Pro Tip: Treat routing policy changes like production releases. Version them, test them, canary them, and make rollback instant.

9. Common Failure Modes and How to Avoid Them

Overusing flags until no one trusts the routes

The biggest failure mode is flag sprawl. When every team creates its own routing exception, you lose the ability to reason about the system. Jobs stop being predictable, and operators begin to bypass the policy engine manually. To prevent this, centralize rule ownership and document every exception with a sunset date. A geo-aware system should reduce ambiguity, not add it.

Another problem is mixing experimentation with compliance. Temporary A/B routing tests are fine for non-sensitive workloads, but they should never override residency or privacy constraints. Keep experimental and mandatory policies separate. This distinction is critical in regulated environments and is one reason why the compliance discipline in AI regulation is so useful.

Ignoring local capacity and skipping safe fallback design

Edge can fail when the local node is saturated, disconnected, or underpowered. If your policy engine does not know how to degrade gracefully, your “edge-first” architecture can become a reliability liability. Define fallback modes in advance. For example, if the edge gateway is unavailable, queue the job locally for a fixed period, then route to a regional cloud zone if the data classification permits. If the job is legally required to stay local, alert operators instead of silently breaking policy.
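That degradation chain can be made explicit in code. The tier names, queue deadline, and data classes below are illustrative:

```python
import time

def route_with_fallback(job, edge_available, queue_deadline_s=300,
                        now=None, queued_at=None):
    """Degrade gracefully: edge -> local queue -> regional cloud,
    unless residency forbids the data from leaving the site."""
    now = now if now is not None else time.time()
    if edge_available:
        return "edge"
    if queued_at is None or now - queued_at < queue_deadline_s:
        return "queue_local"       # hold locally for a fixed window
    if job["data_class"] != "local_only":
        return "regional_cloud"    # permitted fallback after the window
    return "alert_operators"       # never silently break residency

job = {"data_class": "local_only"}
print(route_with_fallback(job, edge_available=False, now=1000, queued_at=0))
# alert_operators: the deadline passed, but the data may not leave the site
```

The last branch is the important one: when policy and availability conflict, the system escalates to humans instead of quietly violating the residency rule.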

Safe fallback design is often overlooked because teams assume the primary route will work. In distributed GIS, that assumption is dangerous. Build the fallback logic before you need it; explicit backup planning is what keeps a degraded system inside policy instead of outside it.

Measuring the wrong success metrics

If you only measure compute cost, you may miss the real value of geo-aware routing. The goal is not simply to spend less. It is to reduce time to insight, preserve compliance, improve user experience, and keep the platform operational under variable conditions. Track latency, cost per job, transfer volume, fallback frequency, and policy violations together. This balanced view will tell you whether the architecture is working.

The same idea applies to product and growth systems, where one metric can mislead the entire team. In GIS, the right scorecard prevents false wins.

10. Conclusion: Build a Routing Brain, Not Just a Processing Stack

Geo-aware flags turn GIS processing location into a first-class decision. Instead of hardcoding edge, cloud, or PaaS, you can route each job based on latency, cost, compliance, and geography. That matters because modern geospatial work is heterogeneous: satellite imagery is heavy and batch-friendly, sensor streams are continuous and latency-sensitive, and regulatory constraints often determine where data may legally travel. The organizations that win will be the ones that treat routing as policy, not plumbing.

Start with one workflow, define clear thresholds, instrument the decision path, and keep your fallback logic simple. Then expand the model to more workloads as you gather evidence. The future of real-time geoprocessing is not a single location. It is a controlled, observable, policy-driven system that can move intelligently across the edge-cloud-PaaS continuum. That is how GIS teams ship faster, spend smarter, and stay compliant while handling the volume and velocity of today’s location data.

FAQ: Geo-Aware Processing Flags in GIS

1. What is a geo-aware flag?

A geo-aware flag is a routing policy that decides where a GIS job should run based on factors like location, latency, cost, and data residency. It is more advanced than a simple boolean feature flag because it can encode multi-condition decisions and fallback behavior.

2. When should GIS work run at the edge instead of in the cloud?

Use edge processing when the job requires very low latency, must work during connectivity issues, or should minimize data movement for privacy or cost reasons. Common examples include real-time sensor alerts, drone triage, and local anomaly detection.

3. Is PaaS a good fit for geospatial processing?

Yes, especially when you want speed, standardized workflows, and managed operations. PaaS is a strong option for repeatable geospatial tasks, but it may be less flexible for custom drivers, specialized hardware, or highly tuned batch jobs.

4. How do geo-aware flags help with compliance?

They make data residency and processing-location rules explicit and auditable. Instead of depending on developer judgment, the routing policy can enforce regional restrictions and log every decision for review.

5. What are the biggest risks of using geo-aware flags?

The main risks are flag sprawl, unclear ownership, bad fallback logic, and measuring the wrong metrics. To avoid this, centralize policy management, version rules, and monitor latency, cost, transfer volume, and violations together.


Related Topics

#gis #edge #feature-flags

Daniel Mercer

Senior SEO Content Strategist

Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.
