Serverless in Real Life: A Case Study in the Travel Industry (Cloud Next ’19)


[MUSIC PLAYING] STEFAN HOGENDOORN:
Welcome, everybody. Serverless in Real Life– A Case Study in the
Travel Industry. We are very much aware
that we’re probably the last session that
you’re going to do today, so we can either decide to drag
it along or rush it through it. But we’ll see how well
you respond to our jokes. So all right. JAMES DANIELS: All right. Well, I’m James Daniels. I’m an SDK engineer
on the Firebase team. And here I am in the pictures. I had a little bit more facial
hair there, but as you can see, we’re very serious people. STEFAN HOGENDOORN: Yep,
and to make sure, we added the arrow so that
you don’t confuse him for a pineapple. And sorry. My name is Stefan Hogendoorn. I’m the chief geek and CTO
at Qlouder, Qlouder CPS, and I’m also a Google Developer
Expert for Google Cloud Platform and Firebase. And yes, there’s arrows
pointing to me as well, because some people might
mistake me for the pineapple. Anyways, so imagine
that you want to go on a trip of a lifetime. For most Europeans,
that probably means going to Australia
and New Zealand. Anybody from Australia
or New Zealand, just imagine that everything
that we say about your country basically applies to you
going to Europe or whatever exotic destination
that you want to go to. But just imagine that you want
to go on a trip of a lifetime. We have a customer that has a
lot of consumers or customers that really have that dream. And that dream, that
once in a lifetime, oh, take me to that
wonderful place, and provide me with that
vacation of a lifetime. And so they really– as they say, they’re
in the industry of making dreams come true. They’re specialized in Australia
and New Zealand travels. That’s why I mentioned, if
you’re from those areas, just imagine that we’re doing
it the other way around. And just keep in mind,
it’s pretty cool, but it’s not as cool
as your country, although we don’t have stuff
that tries to kill you. JAMES DANIELS: New
Zealand– that’s where the hobbits
are from, right? STEFAN HOGENDOORN: Absolutely. JAMES DANIELS: I need to go– I need to go see that. STEFAN HOGENDOORN: I’m not
sure if those try and kill you as well, but I might have
to rewatch the movie. Anyways, so now I
mentioned that you have to make those dreams come true. Travel Essence needs quite
a bit of systems and staff to do that. It’s not like they’re
organizing vacations for, like, one family. It’s quite a few families that
they’re trying to deal with. So they need quite a bit of
systems, quite a bit of staff to do that. They need to scale up, because
they’re doing a pretty good job at what they do. They’ve got their radio
commercials going on. They’re now in different
countries in Europe, so they’re really scaling. And that kind of brings
out the challenges as well. And so that makes it
a bit more difficult to make dreams come true. So without diving
into those dreams and all the architecture
and all that, let’s first try and see how we
got started with this customer and how we introduced
them into serverless. It’s a funny story. The CEO of Travel Essence
called Andrew Morton– he always tells us
that, hey, we’re always looking to enhance– just to
enhance our systems so that we can ensure that the people that
are in our company actually speak to people, because if
you make dreams come true, while we’re all– most of
us have some engineering background. So for us, it’s
probably dreams coming true by speaking to computers. But if you’re in
the travel industry, it usually means that you want
to speak to other people that actually tell you where it’s
cool and where you need to go. So that’s what their purpose is. Speak to people. Deal with people. Now if you’re dealing with
people, that’s pretty cool. But just imagine that
you’re a customer of theirs, and you’re talking to
an agent, and you’re talking about your vacation. And the first thing
that the agent goes– like, oh, yeah, but can you
tell me your itinerary number or your customer number? You know, if you’re doing
a trip of a lifetime, you want to feel special. You don’t want the agent
going, who are you? So you know, it gets kind
of boring or annoying, and then you might have
somebody else in your party or in your family joining
you and sending them emails and say, oh, yeah, I know
that we’re going to somewhere in Australia. But can we also go
to that other place? And oh, can we also see
more animals that kill us? And then the agent
goes, like, who are you? And then somebody else
from you from your party also goes, oh, yeah,
and I want to go there, and can you tell me a bit
more about this or that? So before you know it, either
the customer or the agent will go, what am I doing? Why am I– who are these people? Or if you’re the customer, why
don’t they understand who I am, and why don’t they make
my dream come true? So that kind of– it’s kind of annoying, I guess. JAMES DANIELS: Yeah. STEFAN HOGENDOORN: So what
they asked first is, can you help us get a grip on the communications that we have with our customers? Fortunately, Travel Essence was already on G Suite. So what we did is we took
the email API for G Suite and ran a little program on
App Engine that would extract emails from the environment. We would then feed those
emails to the natural language processing API, applied
some other stuff as well. Obviously, you can do
regular expressions and all that to see travel
IDs and some other stuff that you can do
programmatically. But we also needed to
be able to interpret what was in the emails, and
really try and understand who this relates to
or to what travels or what travel parties
emails related to. So we used the NLP API with
some extra secret sauce. We then took all
that information, put it back in BigQuery, and
made sure that the staff really had a better view and a
better understanding of who they were talking to.
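A minimal sketch of the email-enrichment flow Stefan describes, in Node.js: pull messages with the Gmail API, combine a regular expression with the Natural Language API, and land the results in BigQuery. The dataset and table names, the itinerary ID format, and the use of these particular client libraries are assumptions for illustration, not the team's actual implementation.

```javascript
const { google } = require('googleapis');
const { LanguageServiceClient } = require('@google-cloud/language');
const { BigQuery } = require('@google-cloud/bigquery');

const language = new LanguageServiceClient();
const bigquery = new BigQuery();

async function enrichRecentEmails(auth) {
  const gmail = google.gmail({ version: 'v1', auth });

  // Pull recent message IDs from the mailbox.
  const { data } = await gmail.users.messages.list({ userId: 'me', maxResults: 50 });
  const rows = [];

  for (const { id } of data.messages || []) {
    const msg = await gmail.users.messages.get({ userId: 'me', id });
    const text = msg.data.snippet || '';

    // Plain regular expressions catch structured things like itinerary numbers...
    const itineraryId = (text.match(/\bTRV-\d{6}\b/) || [])[0] || null; // hypothetical ID format

    // ...while the Natural Language API helps interpret the rest of the message.
    const [result] = await language.analyzeEntities({
      document: { content: text, type: 'PLAIN_TEXT' },
    });

    rows.push({
      messageId: id,
      itineraryId,
      entities: result.entities.map((e) => e.name).join(', '),
      processedAt: new Date(),
    });
  }

  // Land the enriched records in BigQuery for the staff-facing views.
  if (rows.length) {
    await bigquery.dataset('customer_comms').table('emails').insert(rows);
  }
}
```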
Now, when we delivered that, the customer was like, oh, this is great. This is pretty cool. All right, now tell us
what servers that we need to install this on. And we’re like, uh,
well, you don’t. So the customer was like,
OK, then what does it run in? We had to explain
to them serverless. So the cool thing is, we
can now say, hey, look Francis, no servers. So doing the serverless
is not something that we did accidentally,
like, oh, hey, let’s try this. But it is part of the philosophy
that we have as a company, that when you try
and build a solution, the first thing that you do is
you try and do this serverless. You start with serverless. Because if you start
with serverless, one of the very
nice things about it is that you can more or less
only focus on the functionality that you try and deliver. And we’ve been
doing this for ages. I myself have been doing App
Engine for, what, 11 years? Has it been– yeah,
something like that. And that was, for me, the
first step or the first item that I recognized
as being serverless. And that’s why, for serverless,
I use the App Engine icon. James had some
objections, but, hey. Once again, the cool
thing is, you just focus on the functionality. Now, it might sometimes be
that you try and run something on serverless or that you want
to do something on serverless, and it just doesn’t
work on serverless. You need to use that one
specific API or the customer wants you to use that
one specific library, and that just doesn’t
run on serverless– maybe because of the language, the time it takes, and things like that. So what we then do is
we look at containers, we look at Container Engine,
and things like that, and run our code there. And if we really can’t run it
in containers, we’ll go to VMs, and then try and fix it on VMs. If it doesn’t work
there, we’re not going to run it on
an AS/400 or something, so we'll just try
and redesign it. The cool thing is that if
you take this approach, most of the time, you’ll end up
with an application that is serverless, and
once again, you can focus on the functionality. And also, the cool
part is that if you start from the other end
with virtual machines, you’ll hardly ever make
it into serverless. So that’s a nice
thing to know as well. Now, I’m very passionate
about serverless. I’m so passionate
that I even have a very crude statement here. If you’re offended
by that, cool. My statement is that “if
you’re not building serverless, you’re robbing your
customers of innovation.” Because what you end
up doing is you’ll end up providing your
customer with a whole bunch of [INAUDIBLE]. And whether your customer
is an internal customer or it’s an external customer
because you’re like a Google partner like we are, if
you don’t do serverless, you’re robbing
them of innovation, because you ask them
to spend a lot of time on maintaining systems,
understanding technology. And all the customer
really wants you to do is focus
on functionality, because that’s where
the cool stuff is. JAMES DANIELS: Quoting
yourself is a real power move. Thought leader. STEFAN HOGENDOORN:
I know, I know. JAMES DANIELS:
Thank you for that. STEFAN HOGENDOORN: So
what we did– so to put it into context,
this was the landscape that the customer
was looking at. So coming back from– or doing something serverless
was kind of new to them. Well, at least they
felt like that. But they already had
some stuff in the cloud. So they were using G Suite,
they had a financial application that they were
running in the cloud. But the primary
systems, the stuff that really makes their
organization what it is and enables them to sell– to create for you
those magical travels, that’s really all
about legacy systems. So they had a legacy
itinerary system. They ran it in MySQL. Fortunately, they had
the insight to put that on Cloud SQL. But that was the only
serverless thing they did. They had PHP application
logic running on some servers. They had third-party API
calls running on servers, and a lot of that. So it was really a
legacy infrastructure. Now, what we did– because we showed him that
we could do stuff serverless and we could really focus
on the functionality, they were also
like, hey, can you help us and maybe replace
some of our outdated legacy features, and replace
that with new features and new applications,
and maybe run that on that serverless
stuff as well? So that’s what we started doing. We hooked up, once again, App
Engine, our faithful workhorse, and hooked it up into
their legacy itinerary system, started getting
data out of that, and then putting that into
Firebase and into Firestore. The reason for that is
because it was easy. It has a fairly easy API. And yeah, you guessed,
it’s serverless. And it really
allows us to build, basically, an
entire application.
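A rough sketch of that sync job, assuming a Cloud SQL (MySQL) itinerary table read from App Engine and mirrored into Firestore with the Admin SDK. The instance name, table, columns, and collection are invented for illustration; the real legacy schema is certainly different.

```javascript
const mysql = require('mysql2/promise');
const admin = require('firebase-admin');

admin.initializeApp();
const firestore = admin.firestore();

async function syncItineraries() {
  const conn = await mysql.createConnection({
    socketPath: '/cloudsql/my-project:europe-west1:legacy-itineraries', // hypothetical instance
    user: 'sync',
    password: process.env.DB_PASSWORD,
    database: 'itineraries',
  });

  // Pull rows that changed recently (real watermark handling would be more careful).
  const [rows] = await conn.execute(
    'SELECT id, customer_name, destination, updated_at FROM itinerary ' +
    'WHERE updated_at > NOW() - INTERVAL 10 MINUTE'
  );

  // Mirror each row into Firestore so clients and Cloud Functions can react to it.
  // (Firestore batches cap at 500 writes; chunk larger result sets.)
  const batch = firestore.batch();
  for (const row of rows) {
    const ref = firestore.collection('itineraries').doc(String(row.id));
    batch.set(ref, {
      customerName: row.customer_name,
      destination: row.destination,
      updatedAt: row.updated_at,
    }, { merge: true });
  }
  await batch.commit();
  await conn.end();
}
```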
JAMES DANIELS: So if you look at that architecture, right, this is that modern three-tier architecture. This is something that we're
three-tier architecture. This is something that we’re
aiming to achieve, especially for those customers
that we’re trying to migrate into the cloud. So we have these three tiers,
this great academic separation of concerns. We have our presentation layer
with our orchestration gateway, CDN, and then our clients
also on that side. They all fit in that bucket. We then have our application
logic, or code servers– hopefully on the cloud,
but they probably have stuff also still on prem. And this is also where they–
the layer from which they call out to third parties. And then of course
the great data tier. And your DBAs manage this, and
it’s sacred, and no one touches the database without permission. So this also fits this model. So you have your app, you’re
talking to the compute– serverless, hopefully–
you’re doing that through load
balancers and proxies, and then that compute is
then in a trusted environment and it has administrative
access to your database. It’s talking to,
say, Cloud Firestore. As someone on the Firebase
team, I’m a little biased. But I think Firestore is a
very awesome NoSQL database, and gives you some
strong guarantees, which are very cool. So once we get into Firestore,
one of the cool things here is, actually, Firestore
is an event emitter within Google Cloud. So writes to Firestore
are events. They can actually be monitored by Cloud Functions. You can also monitor Cloud Storage buckets. If you're using Firebase Authentication or CICP, you can monitor user creation
events, stuff like that. And when this event happens– when this event is sourced– it can actually fire
up a Cloud Function. And Cloud Function– I’m
a JavaScript developer, so I write Node.js
scripts there. So I write Node code, and
the function executes this. It’s a serverless
infrastructure, and it spins up these
workers on demand to fulfill these event sources. And if you imagine,
this Cloud Function could then start writing
back to Firestore. So you write data in one
part of the database, there’s maybe a
mutation, a side effect, it calls a third-party
API, another GCP product. And it takes that result
and does stuff with it, and writes it back
into Firestore. And then the next time the
application reloads the data or something like that,
the server fetches it, you have all the caching,
and typical stuff. But it’s starting to get
reactive on that back end.
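A minimal sketch of that reactive pattern: a Cloud Function (Node.js, firebase-functions) listening to Firestore writes and writing a derived result back, which listening clients then pick up. The collection layout and the running-total side effect are hypothetical, chosen just to show the shape of the trigger.

```javascript
const functions = require('firebase-functions');
const admin = require('firebase-admin');

admin.initializeApp();

exports.onItineraryItemWrite = functions.firestore
  .document('itineraries/{itineraryId}/items/{itemId}')
  .onWrite(async (change, context) => {
    // Deletions also fire the trigger; bail out if the document is gone.
    if (!change.after.exists) return null;

    // Side effect: call another API, do a calculation, etc. Here we just
    // recompute a running total on the parent itinerary document.
    const itemsSnap = await admin.firestore()
      .collection(`itineraries/${context.params.itineraryId}/items`)
      .get();
    const total = itemsSnap.docs.reduce((sum, doc) => sum + (doc.data().price || 0), 0);

    // Write the derived data back; clients listening on the itinerary
    // document see the change in near real time.
    return admin.firestore()
      .doc(`itineraries/${context.params.itineraryId}`)
      .set({ estimatedTotal: total, lastItemChanged: context.params.itemId }, { merge: true });
  });
```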
Now, we can take this a step further. Now that we're farming that logic out into reactive responses to database writes, why don't we take it that step further? We no longer need that application server. So this is where we
can start testing the waters on this native
Firestore architecture. So the client-side
application, whether it be an iPhone, Android,
or web application, directly communicates
to Firestore. No intermediary servers,
no load balancers. It sends its data writes and
reads directly to the database. And when any mutations
happen, a Cloud Function can trigger, and then
write back to Firestore. And the cool thing
about this is you get the superpowers of Firebase. So you have a Cloud Function
respond to this event, it changes something
in the database, and the client is listening. With Firebase, when you make
a query to the database, it can be a one-time operation, but by default, it's not. You're listening to snapshots. So you query, and it actually
creates a persistent socket to the database. So when the data in
the database changes, the client gets
that in real-time. All the other clients listening
to that point in the database get those streams in real-time– 10, 100 milliseconds,
somewhere around there– and it creates this magical
real-time experience where you no longer have this
pull-to-refresh model that you see in a lot of applications. Another benefit of
going down this path and using the Firebase
SDKs to access your data is that we've actually
written offline capabilities into the application. You don't have to do anything.
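A small sketch of what the client side looks like in this model, using the Firebase web SDK as it looked around the time of this talk (the namespaced API). The project config, collection, and itinerary ID are placeholders.

```javascript
import firebase from 'firebase/app';
import 'firebase/firestore';

firebase.initializeApp({ projectId: 'my-travel-app' /* ...rest of the web config... */ });
const db = firebase.firestore();

// Offline support ships with the SDK; one call opts the web app into it.
db.enablePersistence().catch((err) => console.warn('Persistence unavailable:', err.code));

// Instead of a one-time fetch, listen to snapshots: the callback fires on the
// first read and again every time the data changes, locally or on the server.
db.collection('itineraries')
  .doc('TRV-000123') // hypothetical itinerary ID
  .onSnapshot((snapshot) => {
    renderItinerary(snapshot.data()); // your UI update code
  });

function renderItinerary(itinerary) {
  console.log('Itinerary updated:', itinerary);
}
```

The same snapshot-listener pattern exists in the iOS and Android SDKs, so every client attached to that document sees the Cloud Function's write-back without any polling.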
So I showed that very academic three-tiered web architecture. And there is a lie there. That clean separation
of concerns breaks down when your
application hits production. So that database
tier, maybe you need to make it more performant. So you add Redis and
start caching things. And well, now you
have a database at the application layer. And then you need
offline capability, so you add SQLite or local
storage for web applications. And now, again, you have data
on the presentation tier, and you need to
worry about keeping those in sync, and
contention, and detecting online versus offline. And that’s a lot of
work, a lot of code, and a lot of places where
that theoretical separation of concerns breaks
down and you get bugs. Fortunately, we’ve
done that work for you. And ultimately, this
empowers your developers to focus on the business
logic of your application, build things and ship things
faster, and hopefully have a smaller ops team. So what powers this is
breaking down this wall. The idea is that your data– data storage, data access,
pushing things to the database, reading things out– it matters to every one of
these tiers, the application, the presentation, and of
course the data layer. So the access is coequal
between all of these. And then you get the Firebase
SDK, and the tight integrations with Google Analytics and
the Firebase authentication, and then of course
Cloud Functions to respond as side effects. We jokingly call it “side
effects as a service” on the team. And ultimately,
when you write, they do operations, write back, and
that streams to the client, creating this
magical experience. Now, this is where I’ll talk
to a lot of developers that are used to doing things through
load balancers and API servers, and I start saying,
interact directly with the production
database– interact directly with it from your
client-side application. And I always get this
little horrified look, and I know what’s coming. And they’re saying, how
can this be secure at all? I’m just giving out
the keys to my database to any iPhone app, everyone
who visits my website? And the key here
is this integration with Firebase authentication
and our security rules product. So with security
rules, you can actually write rules that
limit the operations on any path in your
database, any document, to some sort of condition. And you can look at the user
doing that, the keys associated with them, and you can even look
in other places in the database and make sure, oh, they have
permission to this and that.
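A quick hedged sketch of what such a rule can look like, written in the Cloud Firestore security rules language. The itineraries collection and the memberUids field are assumptions for illustration, not anything from the customer's actual setup.

```
service cloud.firestore {
  match /databases/{database}/documents {
    match /itineraries/{itineraryId} {
      // Only signed-in users who are listed on the itinerary document itself
      // may read it or update it.
      allow read, update: if request.auth != null
                          && request.auth.uid in resource.data.memberUids;
    }
  }
}
```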
STEFAN HOGENDOORN: All right, so what we really have– what you're basically
saying is that, for us, doing that serverless
thing was like one serverless step for us, but it’s
a giant leap in architecture. It’s pretty awesome. Yes, I quoted somebody
else this time. So what is nice
is that you really put that native Firestore
model, as you described it. It really works for us, and it
kind of takes away complexity that we might otherwise
have– creating a REST API on our database,
and having to go through all those
motions, and servicing that, and things like that. So really, the App
Engine that we have– or the App Engine
application that we have–
is now only doing some work to get data
out of the legacy system. So that’s pretty awesome. So we can actually start to make
the application a bit simpler. And then by having
the client directly connecting with the
Firestore, or with Firebase, we have that interaction,
and we allow users– because we have the
native connection, it allows us to build
an application where, if users are working on
an itinerary together, and that happens because
somebody might be booking a hotel, somebody else
might be booking a car, or some event that people
might want to go to, you actually see the updates
appear on your screen straight away. And it actually saves our
customer– or the agents and our customer–
quite a bit of time, and it allows them to better
interact with the system, but also, if they’re on the
phone with the customer, be sure that they
have those real-time and real live updates. No matter what the
system is doing, even if the system
in the background is booking airplane
tickets or whatever, they get the latest information. And that’s really awesome. So it allows us to
do a bit more volume handling of itinerary events. It doesn’t cause all sorts of
task explosions, where people have to do all sorts of extra
things, like hit extra buttons or do extra work. It really helps the
customer to focus on what is important in functionality. And for us, it also
helps us to focus on what the customer
really finds important. As you might have noticed,
there’s a theme here. Now, if we’re talking about– one of the very nice things is
that we’re now using Functions to handle all sorts of things. And the customer is actually
quite happy with that. And we’re looking into getting
some of the other systems integrated with it and get that going as well. So they've got the
financial system. So hey, why not use
Firebase Cloud Functions or use Cloud Functions
to actually start writing information back
into the financial system or maybe do some of
the flight bookings. By the way, the finance
system is actually pretty cool because it has a
pretty good REST API.
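One way such an integration might look, sketched in Node.js: a Cloud Function fires on a new booking document and pushes it to the finance system over its REST API. The endpoint, payload shape, and runtime-config key are all assumptions, and node-fetch simply stands in for whatever HTTP client you prefer.

```javascript
const functions = require('firebase-functions');
const fetch = require('node-fetch');

exports.onBookingConfirmed = functions.firestore
  .document('bookings/{bookingId}')
  .onCreate(async (snapshot, context) => {
    const booking = snapshot.data();

    // Push the booking into the finance system's (hypothetical) REST endpoint.
    const response = await fetch('https://finance.example.com/api/v1/invoices', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${functions.config().finance.api_key}`, // assumed config key
      },
      body: JSON.stringify({
        externalId: context.params.bookingId,
        customer: booking.customerName,
        amount: booking.totalPrice,
        currency: 'AUD',
      }),
    });

    if (!response.ok) {
      // Throwing marks the invocation as failed so the event can be retried.
      throw new Error(`Finance API rejected booking ${context.params.bookingId}: ${response.status}`);
    }
    return null;
  });
```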
you’re in Australia and New Zealand, you want to drive
around and things like that– that has a pretty
cool API as well. It’s a SOAP API. It’s not as cool
as the REST API. But hey, that’s
still pretty good. The flight booking
system, that one’s a bit more– eh, a
bit more difficult. It’s a bit more traditional. But hey, we can deal with that. Cloud Functions are cool. We can do everything with that. Excellent. So we then might
hit a snag where we might need to use some
of that container stuff that I talked earlier about. So we do everything serverless. If we can’t do serverless,
we’ll hit containers. Because in the other
external sources, some of the things
that they want to do– because, as you can imagine,
travel is quite expensive, and as a travel
agency, you might have to make payments to hotels
and things like that upfront. So they have some fund
hedging that they do. And the cool thing is that the
bank that they’re doing that with is very advanced, and
sends them a PDF once in a while that says, oh yeah,
the exchange rate for the Australian dollar– what is it, New Zealand dollar– is this and that. And we have to get
that from the PDF. Eh, that’s not too bad. JAMES DANIELS: Now, it
sounds like you’re– you’ve touched on a
couple points here like– Functions is great. I love Functions. It has a tight
integration with Firebase. But you know, you
mentioned Container Engine, maybe some harder workloads
you put through that, because Functions has a
time-out and it’s not really meant for that. But I’m concerned about some
of these other APIs, too. So I threw Functions
at you, but let’s talk a little bit about
functions, and how they’re designed, and the
workloads that they support. So first and foremost,
functions is at least once. Cloud Functions are
designed to reliably definitely do something. When you have a write
to your database, when a file is uploaded, you
really want that job to occur. Now, I’m not a theoretical
computer scientist. I wasn’t that
diligent in school. But I work with a bunch of them. And from my understanding with
distributed systems, having anything stronger than at-least-once– say, exactly-once– that's a very hard problem to do within a distributed system. So maybe you have a
network partition show up. These workers are
now out of sync, and they can’t tell if they’re
doing each other’s jobs. So Cloud Functions, to
ensure that a job occurs, will trigger more than once. Maybe you get duplicate fires. So one of the things
that we always tell people is, make sure the
side effects in Cloud Functions are idempotent. So they’re also designed
for on-demand provisioning. These servers go from
zero to infinity. They’ll scale to
meet your workload and the data
entering the system. But Cloud Functions is
designed for database writes, and storage uploads, and
end user things, basically. So a user clicks a
button or uploads a file. And these kind of– maybe they batch, and maybe
you have busy parts of the day. But ultimately, these
are kind of random events and not very
predictable otherwise. And they’re designed for
immediate fulfillment. Since these are for end user
response, when a result comes into the system, when
an action happens, we want the side effect to
happen as quickly as possible. Maybe one of the
functions is busy. We don’t want to interrupt that. We don’t want to
limit its resources. So Cloud Functions, the
concurrency per worker is 1. We give that job the full
resources of the container. So if you have a bunch
of events coming in, it has to spin up new functions. And if you imagine, if you
get a huge burst of activity, maybe you’ll have
a bit of a backlog. And the idea there
is just to get it done as quickly as
possible and respond to those. So going back to
this case, if you’ve worked with Strype or some of
the modern payment providers, they’re great because
they’re cloud-native APIs and they have
idempotency tokens. So you could use your Firebase
event ID or your document ID to make sure that you
don’t double-bill a client. If we’re talking
If we're talking legacy APIs, they're probably more meant for this
transactional, centralized world. So without an idempotency token,
we double-bill your customers. Not good, not good. Maybe this flight booking API is
a traditional API but very big enterprise. So maybe there’s really
harsh quotas on that. I’m not allowed to query
their API more than once every 30 seconds
because they don’t want people gaming the system. So if we have a lot of
events come in at once, we could get
throttled or banned, which is not a good
experience for our users. This SOAP API–
maybe, again, quotas. But it being SOAP, maybe
they’re expecting batch data. They’re saying, don’t hit our
API more than once every 15 seconds, but feel free to batch
a bunch of requests in one. Cloud Functions isn’t
really suited for that. It’s immediate action. It’s stateless. It’s not going to fit
well in this world. And then you mentioned
the other sources, these PDFs getting sent
to you from the bank that include the conversion
rate between currencies. That’s a lot of data. Container Engine is
definitely good there. But I imagine that that
could be rapid-fire. When currencies fluctuate,
they fluctuate rapidly. You’ll get a lot of events
from the system all at once. And again, that stepping
function, where things spin up, not necessarily great
for this use case. So now you’re backing
things up in your back end. STEFAN HOGENDOORN: Yeah. I’m starting to feel a bit
like your emojis, you know? I was quite happy. I’m like, oh yeah we’re, going
to do talk about serverless, and oh yeah, I’m really cool. And he basically now tells
me that I’m not that cool. Well, the cool thing
is now, by the way, that for the Dutch
people, you’ll recognize this as delfts
blauw, or delft blue. For the non-Dutch people, just
look it up on the internet. You’ll get it. But I noticed this is a
whole bunch of products that Google has. But we like our Cloud Functions. But how do we solve this? JAMES DANIELS: So
Cloud Functions do tie in great with Firebase. They’re our native
event sourcing. But there are a lot of products. So eeny, meeny, miny, moe. Ah ha, Dataflow. So Cloud Dataflow is a
streaming processing engine. It’s meant to take
either batches of data or large streams of
data and handle them in a functional model. So I’m a functional
programming nerd, so I really like this product. It’s fully managed
and no ops, so it does tick your box of serverless. The provisioner is
centralized, but the workers are decentralized serverless. It can spin them up. And because it’s meant
for event sources, it can be a little bit
smarter about provisioning. And it’s based off the
open-source Beam model. So comparing it to
Cloud Functions– I mentioned Cloud
Functions is at least once. You can have double fires. Now, Cloud Dataflow,
how this differs, is the events that come
into Cloud Dataflow are first passed
through a Bloom filter. So it’s going to filter
out those duplicates. And then when it
spins up the workers, you have de-duplicated data. So because this is
exactly once, you can safely handle side
effects without the need of figuring out idempotency. STEFAN HOGENDOORN:
Cool, so we can just book that flight once instead
of sending the family Jones six times on the same
flight to Australia. JAMES DANIELS: Yeah. STEFAN HOGENDOORN: Cool. JAMES DANIELS:
You would not want to, when a network
partition appears, bill your customer for
six tickets to Australia. STEFAN HOGENDOORN:
No, probably not. JAMES DANIELS: That’s
a little bit pricey. STEFAN HOGENDOORN: All right,
so this is adding to the dream. Woo hoo. JAMES DANIELS: So
with Cloud Functions, you have this
on-demand provisioning. This is meant for sort of
random, user-generated events, and it’s sort of a
stepper function. It’ll go, do I have enough
workers to fulfill this, maybe I’ll spin up a new
one as load increases, and then they’ll
slowly spin down. Whereas Dataflow, coming
from data processing pipeline and ETL pipeline
tool, it’s going to have more
predictable scaling. It can actually
look at the inflow and the outflow of the system– how long the jobs are
taking, the error rates– and it can calculate the
number of workers it needs, and it’s a highly
concurrent model. So it’s going to be able
to get those jobs done as quickly as possible while
also balancing cost for you. STEFAN HOGENDOORN: Cool. So we really get a predictable
pipeline, no backlogs and all that, and the
customer really gets to, well, almost have a certain
reassurance in delivery or execution of actions without
having to spend a ton of money on it. Cool. JAMES DANIELS: Yeah, more
predictable cost, definitely. STEFAN HOGENDOORN: Cool, cool. JAMES DANIELS: And
then Cloud Functions is designed for this
immediate, as quick as possible, “I’ll spin
up new servers if need be” fulfillment. Whereas Dataflow, being for
data batching and processing and this functional
programming model, I can run filters on my
code, I can group them, I can sort them,
I can do windows. So I can say, batch this data
by five-minute increments, or give me a sliding
10-minute window. And that way we can be a lot
more intelligent in the data that we take and we put
into our stores and process in the rest of the
system so that we don't overwhelm our system.
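To make the windowing idea concrete without dragging in the Beam SDK, here is a toy illustration in plain Node of the fixed-window grouping James describes. This is not Dataflow itself; Dataflow/Beam does this (plus sliding windows, triggers, and watermarks) for you on a managed, scaled-out service.

```javascript
const FIVE_MINUTES_MS = 5 * 60 * 1000;

function groupIntoFixedWindows(events, windowSizeMs = FIVE_MINUTES_MS) {
  const windows = new Map();
  for (const event of events) {
    // Every event falls into exactly one window, keyed by the window's start time.
    const windowStart = Math.floor(event.timestamp / windowSizeMs) * windowSizeMs;
    if (!windows.has(windowStart)) windows.set(windowStart, []);
    windows.get(windowStart).push(event);
  }
  return windows;
}

// One batched call per window keeps us inside a rate-limited partner API's quota.
const sample = [
  { timestamp: Date.parse('2019-04-10T10:01:00Z'), booking: 'car-1' },
  { timestamp: Date.parse('2019-04-10T10:03:30Z'), booking: 'car-2' },
  { timestamp: Date.parse('2019-04-10T10:07:10Z'), booking: 'car-3' },
];
for (const [windowStart, batch] of groupIntoFixedWindows(sample)) {
  console.log(new Date(windowStart).toISOString(), '->', batch.map((e) => e.booking));
}
```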
STEFAN HOGENDOORN: Excellent. And that really helps us with dealing with the Car API, where we can only do so many
requests per time unit, batch those things, we’ll get
a little discount out of that– wonderful, cool. JAMES DANIELS: And also,
those PDFs from the bank– the interchanges and the
currency transfer rates are going to change rapidly. And if you’re working with
something like Firestore, there’s a quota. So there are max
rates at which you can mutate your data over time. So like in Firestore,
you can’t write to a document at a sustained
rate more than once per second. [INTERPOSING VOICES] [CHUCKLING] STEFAN HOGENDOORN:
All right, cool. JAMES DANIELS: So now,
extrapolating that into this architecture
that we wrote out. So now we have Cloud
Functions, and we’re actually going to pipe this
data to Cloud Pub/Sub. Because Cloud
Pub/Sub is the system that works really
well with Dataflow. So Dataflow can actually ingest events from it, given its stream processing capabilities. Whereas Cloud Functions
is tightly integrated into the Firestore event sources. So we just pipe that along. Now, Dataflow can
do any batching, it can do any
filtering, sorting. And it can also do
these side effects that might involve legacy systems. From there, it outputs
the data to Pub/Sub again. And then we can actually have– the cool thing
about Pub/Sub is we can have multiple subscribers
to a Pub/Sub stream. So we can have two
different Cloud Functions– one listening to
the Pub/Sub stream and writing back to Firestore,
and then one listening to the Pub/Sub
stream and writing any relevant information
we want back to BigQuery. So that way, if
either one of these has load problems
or errors, they’re not going to affect the other. STEFAN HOGENDOORN:
STEFAN HOGENDOORN: Cool, excellent. By the way, to be honest,
I was kind of joking with, oh man, the presentation
is going to pieces because we did it wrong. No customers were hurt
creating this system. This is actually the
architecture that we delivered. We took care of all
the idempotency, we took care of the batching,
we took care of all the stuff that we needed to to make a
really scalable and reliable system for this customer. And what sort of funny– depending if you were the
IT guy, probably not– but since they are in the
business of selling travel and they’re selling
dreams, and they’re not in the business of running IT
systems, what they actually did is, using the system, they're now able to serve more customers with even fewer IT people running them. It's actually quite extreme. The number of IT people that
they have is exactly zero. The IT guy, by the way,
got a great new position somewhere else, and we’re
working with him very closely on that. But the really nice thing
is, it really helps us, and it really
helped the customer create a solution in which
they can run their business. And they can run their
business at scale. Because as I said
before, they’re not doing this for
one or two customers, they’re doing quite a few
million euros per year, selling travel. So now, with all
this in place, we now have a very strong
and very structured environment that we can use. And maybe it’s time
to also look at some of the future developments that
we’re seeing that we can do. So just to kind of reiterate
where we’re coming from, so the initial customer question
was, hey help us communicate, and we used that to
introduce serverless to them. We then started adding
more complex workloads to their environment,
or taking care of some of the more
complex workloads they had in their legacy
system, and moved them into a serverless environment. We integrated it with some of
the legacy systems they had, and we really made
the system more robust so that they could handle
a higher volume of traffic. Now, with that in place,
one of the future steps that we’re going to
take is obviously connect to the consumer–
so connect to their customer and make sure that that customer
has an even better experience after the booking process. And what we’ll also do is
we’re going to optimize the systems as we go along. Because one of the really cool
things about Google Cloud Platform and the serverless
operations that we use is that there’s a constant
change and a constant update of those technologies, ever
increasing and ever allowing for better integrations and
better optimizations of code, and better developer
workstreams as well. So one of the things that
we’re going to do up next is we’re going to optimize the
container-based development workflow. So right now, for
the developers, the container development workflow was slightly different from the serverless workflow, where you're just creating your
a bit more difficult. And with the announcement
of Google Cloud Run that was made this
morning, we’ll be able to kind of
align the development process of the
developers when they have to do the containers,
align that a bit better with the development
workflows that they have, using Cloud Run. JAMES DANIELS: Yeah. And definitely, Cloud Run
is really cool technology. If you haven’t gone to any
sessions today, do sit in. The keynote is tomorrow. And there’s going to be
plenty of sessions coming up about this container technology. STEFAN HOGENDOORN:
Definitely check it out. And also– and you
can’t say, but I can, because he’s under
NDA, and I’m– well, I’m sort of as well. But there’s a lot of cool stuff
coming up in the Firebase realm as well that will
really help people doing serverless
application development. Or not even only in
the Firebase realm, but also in the
bigger Google Cloud realm, with a lot of
cool stuff that they’ll be announcing over the next
days, and some of the stuff at I/O as well. So I’m really looking
forward to that. One of the other things
that we’re going to do is we’re going to optimize
insights and be more predictive on the workloads. So obviously, we
have some prediction and we have optimized
workloads coming from Dataflow. But they also have their own
workflows and their own data loads that we need to deal with. And as you might have
seen in the image before, we’re writing a
lot of the information– or a lot of the
process information, we’re writing that
back to BigQuery. Now the reason why
we’re doing that is because it will
allow us to create more predictable
workloads, get a better understanding of
their customers, get a better understanding
of the travels that they’re
selling, get a better understanding of their success. So getting the information,
you can imagine, they’ll be able to even
more efficiently run their operations. And last but not
least, we’re going to build a customer-facing
application using Flutter. The nice thing about
Flutter is, for those of you who don’t know, it’s a
way of using one codebase and writing an
application that will run on both Android and iOS. And it will really
help the customer to have that
extended interaction with their customers while
the customer is traveling. And since we can use,
as James explained, that native connection
to the database and still be sure that we have
the full security in there, you can imagine that
it will be a lot easier to write those applications. Also, because Flutter
really nicely integrates with the whole Firebase product. Because Firebase
obviously has its roots in mobile development,
and you can kind of see where this is going. So these are a few very exciting
updates that we’re going to do, and it’s all updates that will
really fit the development flow that our
development should have, the workflows and the technical
understandings that they have. So we’re quite
excited about this. JAMES DANIELS: Definitely. So it sounds like you
have a great setup here. You’re definitely improving
the customer experience and reducing their workload. STEFAN HOGENDOORN:
Well, yeah, it means that because we have
serverless– as Andrew said, we have serverless, so we have
more capacity as we need it. And for us, it really means that
we don’t need more capacity. JAMES DANIELS: It’s time
for a vacation, right? You need– you’ve
been hard at work. And once you’ve
shipped these updates, you can schedule your own
vacation of a lifetime and not get charged six times
for your flight to Australia. STEFAN HOGENDOORN:
That’s true, that’s true. Yeah, yeah yeah yeah yeah. JAMES DANIELS: Cool. STEFAN HOGENDOORN: All right. JAMES DANIELS: Well, thank you. [MUSIC PLAYING]
