Adventures in AWS: Scraping with Lambdas

Amazon Web Services (AWS) is increasingly popular in the IT business and job market. To learn more about it, I took one of the AWS certification courses from A Cloud Guru on Udemy for 10 euros. Not much of an expense, but an investment nonetheless. There are no certification exam locations where I live, but maybe some day I will have a chance to try it. In the meantime, learning is possible regardless. The course was very nice, including some practice labs. I could probably pass the test with just that, but doing a small experiment/exercise of my own tends to make things a bit clearer and gives some concrete experience.

So I built myself a small "service" using AWS. It collects chat logs from a public internet chat service (scraping), stores them in a database, and would provide some service based on a machine-learning model trained on the collected data. I created a Discord test server and tried scraping that to see that it works, and requested a Twitter developer account to try that as well later. In this post I describe the initial data collection part, which is plenty for one post. I will see about the machine learning and the API service on top of it later. Maybe an interactive Twitter bot or something.

Architecture

The high-level architecture and the different AWS services I used are shown in the following figure:

high-level architecture

The components in this figure are:

  • VPC: The main container for providing a "virtual private cloud" for me to do my stuff inside AWS.
  • AZ: A VPC is hosted in a region (e.g., eu-central-1, also known as Frankfurt in AWS), which has multiple Availability Zones (AZ). These are "distinct locations engineered to be isolated from failures in other AZs". Think fire burning a data center down or something.
  • Subnets split a VPC into separate parts for more fine-grained control. Typically one subnet is in one AZ for better resiliency and resource distribution.
  • Private vs public subnet: A public subnet has an internet gateway defined, so you can give instances in it public IP addresses, access the internet from within it, and allow incoming connections. A private subnet has none of that.
  • RDS: MariaDB in this case. RDS is the Relational Database Service, a relational database provided by AWS as a managed service.
  • S3 Endpoint: Provides a direct link from the subnet to S3. Otherwise S3 access would be routed through the internet. S3 is the Simple Storage Service, the AWS file/object store.
  • Internet gateway: Provides a route to the internet. Without it, nothing in the subnet can access the internet outside the VPC.
  • EC2 instance: Plain virtual machine. I used it to access the RDS with MariaDB command line tools, from inside the VPC.
  • Lambda Functions: AWS "serverless" compute components. You upload the code to AWS, which deploys, runs, and scales it based on given trigger events as needed.
  • Scraper Lambda: Does the actual scraping. Runs in the public subnet to be able to access the internet. Inserts the scraped data into S3 as a single file object once per day (or defined interval).
  • Timestamp Lambda: Reads the timestamps of latest scraped comments per server and chat channel, so Scraper Lambda knows what to scrape.
  • DB Insert Lambda: Reads the scraper results from S3, inserts them into the RDS.
  • S3 chat logs: S3 bucket to store the scraped chat logs. As CSV files (objects in S3 terms).

In the above architecture I have the Scraper Lambda outside the VPC, and the other two Lambdas inside the VPC. A Lambda inside the VPC can be configured to have access to the resources within the VPC, such as the RDS database. But if I want to access the internet from an in-VPC Lambda, I need to add a NAT-Gateway. A Lambda outside the VPC, such as the Scraper Lambda here, has access to the internet and needs no specific configuration for that. But being outside the VPC, it does not have access to the in-VPC RDS, so it needs to communicate with the in-VPC Lambda functions for that.
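To make the data flow concrete, below is a minimal sketch of what the DB Insert Lambda could look like. This is an assumption-laden illustration, not my exact implementation: the CSV column layout, bucket name, table schema, and environment variables are all hypothetical, and it assumes the pymysql client library is packaged with the Lambda.

```python
import csv
import io
import os

import boto3
import pymysql  # assumed to be bundled into the Lambda deployment package

s3 = boto3.client("s3")


def lambda_handler(event, context):
    # Hypothetical bucket and object key; in practice these would come from
    # the environment and the event sent by the Scraper Lambda.
    bucket = os.environ.get("LOG_BUCKET", "my-chat-logs")
    key = event.get("key", "logs1/2019-01-01.csv")

    # Read the scraped CSV file from S3.
    obj = s3.get_object(Bucket=bucket, Key=key)
    rows = csv.reader(io.StringIO(obj["Body"].read().decode("utf-8")))

    # Connect to the in-VPC RDS MariaDB instance (credentials via env vars).
    conn = pymysql.connect(
        host=os.environ["DB_HOST"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASS"],
        database=os.environ["DB_NAME"],
    )
    try:
        with conn.cursor() as cur:
            for server, channel, timestamp, message in rows:
                cur.execute(
                    "INSERT INTO messages (server, channel, ts, message) "
                    "VALUES (%s, %s, %s, %s)",
                    (server, channel, timestamp, message),
                )
        conn.commit()
    finally:
        conn.close()
    return {"inserted_from": key}
```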

The dashed arrows simply show possible communications. The private subnets have no route to the internet but can communicate with the other subnets. This can be further constrained by various security configurations that I look at later.

NAT-Gateway

Another option would be to use a NAT-GW (NAT-Gateway) to put the Scraper Lambda also inside the VPC, as illustrated by this architecture:

NAT-GW architecture

A NAT-GW provides access to the internet from within a private subnet by using Network Address Translation (NAT). It routes traffic from private subnets/private network interfaces through the Internet Gateway (via a public subnet). It does not provide a way to access the private subnet from the outside, but that would not be required here. This is illustrated by the internet connection arrows in the figure, where the private subnets would pass through the NAT gateway, which would pass the traffic out through the internet gateway. There is no arrow in the other direction, as there is no way to expose a connection interface from the private subnets to the internet in this setup.

A NAT-GW here could both simplify and complicate things. With a NAT-GW, I could combine the Timestamp Lambda with the Scraper Lambda, and just have the Scraper Lambda read the timestamps directly from the RDS itself. This is illustrated in the architecture diagram above.

In detail, there are two ways a Lambda can be invoked by another Lambda: synchronously and asynchronously. In a synchronous invocation, the calling Lambda waits for the result of the call before proceeding. An asynchronous invocation just starts the other Lambda in parallel, with no further connection between the two. The Scraper->Timestamp call is synchronous, as the Scraper requires the timestamp information to proceed. This is the only use for the Timestamp Lambda in this architecture, so, if possible, the two could be combined.
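As a concrete example, with boto3 the difference between the two invocation types comes down to the InvocationType parameter. A minimal sketch, with hypothetical function names:

```python
import json

import boto3

lambda_client = boto3.client("lambda")

# Synchronous call: wait for the Timestamp Lambda's result before scraping.
# "timestamp-lambda" is a hypothetical function name.
response = lambda_client.invoke(
    FunctionName="timestamp-lambda",
    InvocationType="RequestResponse",  # synchronous
    Payload=json.dumps({"server": "my-test-server"}),
)
timestamps = json.loads(response["Payload"].read())

# Asynchronous call: fire-and-forget the DB Insert Lambda after scraping.
lambda_client.invoke(
    FunctionName="db-insert-lambda",
    InvocationType="Event",  # asynchronous
    Payload=json.dumps({"key": "logs1/2019-01-01.csv"}),
)
```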

In this option, I would still keep the DB Insert Lambda separate, as it can run asynchronously on its own, reading the latest data from S3 without any direct link to anything else. In this way, I find the use of Lambdas can also help keep the independent functions separate. For me, this commonly leads to better software design.

However, a NAT-GW is a billable service; it costs money and is not included in the free tier. OK, it starts from about 1 euro per day, depending on the bandwidth used. For a real company use case this would likely be a rather negligible cost. But for poor me and my little experiments.. And the current architecture lets me try some different configurations, so it is sort of a win-win.

Service Endpoints

The following two figures illustrate the difference in data transfer when using the S3 endpoint vs. not using it:

With S3 endpoint:

with S3 endpoint

Without the S3 endpoint:

without S3 endpoint

So how do these work? As far as I understand, both interface and gateway endpoints are based on routing and DNS tricks in the associated subnets. Again, getting into further details gets a bit complicated. For a gateway endpoint, such as the S3 endpoint, the endpoint must be added to the subnet route table to work. But what happens if you do not add it, but do have an internet connection? My guess is that it will still be possible to connect to S3, but you will be routed through the internet. How AWS handles the DNS requests internally, and whether you have visibility into the actual routes taken in real time during operation? I don't know.

In any case, as long as you use the DNS-style names to access the services, the AWS infrastructure should do the optimal routing via endpoints if they are available. For interface endpoints, the documents mention something called Private DNS, which seems to do a similar thing, except it does not seem to use route table mappings like gateway endpoints do. I guess the approach for making use of endpoints when possible is to use the service DNS-style names, and to consistently review all route tables and other configs. As this seems like a possibly common and general problem, perhaps there are some nice tools for this, but I don't know of any.
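As a hedged sketch of what this setup could look like programmatically, a gateway endpoint for S3 is created for a VPC and attached to the route tables of the subnets that should reach S3 directly. The VPC and route table IDs here are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Create a gateway endpoint for S3 and attach it to the route tables of the
# subnets that should reach S3 without going through the internet gateway.
# The VPC and route table IDs are hypothetical placeholders.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.eu-central-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```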

It seems to me it makes a lot more sense to use such endpoint services to connect directly to the AWS services, since we are already running within AWS. In fact, it seems strange that the communication would otherwise (and I guess before S3 endpoints existed, this was the only option) by default take a detour through the internet.

Use of endpoints seems much more effective in terms of performance and bandwidth use, but also in terms of cost. Traffic routed through the internet gets billed separately in AWS, whereas gateway endpoint traffic stays within AWS and is thus not separately billed. Meaning, in my understanding, that the S3 endpoint is "free". So why would you ever not use it? No idea.

This is just the S3 endpoint. AWS has similar endpoints for most (if not all of) its services. The S3 endpoint is a "gateway endpoint", and the other service that currently supports this type is DynamoDB. Other services have what is called an "interface endpoint", which seems to be part of something called PrivateLink. Whatever that means.. These cost money, both for hourly use and for bandwidth.

With a quick internet search, I found the Cloudonaut page to be a bit clearer on the pricing. But I guess you never know with 3rd-party sites whether they are up to date with the latest changes. It would be nice if Amazon provided some nice and simple way to see pricing for all of this; right now I find it a bit confusing to figure out.

Lambda Triggers

The AWS Lambda functions can be triggered from multiple sources. I have used the following triggers here:

  • Scheduled time trigger from CloudWatch. Triggers the Scraper Lambda once a day (a setup sketch follows this list).
  • Lambda triggering another Lambda synchronously. The Scraper Lambda invokes the Timestamp Lambda to define which days to scrape (since the previous timestamp). Synchronous simply means waiting for the result before progressing.
  • Lambda triggering another Lambda asynchronously. Once the Scraper Lambda finishes its scraping task, it invokes the DB Insert Lambda to check S3 for new data and insert it into the RDS DB.
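Here is a rough sketch of how the daily schedule trigger could be wired up with boto3 (CloudWatch Events). The function name, ARNs, and rule name are hypothetical:

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Hypothetical function ARN for the Scraper Lambda.
scraper_arn = "arn:aws:lambda:eu-central-1:123456789012:function:scraper-lambda"

# Create a rule that fires once a day.
rule = events.put_rule(
    Name="daily-scrape",
    ScheduleExpression="rate(1 day)",
    State="ENABLED",
)

# Allow CloudWatch Events to invoke the Lambda.
lambda_client.add_permission(
    FunctionName="scraper-lambda",
    StatementId="allow-daily-scrape",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)

# Point the rule at the Scraper Lambda.
events.put_targets(
    Rule="daily-scrape",
    Targets=[{"Id": "scraper", "Arn": scraper_arn}],
)
```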

It seems a bit challenging to find a concrete list of all the possible Lambda triggers. The best way I found is to start creating a Lambda, hit the "triggers" button to add a new trigger for the new Lambda, and then just scroll the list of options. Some main examples I noted:

  • API Gateway (API-GW) events.
  • AWS IoT: AWS Button events and custom events (whatever that means..). Never tried the AWS IoT stuff.
  • Application load balancer events
  • Cloudwatch logs, events when new logs are received to a configured log group
  • CodeCommit: AWS provides some form of version control system support. This triggers events from Git actions (e.g., push, create branch, …).
  • Cognito: This is the AWS authentication service. This is a "sync" trigger, so I guess it gets triggered when authentication data is synced.
  • DynamoDB: DynamoDB is the AWS NoSQL database. Events can be triggered from database updates, in batches if desired. Again, I have not used it, just my interpretation of the documentation.
  • Kinesis: Kinesis is the AWS service for processing real-time timeseries type data. This seems to be able to trigger on the data stream updates, and data consumer updates.
  • S3: Events on create (includes update I guess) and delete of objects, events on restoring data from Glacier.
  • RRS object loss. RRS is reduced redundancy storage, with a higher chance of losing something than standard S3.
  • SNS: Triggers on events in the Simple Notification Service (SNS).
  • SQS: Updates on an event stream in simple queue service (SQS). Can also be batched.

That’s all interesting.

Security Groups and Service Policies

To make all my service instances connect, I need to define my VPC network, service, and security configurations, etc. A security group is a way to configure security attributes (AWS describes it as a "virtual firewall"). Up to five security groups can be assigned to an instance. Each security group then defines a set of rules for the traffic it allows in and out.

The following figure illustrates the security groups in this (my) experiment:

security groups

There are 3 security groups here:

  • SG1: The RDS group, allowing incoming connections to port 3306 from SG2. 3306 is the standard MariaDB port.
  • SG2: The RDS client group. This group can query the RDS Maria DB using SQL. Any regular MariaDB client works.
  • SG3: Public SSH access group. Instances in this group allow connections to port 22 from the internet.

This nicely illustrates the concept of the "group" in a security group. The three instances in SG2 all share the same rules, and are allowed to connect to the RDS instance. Or, more specifically, to instances in the SG1 group. If I add more instances, or if the IP addresses of these three instances change, the rules will still match as long as the instances are in the security group.

Similarly, if I add more RDS instances, I can put them in the same SG1 group, and they will share the same properties. The SG2 instances can connect to them. Finally, if I want to add more instances accessible over SSH, I just set them up and add them to the security group SG3. As shown by the EC2 instance in the figure above, a single instance can also be a member of multiple security groups at once.
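A hedged sketch of how the SG1/SG2 relationship could be defined with boto3. Referencing the source security group, rather than an IP range, is what makes the rule follow group membership; the VPC ID and group names are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # hypothetical VPC ID

# SG2: the client group, instances allowed to talk to the database.
sg2 = ec2.create_security_group(
    GroupName="rds-clients", Description="RDS client group (SG2)", VpcId=vpc_id
)

# SG1: the RDS group, allowing MariaDB connections only from members of SG2.
sg1 = ec2.create_security_group(
    GroupName="rds", Description="RDS group (SG1)", VpcId=vpc_id
)
ec2.authorize_security_group_ingress(
    GroupId=sg1["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        # The source is a security group, not an IP range, so any instance
        # added to SG2 later is automatically allowed in.
        "UserIdGroupPairs": [{"GroupId": sg2["GroupId"]}],
    }],
)
```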

This seems like a nice and flexible way to manage such connections and permissions in an "elastic" cloud, which I guess is the point.

Lambda Policies

There are also two instances in my architecture figure that are not in any security group, as they are not in a VPC. To belong to a security group, an instance has to be inside a VPC. The Scraper Lambda and the S3 Chat Logs bucket are outside the VPC. The connection from inside the VPC to S3 I already described earlier in this post (S3 endpoints). For the Scraper Lambda, access is defined with Lambda policies.

In fact, all Lambdas have such access policies defined for the services they need to access, including the ones inside the VPC. The in-VPC ones just need the associated VPC mechanisms (security groups) configured as well, since they also fall within the scope of the VPC. There are some default policies, such as execution permissions for the Lambda itself, but also policies for the resources it needs to access.

These are the policies I used for each of the Lambdas here:

  • Scraper Lambda:

    • Lambda Invoke: Allows this Lambda to invoke the Timestamp and DB Insert Lambdas.
    • CloudWatch Logs: Every Lambda writes its logs to AWS CloudWatch.
    • S3 put objects: Allows this Lambda to write the scraping results to the S3 Chat Logs bucket.
    • S3 list objects: Just to check the bucket so it does not overwrite existing logs if somehow run multiple times per day.
  • DB Insert Lambda:

    • CloudWatch Logs: Logging as above
    • S3 List and Get Objects: For reading new log files created by the Scraper Lambda.
    • EC2 ENI interface create, list, delete: In-VPC Lambdas work by creating Elastic Network Interfaces within the VPC so they can communicate with other in-VPC (and external) instances. This enables that.
  • Timestamp Lambda:

    • CloudWatch Logs: Logging as above
    • EC2 ENI interfaces for in-VPC Lambda, as above.

As these show, the permissions can be defined at a very granular level or at a higher level. For example, full access to S3 and any bucket, or read access to specific files in a specific bucket. Or anything in between.
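As a hedged example of the granular end of that scale, the policy attached to the Scraper Lambda's role might look something like the following. The bucket name, role name, and account ID are placeholders, and the exact statements would depend on the real setup:

```python
import json

import boto3

iam = boto3.client("iam")

# Hypothetical policy: scoped S3 access, standard logging, and Lambda invoke.
scraper_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Write and list scrape results, but only in this one bucket.
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::my-chat-logs",
                "arn:aws:s3:::my-chat-logs/*",
            ],
        },
        {   # Standard CloudWatch Logs permissions for Lambda logging.
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
            ],
            "Resource": "*",
        },
        {   # Allow invoking the Timestamp and DB Insert Lambdas.
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": "arn:aws:lambda:eu-central-1:123456789012:function:*",
        },
    ],
}

iam.put_role_policy(
    RoleName="scraper-lambda-role",
    PolicyName="scraper-lambda-policy",
    PolicyDocument=json.dumps(scraper_policy),
)
```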

Backups and Data Retention

One thing with databases is always backups. With AWS RDS, there are a few options. One is the standard backups offered by Amazon. Your RDS gets snapshotted to S3 daily. How this actually works sounds very simple but gets a bit complicated if you really try to understand it. What doesn’t…

So the documentation says "The first snapshot of a DB instance contains the data for the full DB instance" and further "Subsequent snapshots of the same DB instance are incremental, which means that only the data that has changed after your most recent snapshot is saved.".

Sounds great, doesn't it? But think about it for a minute. You can set your backup retention period to between 0 and 35 days (currently anyway), the default being 7. Now imagine you live all the way to the 8th day, when your first-day backup expires. Consider that only day 0 was a full backup snapshot, and day 0 expires. Does everything else then build incremental snapshots on top of a missing baseline?

Luckily, as usual, someone thought about this already and StackOverflow comes to the rescue. An RDS "instance" as referred to in the AWS documentation must be referencing the EC2 style VM instance that hosts the RDS. So the backup is not just data but the whole instance. And the instance is stored on Elastic Block Storage (EBS). I interpret this to mean you are not really backing up the database but the EBS volume where the whole RDS system is on. And then you can go read up on how AWS manages the EBS backups. Which mostly confirms the StackOverflow post.

Regarding costs, if it is an "instance snapshot", does the whole instance size count towards the cost? I guess not, as you get the same amount of "free" backup storage as you allocate for your RDS. In the free tier you get up to 20GB of RDS storage included, and by definition also up to 20GB of free RDS backup snapshot space; the free backup size always matches the storage size. If the whole instance were included in the calculation, the operating system and database software would likely take many GB already. But what do I know. As for where the snapshots are stored, can I go have a look? Again, StackOverflow to the rescue. They are in an S3 bucket, but that bucket is not in my control and I have no way to see into it.

In any case, you get your RDS size's worth of backup space included, whatever "size" here means (instance vs. data). And if you use the default 7-day period, it means you will have to fit all 7 incremental snapshots into that space if you do not wish to pay extra. The snapshots are stored as "blocks", and when an old (or the origin) snapshot is deleted, only the blocks not referenced by any remaining snapshot are removed. So expiring day 0 does not cause the incremental delta snapshots to break; it just deletes the blocks no longer referenced after the expiry.

Still, there is more. The AWS documentation on backup restoration mentions you can do point-in-time restoration, typically up to 5 minutes before the current time. If automated snapshots are only taken once per day, what is this based on? It is because AWS also uploads the RDS transaction logs to S3 every 5 minutes.
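For reference, such a point-in-time restore can also be requested via the API. A minimal hedged sketch with boto3; the instance identifiers are placeholders, and the restore always creates a new DB instance rather than overwriting the old one:

```python
import boto3

rds = boto3.client("rds")

# Restore a new instance from the automated backups plus transaction logs,
# as close to "now" as the latest restorable time allows.
# The instance identifiers are hypothetical placeholders.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="chatlog-db",
    TargetDBInstanceIdentifier="chatlog-db-restored",
    UseLatestRestorableTime=True,
)
```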

Beyond regular backups, there is the multi-AZ deployment of RDS, and there are read replicas. Both of those links contain a nice comparison table between the two. Multi-AZ is mainly for disaster recovery (DR), doing automated failover to another availability zone in case of issues. A read replica allows scaling reads via multiple asynchronously synced copies that can be deployed across regions and availability zones. A read replica can also be manually promoted to master status. It all gets complicated, deeper than I wanted to go here. I guess you need an expert on all this to really understand what gets copied where and when, how robust the failover is, and how safe the data is from loss in all cases.

After this hundred lines of digressing from my experiment, what did I actually use? Well, I used a single-AZ deployment of my RDS and disabled backups completely. Nuts, eh? Not really, considering that my architecture is built to collect all the scraped data into S3 buckets, from where it is inserted into the RDS by another Lambda function. So all the data is actually already backed up in S3, in the form of the scraped files. If needed, I can just re-run the import from the beginning over all the scraped files to rebuild the RDS.

Given how the RDS backups and my implementation both depend so heavily on S3, it seems very relevant to understand the storage, replication, reliability, etc. of S3.

S3 Reliability and Lifecycle Costs

The expected durability for S3 objects is given as 99.999999999%. AWS claims about 1 in 10 million objects would be lost over 10,000 years. Not sure how you might test this. However, this durability is defined to hold for all the S3 tiers, which are:

  • standard: low latency and high throughput, no minimum storage time or size, survives AZ destruction, 99.99% availability
  • standard infrequent access (IA): Same throughput and latency as standard, lower storage cost but higher retrieval cost. 99.9% availability target, 30 days minimum billing.
  • one zone IA: same as standard IA, but only in one AZ, slightly cheaper storage, same retrieval cost, 99.5% availability, 30 days minimum billing
  • intelligent tiering: for data with changing access patterns. Automatically moves data to IA tier when not accessed for 30 days, and back to standard when accessed. 99.9% availability target, 30 days minimum billing.
  • glacier: very low storage price, higher retrieval prices (tiered by how fast you want it), 99.99% availability, 90 days minimum billing
  • glacier deep archive: like glacier but slower and cheaper, 99.9% availability target, 180 days minimum.

Naturally, some of these tiers make more sense at different points in an object's lifecycle. So AWS allows you to define automated transitions between them, called Lifecycle Rules. The AWS examples are one good way to understand them.

The free tier does not seem to include anything beyond some use of the S3 standard tier, but just to try this out in a bit more realistic fashion, I defined a simple lifecycle pipeline for my log files, as illustrated here:

s3 lifecycle

I did not actually implement the final Glacier transition, as it has such a long minimum storage time and I want to be able to terminate my experiment on shorter notice.

It is also possible to define prefix filters and tag filters to select the objects to which the defined rules apply. A prefix filter can be something like "logs1/" to match all objects placed under the "logs1" folder. S3 does not actually have a real hierarchical folder structure, but naming files/objects like this makes it treat them as "virtual folders". So I defined such a prefix filter, just because it is nice to experiment and learn. Besides filename-based prefix filters, one can also define tag filters in the form of key/value pairs.

So here are my S3 lifecycle rules in this case, with the reasoning for them (a setup sketch follows the list):

  • Transition from standard to standard-IA after 30 days. This lets me play with the data for a few days if the import has issues. After that it should be in the RDS, and I just keep it around in a cheaper tier "just in case". Well, that, and 30 days was the minimum AWS allowed me to set.
  • Filter by the prefix "logs1/", as that was the path I used. I used the path simply to give me some granular control over time, as it allows simple time-based filtering in the API queries if I were to use "logs2/" after a year or so. I would need to update this transition rule then, or simply set it to the "log" prefix at that time.
  • I did not define a data expiration time. The idea would be to use this type of data for training machine-learning systems, where you would want to maximize the data and enable experiments later. Not that I expect to build such real systems on all this, but in a real scenario I think this would also make sense, so I am trying to stay close to that.
  • Transition to Glacier maybe after 2 months? Just a possible thought. No idea. But some discussions online led me to understand there is a minimum time interval before one can "Glacier" objects, similar to the 30-day minimum I hit on the S3 standard -> S3 IA transition. If it is also 30 days for Glacier, that could make it 30+30 = about 2 months.
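A hedged sketch of how the rules above could be defined with boto3. The bucket name is a placeholder, and the Glacier transition is left commented out since I did not enable it:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-chat-logs",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "logs1-to-ia",
            "Filter": {"Prefix": "logs1/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                # {"Days": 60, "StorageClass": "GLACIER"},  # not enabled here
            ],
            # No Expiration block: the data is never automatically deleted.
        }]
    },
)
```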

Random Thoughts

Initially I thought security groups were a bit weird and unnecessarily complex, compared to fiddling with your private computers and networks. But with all this, I realized the "elastic" nature of AWS actually fits them quite well. They allow the security definitions to live with the dynamic cloud via the group associations.

Related to this, Network Access Control Lists (NACL) would allow additional traffic rules at the subnet level, including explicit deny rules. This is still a bit of a fuzzy area for me, as I did not need to go into such details in my limited experiment. But in skilled hands it seems quite useful. Maybe more for a security/network specialist.

Lambda functions that are not explicitly associated with a VPC are still associated with a VPC; it is simply some kind of "special" AWS-controlled VPC. This makes me wonder a bit about what all the properties of this "special" VPC are, but I guess I just have to "trust AWS" again.

Looking at the architecture I used, the only thing in the public subnet is the EC2 instance I used to run my manual SQL queries against my RDS instance. Could I just drop the EC2 instance and as a result delete the whole public subnet and the Internet Gateway? The "natural" way that comes to mind is having access through an internet-connected Lambda function. An internet search for "AWS lambda shell" gives some interesting hits.

Someone used a Lambda shell to get access into the Lambda runtime environment, and download the AWS runtime code. Another similar experiment is hosted on Github, providing full Cloudformation templates to make it run as well. Finally, someone set up a Lambda shell in a browser, providing a bounty for anyone who manages to hack their infrastructure. Interesting..

On a related note, a NAT-GW always requires an IGW. So if I wanted a private subnet to have access to the internet, I would still need a public subnet, even if it was otherwise empty besides the NAT gateway. And while the NAT-GW is advertised as auto-scaling and all that, I would still need a NAT-GW in each AZ used. But just one per AZ, of course.

Something I got very confused about is the EC2 instance storage types. There is the instance store, which is "ephemeral", meaning it disappears when the VM is stopped or terminated. I always thought of this as a form of local hard disk, similar to how my laptop has a hard disk (or SSD…) inside. And the AWS docs actually describe it similarly: "this storage is located on disks that are physically attached to the host computer". Not too complicated?

But what is the other main option, Elastic Block Store (EBS)? The terms "block" and "storage" or "disk" bring to my mind the traditional definition of hard disks, with sectors hosting blocks of data. But it makes no sense, as EBS is described as virtual, replicated, highly available, etc.. A basic internet search also brings up similar, rather ambiguous definitions.

Some searching later, I concluded this refers to networked disk terminology, where block storage has its own definition: racks of disks connected via Storage Area Network (SAN) technologies. As AWS advertises this with all kinds of fancy terminology, I guess it must be quite highly optimized; otherwise networked disks spreading data across physical hosts would seem slow to me. Probably they just take some of the well-refined products in this space and turn them into an integrated part of AWS, as an "Elastic"-branded service. But such is progress; it's not the 80s disks as it used to be, Teemu. This definition also makes sense considering what EBS is supposed to be. While the details are somewhat lacking, a Quora post provided some interesting insights.

I used RDS in this experiment to store what is essentially natural-language text data. This is not necessarily the best option for this type of data; something like Elasticsearch and its related AWS-hosted service would be a better match. I simply picked RDS here to give myself a chance to try it out as a service, and because I don't expect to store gigabytes of text in my experiment or run complex searches over it. SQL is quite a simple and well tried-out language, so it was easy enough to play with it and see that everything was working. However, for a more real use case I would transition to Elasticsearch.

Despite all the fancy stuff, "Elastic" services and all, some things on the AWS platform are still surprisingly rigid. I could not find a way to rename or change the descriptions of Lambda functions or Security Groups. It seems I am not the only one who has wondered about either of these over time. No wonder, as it seems like rather basic functionality.

Overall, from my experience setting all this up: I used to think DevOps meant DEVelopment and OPerationS working closely together. Looking at this, as well as all the trendy "infrastructure as code" lately, I am leaning more towards DevOps meaning that the developers do the job of operations in addition to developing the system/service/program running on the infrastructure.. That's just great…

Next I will look into making use of this type of collected data to build some service on AWS. Probably an API Gateway and Lambda based service exposing a machine-learning model trained on the collected training data. In that case, I think I will look into using existing Twitter datasets to avoid having to consider all the aspects of actual data collection and use. But that would be another post, and I hope to also write one on how to set up a dev/test environment for AWS-style services. Later…

The code for this post, including the Lambda functions, is available in my related Github project.
