Welcome to episode 226 of the Cloud Pod podcast – where the forecast is always cloudy! This week Justin, Matt and Ryan chat about all the news and announcements from Google Next, including – surprise surprise – the hot topic of AI, GKE Enterprise, Duet AI, Copilot, CodeWhisperer and more! There’s even some non-Next news thrown into the episode. So whether you’re interested in BART or Bard, we’ve got the news from SF just for you.
Kicking off the general news this week: Meta has released Code Llama, a new large language model designed to assist with coding, taking aim at assistants like GitHub Copilot and Duet AI. The model comes in three sizes – 7B, 13B, and 34B parameters – each trained on hundreds of billions of tokens of code and code-related data.
The 7B and 13B models excel at real-time code completion thanks to their fill-in-the-middle training, while the 34B version is the better choice for fine-tuning and more advanced coding assistance. Training recipes for Code Llama are available in its GitHub repository.
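If you want to kick the tires yourself, here’s a minimal sketch of running the 7B model for completion with Hugging Face transformers – the codellama/CodeLlama-7b-hf model ID is the base variant on the Hub, and the prompt and generation settings are purely illustrative:

```python
# Rough sketch: code completion with the 7B Code Llama model via Hugging Face
# transformers. Adjust the model variant (base, Python, or Instruct) and the
# generation settings to your hardware and use case.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "def fibonacci(n):\n    "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```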
In other developments, OpenTofu (formerly OpenTF) announced its fork of Terraform, following HashiCorp’s decision to adopt the Business Source License (BSL). The team plans to release their version soon, amid mixed reactions from the community. To keep up with their progress, visit their public repository.
Switching gears to AWS, they’ve rolled out a new feature in Amazon Glacier that’s catching attention: it lets users create write-once-read-many (WORM) archive storage, with compliance controls to support strict record-retention requirements.
Exercise caution, though: an ill-considered policy can lock data away and make it undeletable for a very long time. Once a lock policy is put in place it cannot be overwritten or deleted, and Glacier will enforce the rules and safeguard your records exactly as the controls dictate.
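For the mechanically curious, here’s a rough sketch of the vault lock workflow in boto3 – the vault name, retention period, and account ID are placeholders, and that final call is the irreversible one:

```python
# Sketch of the vault lock workflow with boto3: initiate the lock (which
# returns a lock ID and starts a 24-hour test window), then either abort it
# or complete it. Once completed, the policy can never be changed or removed.
import json
import boto3

glacier = boto3.client("glacier")

# Example policy: deny deletes on archives younger than 365 days.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "deny-early-delete",
        "Principal": "*",
        "Effect": "Deny",
        "Action": "glacier:DeleteArchive",
        "Resource": "arn:aws:glacier:us-east-1:123456789012:vaults/compliance-vault",
        "Condition": {"NumericLessThan": {"glacier:ArchiveAgeInDays": "365"}},
    }],
}

resp = glacier.initiate_vault_lock(
    vaultName="compliance-vault",
    policy={"Policy": json.dumps(policy)},
)
lock_id = resp["lockId"]

# Test against the in-progress lock, then make it permanent (irreversible!),
# or call glacier.abort_vault_lock(vaultName="compliance-vault") to back out.
glacier.complete_vault_lock(vaultName="compliance-vault", lockId=lock_id)
```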
Did you know Amazon had a managed Flink service? Neither did we! Alongside other updates, AWS has renamed Amazon Kinesis Data Analytics to the more straightforward Amazon Managed Service for Apache Flink, hopefully making the service easier to discover. By most accounts, the change has gone over well with users.
Also, AWS Compute Optimizer now supports licensing cost optimization for Microsoft SQL Server, making recommendations such as downgrading your SQL Server edition to Standard or moving to bring-your-own-license (BYOL). That said, acting on these recommendations without more context and understanding of your workload doesn’t seem wise.
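If you want to see what it’s actually suggesting before you trust it, here’s a hedged sketch of pulling the recommendations with boto3 – the get_license_recommendations call and the response field names are our reading of the new API, so double-check them against the current SDK docs:

```python
# Sketch: pull Compute Optimizer's SQL Server license recommendations with
# boto3 so you can sanity-check them before acting. The call name and the
# response field names reflect our reading of the new API – verify against
# the current SDK documentation.
import boto3

co = boto3.client("compute-optimizer")

resp = co.get_license_recommendations(maxResults=100)
for rec in resp.get("licenseRecommendations", []):
    # Each entry describes the resource, its detected license configuration,
    # and one or more suggested options (e.g. an edition downgrade or BYOL).
    print(rec.get("resourceArn"), rec.get("licenseRecommendationOptions"))
```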
The Google Cloud Next ’23 event was a whirlwind of AI-themed sessions and announcements. Highlights included the unveiling of Titanium, Google’s hardware offload layer for faster AI and infrastructure processing, and the introduction of Cloud TPU v5e.
The revamped GKE Enterprise – which absorbs much of what was formerly Anthos – now includes features such as multi-cluster “fleets,” enhanced security controls, and hybrid and multi-cloud support, promising a consistent experience across data centers and clouds.
Another significant announcement was the launch of Cross-Cloud Network, a platform designed to improve application connectivity across different clouds, promising reduced latency and better security. It has already helped convince industry giants like Yahoo to migrate backend workloads to Google Cloud.
Further advancements from Google included the expansion of Duet AI, its assistant integrated across Google Cloud services for task automation, and updates to Vertex AI, featuring new models and tooling aimed at improving security and the overall user experience.
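For a taste of what talking to Vertex AI from Python looks like, here’s a minimal sketch using the google-cloud-aiplatform SDK – the project, region, and text-bison model name are placeholders for whatever is enabled in your own project:

```python
# Minimal sketch of calling a text model through the Vertex AI Python SDK.
# Project, region, and model name are placeholders; the models available to
# you depend on what's enabled in your project.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-project", location="us-central1")

model = TextGenerationModel.from_pretrained("text-bison")
response = model.predict(
    "Summarize the key announcements from Google Cloud Next in two sentences.",
    max_output_tokens=128,
    temperature=0.2,
)
print(response.text)
```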
Azure has a couple of shiny updates, including Trusted Launch now being the default for VMs deployed via the Azure portal. This is a significant stride in securing Azure VMs, letting administrators deploy virtual machines with verified, signed bootloaders, OS kernels, and boot policies. While we were left scratching our heads a bit over why this took so long to materialize, the consensus is “better late than never”.
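If you’re building VMs from code rather than the portal, the Trusted Launch settings boil down to a small security profile – here’s a sketch using azure-mgmt-compute, with the rest of the VM definition omitted:

```python
# Sketch: the security settings Trusted Launch adds to a VM definition, using
# azure-mgmt-compute models (the import path can vary between SDK versions).
# The rest of the VM spec (image, size, NIC) is omitted, and Trusted Launch
# also requires a Gen2-capable image SKU.
from azure.mgmt.compute.models import SecurityProfile, UefiSettings

trusted_launch = SecurityProfile(
    security_type="TrustedLaunch",
    uefi_settings=UefiSettings(
        secure_boot_enabled=True,  # only boot signed, verified bootloaders
        v_tpm_enabled=True,        # virtual TPM for measured boot / attestation
    ),
)

# Attach this as security_profile on the VirtualMachine parameters passed to
# ComputeManagementClient.virtual_machines.begin_create_or_update(...).
```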
Moving on, we explored the launch of the “Jobs” feature in Azure Container Apps, which was previewed at Build and is now generally available.
Jobs support three kinds of triggers – manual, scheduled, and event-driven – which opens up use cases like running a one-time containerized data migration or orchestrating recurring batch work on a schedule. It’s particularly handy for CI/CD scenarios, integrating easily with Azure Pipelines and GitHub Actions runners; there’s a rough sketch of a scheduled job below.
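The sketch mirrors the ARM/REST shape of a Microsoft.App/jobs resource – the environment ID, image, and names are all placeholders, so treat it as a starting point rather than a finished definition:

```python
# Sketch: the shape of a nightly scheduled Container Apps job, mirroring the
# ARM/REST resource for Microsoft.App/jobs. Submit it via an ARM/Bicep
# deployment or the azure-mgmt-appcontainers SDK.
import json

nightly_job = {
    "location": "eastus",
    "properties": {
        "environmentId": "<container-apps-environment-resource-id>",
        "configuration": {
            "triggerType": "Schedule",        # "Manual" and "Event" are the other options
            "replicaTimeout": 1800,           # seconds before a replica is considered failed
            "scheduleTriggerConfig": {
                "cronExpression": "0 2 * * *",  # run at 02:00 UTC every day
                "parallelism": 1,
                "replicaCompletionCount": 1,
            },
        },
        "template": {
            "containers": [
                {"name": "nightly-batch", "image": "myregistry.azurecr.io/nightly-batch:latest"}
            ]
        },
    },
}

print(json.dumps(nightly_job, indent=2))
```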
And that is the week in the cloud! Check out our recent episode at the link below.