Build a Redis cache replica set to migrate to #1631
Holding off. Need to fix a few features on Redis.
mzeier left a comment:
Limited context aside:

- Dev cuts over immediately along with stage. Uses `if project.stack == 'prod' else redis_replica_group_primary_endpoint` → maybe intentional.
- Empty `valueFrom: ""` secret placeholders (`SECRET_APP_ADMIN_ALLOW_LIST`, `SECRET_DB_SECRET`, `SECRET_FXA_*`, etc.). Maybe harmless.
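The cutover behavior flagged in the first bullet can be sketched as a small Python helper. The function and parameter names here are assumptions based on the quoted snippet (`project.stack`, `redis_replica_group_primary_endpoint`), not the actual config code:

```python
def select_redis_endpoint(stack: str,
                          legacy_endpoint: str,
                          redis_replica_group_primary_endpoint: str) -> str:
    """Sketch of the quoted conditional: prod keeps the legacy cache
    endpoint until DNS is cut over, while every other stack (dev, stage)
    uses the new replica group's primary endpoint immediately."""
    return (legacy_endpoint if stack == "prod"
            else redis_replica_group_primary_endpoint)
```

Under this reading, dev and stage cutting over immediately is the expected effect of the `== 'prod'` guard, which matches the author's reply below.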
```diff
 .log_level: &VAR_LOG_LEVEL {name: "LOG_LEVEL", value: "ERROR"}
 .log_use_stream: &VAR_LOG_USE_STREAM {name: "LOG_USE_STREAM", value: "True"}
-.oids_exp_grace_period: &VAR_OIDC_EXP_GRACE_PERIOD {name: "OIDC_EXP_GRACE_PERIOD", value: "60"}
+.oidc_exp_grace_period: &VAR_OIDC_EXP_GRACE_PERIOD {name: "OIDC_EXP_GRACE_PERIOD", value: "60"}
```
I think we should also change this to `oidc_exp_grace_period` for stage and prod, for consistency.
Did I miss it? I'll double-check.
@mzeier dev isn't an environment that normally exists, and I tested the cutover procedure there yesterday. The variables are unused because this env isn't part of our normal deployment flows and we don't run fully working services here; it's normally just for testing infra changes. The values are empty because I would otherwise have had to create all of those secrets, which is tedious work that wouldn't have progressed this issue at all, so I opted not to go that far. And it's harmless because those variables aren't referenced in the container definitions.
This PR builds new Redis replica sets, which we will soon migrate the Appointment backend containers to. It also outlines, in comments, what the next steps are here.
Just to create some parity, I fleshed out the upper variable section of the dev config as well based on the recent stage config changes, but I haven't gone out and set all the secrets and updated the task definitions yet. That's some tedious work that's irrelevant to this issue.
I put a condition around DNS in prod so that rolling these changes out will not swap DNS before I'm ready to make that change. However, with this PR, we will get these changes in prod (summary of preview):
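The DNS guard described above can be sketched as a simple flag-gated selection. The flag name and endpoints here are illustrative placeholders, not taken from the PR:

```python
def cache_dns_target(enable_cutover: bool,
                     old_cache_endpoint: str,
                     new_replica_group_endpoint: str) -> str:
    """Sketch of the prod DNS guard: while enable_cutover is False,
    applying this PR leaves prod DNS pointing at the old cache; a later,
    much smaller PR flips the flag to complete the migration."""
    return (new_replica_group_endpoint if enable_cutover
            else old_cache_endpoint)
```

The design intent is that the infrastructure change (new replica sets) and the traffic change (DNS swap) land in separate, independently revertable PRs.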
After approval of this PR, I'll merge and apply these changes to prod. Then I will prep the next (much smaller) PR to swap DNS over. I will force a rollout of the services without changing the image and will verify both the continued functionality of the app and that we see a corresponding shift in load through the CloudWatch metrics for the caches.
Ref: #1611