
[{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/tags/argocd/","section":"Tags","summary":"","title":"Argocd","type":"tags"},{"content":" I recently set up a GitOps workflow for a few Python microservices I\u0026rsquo;ve been working on, and I wanted to share how I did it. The whole thing turned out to be pretty clean once the pieces clicked together, so hopefully this saves you some time if you\u0026rsquo;re going down the same path.\nWhat I Started With # I had three Python applications (let\u0026rsquo;s call them application (a), application (b), and application (c)) that I built as part of a platform project. Each one had its own Dockerfile, and I built and pushed all three Docker images to my self-hosted Nexus registry. So at this point, the container images were sitting in Nexus, ready to be pulled by Kubernetes.\nNext, I created a Helm chart for each application. Nothing fancy (just the standard deployment, service, ingress, and the usual Kubernetes resources). I packaged the charts and pushed them to the Helm repository in Nexus as well. So now Nexus was hosting both my Docker images and my Helm charts.\nThe missing piece was: how do I actually deploy these to my Kubernetes cluster in a repeatable, Git-driven way? That\u0026rsquo;s where ArgoCD and GitOps come in.\nThe Idea Behind the Setup # Instead of running helm install manually or writing CI/CD pipeline steps that talk to Kubernetes directly, I wanted a setup where I just push config to a Git repo and ArgoCD takes care of the rest. The repo wouldn\u0026rsquo;t contain any Helm charts (those are already in Nexus). 
It would only contain ArgoCD resources and the Helm values files for each app and environment.\nHere\u0026rsquo;s the repo structure I ended up with:\nplatform-gitops/ └── argocd/ ├── root.yaml ├── applicationsets/ │ ├── dev-services.yaml # ApplicationSet for dev │ └── prod-services.yaml # ApplicationSet for prod └── values/ ├── application-a/ │ ├── dev.yaml │ └── prod.yaml ├── application-b/ │ ├── dev.yaml │ └── prod.yaml └── application-c/ ├── dev.yaml └── prod.yaml Three layers:\nA single root Application that bootstraps everything. Two ApplicationSets that dynamically generate ArgoCD Applications (one for dev, one for prod). Plain Helm values files, one per app per environment. Let me walk through each piece.\nStep 1 (The Root Application) # This is the only thing you apply manually (CLI or Argo CD UI). Once it\u0026rsquo;s in the cluster, ArgoCD watches the applicationsets/ directory in your Git repo and keeps everything in sync from there.\napiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: platform-gitops namespace: argocd spec: project: default source: repoURL: git@github.com:your-org/platform-gitops.git path: argocd/applicationsets/ targetRevision: main destination: server: https://kubernetes.default.svc namespace: argocd syncPolicy: automated: selfHeal: true prune: false Before this works, ArgoCD needs credentials for both the private Git repository and the private Helm repository. If those repositories are not already registered in ArgoCD, the Applications will be created but won\u0026rsquo;t be able to sync.\nSave this as argocd/root.yaml and apply it once:\nkubectl apply -f argocd/root.yaml I set selfHeal to true so that if anyone manually changes something in the cluster, ArgoCD reverts it (a bit like how an agent-based configuration management system such as Puppet keeps correcting drift when the live system no longer matches the catalog you defined). 
And I intentionally left prune as false on this root app (if the Git source has a hiccup, I don\u0026rsquo;t want ArgoCD to delete all my ApplicationSets in one go).\nStep 2 (ApplicationSets for Dev and Prod) # An ApplicationSet is basically a template that generates multiple ArgoCD Applications. I used the list generator, which means I explicitly list each service and its chart details. It\u0026rsquo;s straightforward and easy to reason about.\nDev ApplicationSet # apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: dev-services namespace: argocd spec: generators: - list: elements: - name: application-a helmRepoURL: https://nexus.example.com/repository/helm-charts/ chart: application-a chartVersion: 0.1.0 namespace: dev - name: application-b helmRepoURL: https://nexus.example.com/repository/helm-charts/ chart: application-b chartVersion: 0.1.0 namespace: dev - name: application-c helmRepoURL: https://nexus.example.com/repository/helm-charts/ chart: application-c chartVersion: 0.1.0 namespace: dev template: metadata: name: \u0026#34;{{name}}-dev\u0026#34; spec: project: default sources: - repoURL: \u0026#34;{{helmRepoURL}}\u0026#34; chart: \u0026#34;{{chart}}\u0026#34; targetRevision: \u0026#34;{{chartVersion}}\u0026#34; helm: valueFiles: - $values/argocd/values/{{name}}/dev.yaml - repoURL: git@github.com:your-org/platform-gitops.git targetRevision: main ref: values destination: server: https://kubernetes.default.svc namespace: \u0026#34;{{namespace}}\u0026#34; syncPolicy: automated: prune: true selfHeal: true syncOptions: - CreateNamespace=true Prod ApplicationSet # The prod version looks almost the same. 
The only differences are the environment name in the generated app names, the namespace, and which values file gets used:\napiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: prod-services namespace: argocd spec: generators: - list: elements: - name: application-a helmRepoURL: https://nexus.example.com/repository/helm-charts/ chart: application-a chartVersion: 0.1.0 namespace: prod - name: application-b helmRepoURL: https://nexus.example.com/repository/helm-charts/ chart: application-b chartVersion: 0.1.0 namespace: prod - name: application-c helmRepoURL: https://nexus.example.com/repository/helm-charts/ chart: application-c chartVersion: 0.1.0 namespace: prod template: metadata: name: \u0026#34;{{name}}-prod\u0026#34; spec: project: default sources: - repoURL: \u0026#34;{{helmRepoURL}}\u0026#34; chart: \u0026#34;{{chart}}\u0026#34; targetRevision: \u0026#34;{{chartVersion}}\u0026#34; helm: valueFiles: - $values/argocd/values/{{name}}/prod.yaml - repoURL: git@github.com:your-org/platform-gitops.git targetRevision: main ref: values destination: server: https://kubernetes.default.svc namespace: \u0026#34;{{namespace}}\u0026#34; syncPolicy: automated: prune: true selfHeal: true syncOptions: - CreateNamespace=true How Multi-Source Apps Work # The key thing to notice is the sources block. Each generated Application has two sources:\nThe first source pulls the Helm chart from Nexus (where I pushed my charts earlier). The second source points to this same Git repo with ref: values, which creates a $values alias. When ArgoCD sees $values/argocd/values/application-a/dev.yaml, it knows to fetch the chart from the Helm repo but overlay it with values from that file in Git. This is what makes the separation work (charts live in Nexus, config lives in Git). Just keep in mind that ArgoCD still needs access to both repositories up front.\nStep 3 (Per-Environment Values Files) # These are just standard Helm values files. 
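One aside on the repository-access requirement noted earlier: besides the CLI or UI, both repositories can be registered declaratively, as labelled Secrets in the argocd namespace. A sketch (the Secret names and credential values here are placeholders, not from the original setup):

```yaml
# Git repo credentials -- ArgoCD picks up any Secret in its namespace
# carrying the argocd.argoproj.io/secret-type: repository label.
apiVersion: v1
kind: Secret
metadata:
  name: platform-gitops-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: git@github.com:your-org/platform-gitops.git
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
    -----END OPENSSH PRIVATE KEY-----
---
# Helm repo credentials for the Nexus chart repository
apiVersion: v1
kind: Secret
metadata:
  name: nexus-helm-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: helm
  name: nexus-helm
  url: https://nexus.example.com/repository/helm-charts/
  username: deploy-bot
  password: changeme
```

Managed this way, the credentials live in the cluster (or in whatever secrets tooling you use) rather than being added by hand through the UI.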
Each app gets a dev.yaml and a prod.yaml with the overrides for that environment. In these examples, I assume the referenced secrets already exist in the cluster and are managed separately.\nDev values for application (a) # replicaCount: 1 imagePullSecrets: - name: registry-credentials ingress: enabled: true className: traefik hosts: - host: dev-app-a.example.com paths: - path: / pathType: Prefix tls: - secretName: dev-app-a-tls hosts: - dev-app-a.example.com Prod values for application (a) # replicaCount: 2 imagePullSecrets: - name: registry-credentials ingress: enabled: true className: traefik hosts: - host: prod-app-a.example.com paths: - path: / pathType: Prefix tls: - secretName: prod-app-a-tls hosts: - prod-app-a.example.com Pretty simple. Prod gets more replicas and a different hostname. You can add whatever else your chart supports here (resource limits, environment variables, autoscaling rules, and so on).\nStep 4 (Adding a New Application) # This is where the whole setup pays off. Say I build a fourth Python service, containerize it, push the image to Nexus, create a Helm chart for it, and push that to Nexus too. To get it deployed through ArgoCD, I only need to:\nAdd a new entry to the list generator in both ApplicationSets: - name: application-d helmRepoURL: https://nexus.example.com/repository/helm-charts/ chart: application-d chartVersion: 0.1.0 namespace: dev Create the values directory and files: mkdir -p argocd/values/application-d Then write dev.yaml and prod.yaml with the appropriate overrides.\nCommit and push. That\u0026rsquo;s it. No clicking around in UIs, no running helm install, no updating CI pipelines. 
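For completeness, the matching entry in the prod ApplicationSet's list generator differs only in its namespace:

```yaml
- name: application-d
  helmRepoURL: https://nexus.example.com/repository/helm-charts/
  chart: application-d
  chartVersion: 0.1.0
  namespace: prod
```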
ArgoCD picks up the change from Git and creates the new Application automatically.\nStep 5 (Applying and Verifying) # Once your repo is ready and you\u0026rsquo;ve applied the root app:\nkubectl apply -f argocd/root.yaml You can check that everything came up:\nkubectl get applicationsets -n argocd kubectl get applications -n argocd You should see six Applications (three per environment), all synced and healthy.\nA Few Practical Notes # Pin chart versions. Don\u0026rsquo;t use wildcards like *. Use a fixed version so you always know what is running, and test it in dev before prod.\nKeep prune off on the root app. If your Git source is broken or temporarily empty, you don\u0026rsquo;t want ArgoCD deleting all your ApplicationSets by mistake.\nTurn on selfHeal. If someone changes something by hand in the cluster, ArgoCD will bring it back to what is in Git.\nThink about how prod should sync. In this example, prod auto-syncs from main to keep things simple. In a stricter setup, you might prefer manual sync, a release branch, or tags.\nUse CreateNamespace. It lets ArgoCD create the dev and prod namespaces for you if they don\u0026rsquo;t exist yet.\nWrapping Up # This setup gave me a simple way to deploy my apps with Git. I build and push the image, publish the Helm chart, update the values or chart version in the GitOps repo, and ArgoCD takes care of the rest.\nOnce the structure is in place, adding a new service is quick.\nI left a few production topics out on purpose. I didn\u0026rsquo;t cover AppProjects, secrets management, or access and SSO in this post. I\u0026rsquo;ll write separate posts about those, including ArgoCD with Keycloak. 
If you want to look into that now, the official ArgoCD Keycloak guide is a good place to start.\n","date":"29 March 2026","externalUrl":null,"permalink":"/posts/argocd-gitops-deployment/","section":"Posts","summary":"Learn how to structure an ArgoCD GitOps repository with ApplicationSets, multi-source Helm applications, and environment separation.","title":"ArgoCD GitOps Deployment in Easy Steps","type":"posts"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/categories/devops/","section":"Categories","summary":"","title":"Devops","type":"categories"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/tags/devops/","section":"Tags","summary":"","title":"Devops","type":"tags"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/tags/gitops/","section":"Tags","summary":"","title":"Gitops","type":"tags"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/tags/helm/","section":"Tags","summary":"","title":"Helm","type":"tags"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/tags/kubernetes/","section":"Tags","summary":"","title":"Kubernetes","type":"tags"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/posts/","section":"Posts","summary":"","title":"Posts","type":"posts"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/","section":"Reliability at Scale","summary":"","title":"Reliability at Scale","type":"page"},{"content":"","date":"29 March 2026","externalUrl":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":"","date":"28 September 2025","externalUrl":null,"permalink":"/tags/automation/","section":"Tags","summary":"","title":"Automation","type":"tags"},{"content":"","date":"28 September 
2025","externalUrl":null,"permalink":"/tags/debian/","section":"Tags","summary":"","title":"Debian","type":"tags"},{"content":"","date":"28 September 2025","externalUrl":null,"permalink":"/tags/modsecurity/","section":"Tags","summary":"","title":"Modsecurity","type":"tags"},{"content":"mtail is Google\u0026rsquo;s excellent log parser that extracts metrics from application logs for monitoring. Instead of writing complex log processing scripts or paying for expensive log analytics tools, mtail lets you define simple patterns that automatically convert log entries into Prometheus‑compatible metrics.\nIn this guide, we\u0026rsquo;ll walk through installing and configuring mtail manually on a Linux system.\nWhat is mtail? # mtail reads log files in real time and applies user‑defined programs to extract metrics. Prometheus can then scrape these metrics. It\u0026rsquo;s particularly useful for:\nConverting web server access logs to request metrics Extracting error rates from application logs Monitoring database performance from log files Creating business metrics from custom application logs Step 1: Create required directories # Set up the directory structure mtail needs:\nsudo mkdir -p /var/lib/mtail sudo mkdir -p /etc/mtail/progs sudo mkdir -p /var/log/mtail sudo mkdir -p /usr/local/bin Step 2: Download and install mtail # Download the latest release from GitHub:\nmkdir /tmp/mtail-download cd /tmp/mtail-download # Download mtail (replace 3.0.3 with latest version) wget https://github.com/google/mtail/releases/download/v3.0.3/mtail_3.0.3_linux_amd64.tar.gz tar -xzf mtail_3.0.3_linux_amd64.tar.gz sudo cp mtail /usr/local/bin/ sudo chmod 755 /usr/local/bin/mtail rm -rf /tmp/mtail-download Step 3: Create your first mtail program # Let\u0026rsquo;s create a simple program to parse nginx access logs. 
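The mtail program below keys off the three-digit HTTP status code near the end of each access-log line. That extraction idea can be sanity-checked offline first with a made-up log line and plain sed (a loose approximation only, not mtail syntax):

```shell
# Made-up combined-format access-log line (hypothetical values)
line='203.0.113.9 - - [10/Oct/2025:13:55:36 +0000] "GET / HTTP/1.1" 404 153 "-" "curl/8.0"'

# Same idea as the mtail pattern: grab the 3-digit code after the quoted request
code=$(printf '%s\n' "$line" | sed -E 's/^[^"]*"[^"]*" ([0-9]{3}) .*/\1/')
echo "$code"
# -> 404
```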
Open /etc/mtail/progs/nginx.mtail with your favorite editor, then add these configurations:\ncounter nginx_status_codes by code /^(\\S+) \\S+ \\S+ \\[[^]]+\\] \u0026#34;\\S+ \\S+ \\S+\u0026#34; (?P\u0026lt;code\u0026gt;\\d{3}) / { nginx_status_codes[$code]++ } Step 4: Create a systemd service # Create the systemd service file at /etc/systemd/system/mtail.service:\n[Unit] Description=mtail log parser Documentation=https://github.com/google/mtail After=network.target [Service] Type=simple # You can run it as the mtail user, but this user must have access to the log files. # User=mtail # Group=mtail ExecStart=/usr/local/bin/mtail \\ -logs /var/log/nginx/access.log \\ -progs /etc/mtail/progs \\ -port 3903 \\ -log_dir /var/log/mtail \\ -emit_prog_label \\ -emit_metric_timestamp \\ -logtostderr Restart=always RestartSec=5 StandardOutput=journal StandardError=journal SyslogIdentifier=mtail # Security settings NoNewPrivileges=true PrivateTmp=true ProtectSystem=strict ProtectHome=true ReadWritePaths=/var/log/mtail [Install] WantedBy=multi-user.target Step 5: Start and enable the service # sudo systemctl daemon-reload sudo systemctl enable mtail sudo systemctl start mtail # Check status sudo systemctl status mtail Step 6: Verify it\u0026rsquo;s working # Check that mtail is running and serving metrics:\n# Check if mtail is listening on port 3903 sudo netstat -tlnp | grep 3903 # Fetch metrics curl -s http://localhost:3903/metrics | grep nginx ","date":"28 September 2025","externalUrl":null,"permalink":"/posts/mtail-turn-logs-into-metrics/","section":"Posts","summary":"","title":"Mtail: Turn Your Logs into Prometheus Metrics","type":"posts"},{"content":"We used to spin up a temporary MySQL container, pass a plain password into it, and extract the generated caching_sha2_password hash to commit into our automation code (e.g., Puppet/Ansible). 
That workflow is brittle, slow, and leaks secrets into shell history.\nNew API and web tool # I\u0026rsquo;ve replaced that process with a simple API that returns a valid MySQL caching_sha2_password hash for a given input. There’s also a minimal single‑page UI to generate hashes quickly without touching MySQL.\nHash Generator Swagger Use these instead of manually running MySQL locally.\n","date":"28 September 2025","externalUrl":null,"permalink":"/projects/mysql-sha2-hash-generator/","section":"Projects","summary":"","title":"Mysql Caching sha2 Hash Generator","type":"projects"},{"content":"","date":"28 September 2025","externalUrl":null,"permalink":"/tags/nginx/","section":"Tags","summary":"","title":"Nginx","type":"tags"},{"content":"An automated build and packaging system for NGINX integrated with ModSecurity v3 and the Core Rule Set (CRS). Supports Ubuntu 20.04/22.04/24.04 and Debian 11/12 with automated packaging and Cloudsmith repository distribution.\nFor detailed installation instructions, configuration examples, and technical documentation, see the GitHub repository.\n","date":"28 September 2025","externalUrl":null,"permalink":"/projects/nginx-modsecurity-packages/","section":"Projects","summary":"Automated build and packaging system for NGINX with ModSecurity v3 integration across multiple Ubuntu and Debian versions.","title":"NGINX ModSecurity Packages","type":"projects"},{"content":"","date":"28 September 2025","externalUrl":null,"permalink":"/tags/packaging/","section":"Tags","summary":"","title":"Packaging","type":"tags"},{"content":"Welcome to my active projects. 
New projects will appear here.\n","date":"28 September 2025","externalUrl":null,"permalink":"/projects/","section":"Projects","summary":"","title":"Projects","type":"projects"},{"content":"","date":"28 September 2025","externalUrl":null,"permalink":"/categories/security/","section":"Categories","summary":"","title":"Security","type":"categories"},{"content":"","date":"28 September 2025","externalUrl":null,"permalink":"/tags/security/","section":"Tags","summary":"","title":"Security","type":"tags"},{"content":"","date":"28 September 2025","externalUrl":null,"permalink":"/tags/ubuntu/","section":"Tags","summary":"","title":"Ubuntu","type":"tags"},{"content":"Install Nginx with ModSecurity on Debian and Ubuntu\nModSecurity is a powerful open-source Web Application Firewall (WAF) that helps protect web applications from various attacks. This guide will show you how to install and configure ModSecurity with Nginx using an Nginx repository for Debian and Ubuntu.\nStep 1: Add Nginx Repository # Run the following commands to add the Nginx repository that contains the ModSecurity module and then install Nginx:\ncurl -1sLf \ \u0026#39;https://dl.cloudsmith.io/public/nginx/modsecurity/setup.deb.sh\u0026#39; \ | sudo -E bash sudo apt update sudo apt install nginx Make sure that you\u0026rsquo;ve installed the right version of nginx by running the nginx -V command. 
in the output, you should be able to see add-module=/usr/src/modsecurity\nnginx -V nginx version: nginx/1.26.3 built by gcc 9.4.0 (Ubuntu 9.4.0-1ubuntu1~20.04.2) built with OpenSSL 1.1.1f 31 Mar 2020 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-http_v3_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt=\u0026#39;-g -O2 -fdebug-prefix-map=/nginx/nginx-latest=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC\u0026#39; --with-ld-opt=\u0026#39;-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie\u0026#39; --add-module=/usr/src/modsecurity Step 2: Configure ModSecurity # The Nginx package includes OWASP configurations, so you only need to add the following settings to your server block to enable ModSecurity:\nmodsecurity on; modsecurity_rules_file /etc/nginx/modsec/main.conf; For example:\nserver { listen 80; server_name localhost; modsecurity on; modsecurity_rules_file /etc/nginx/modsec/main.conf; location / { root /usr/share/nginx/html; index index.html index.htm; } } By default, ModSecurity runs in DetectionOnly mode, so open the ModSecurity configuration with your text editor and set the SecRuleEngine parameter to On to block malicious requests:\nsudo vim /etc/nginx/modsec/modsecurity.conf grep SecRuleEngine /etc/nginx/modsec/modsecurity.conf SecRuleEngine On Step 3: Restart Nginx # To apply the changes, restart or reload Nginx:\nsudo nginx -t sudo systemctl restart nginx Step 4: Test ModSecurity # Try accessing your server with a few blocked requests:\ncurl -v \u0026#34;http://localhost/?q=\u0026lt;script\u0026gt;alert(1)\u0026lt;/script\u0026gt;\u0026#34; # XSS curl -v \u0026#34;http://localhost/?cmd=ls%20-ltr%20/\u0026#34; # Command Injection curl -v \u0026#34;http://localhost/?id=1%27%20OR%20%271%27=%271\u0026#34; # SQL Injection Helpfully, ModSecurity supports JSON logs, so you can read them easily using a simple command or ship them with a log shipper such as Filebeat or Rsyslog to store them in your log server.\ntail -f /var/log/modsec_audit.log | jq . That\u0026rsquo;s it! 
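Because the audit entries are JSON, individual fields can be pulled out with jq directly. A sketch using a trimmed, hypothetical record (real ModSecurity v3 entries carry many more fields, and the exact layout depends on your audit-log settings):

```shell
# Trimmed, hypothetical audit record; real entries are much larger
record='{"transaction":{"client_ip":"203.0.113.9","response":{"http_code":403}}}'

# Extract the response status of the blocked request
printf '%s\n' "$record" | jq '.transaction.response.http_code'
# -> 403
```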
You now have ModSecurity installed and running with Nginx on Debian/Ubuntu and you don\u0026rsquo;t need to spend hours compiling and configuring ModSecurity from scratch!\n","date":"6 March 2025","externalUrl":null,"permalink":"/posts/install-nginx-modsecurity-debian-ubuntu/","section":"Posts","summary":"","title":"Install Nginx with ModSecurity on Debian and Ubuntu","type":"posts"},{"content":"","externalUrl":null,"permalink":"/authors/","section":"Authors","summary":"","title":"Authors","type":"authors"},{"content":"Senior DevOps \u0026amp; Platform Engineer — specializing in automation, cloud infrastructure, and distributed systems.\nGitHub · LinkedIn · Medium\nAbout # I’m a DevOps engineer with more than a decade of experience building and running cloud platforms and distributed systems.\nI enjoy turning messy infrastructure into something automated, reliable, and easy to operate — whether that’s setting up Kubernetes clusters, designing CI/CD pipelines, or building observability into platforms.\nOver the years I’ve worked across AWS, Hetzner, and hybrid environments, using tools like Terraform, Ansible, Docker, and Helm to keep infrastructure consistent and reproducible. My focus has always been on making systems resilient and helping teams move faster with confidence.\nWhat I Work On # Automation \u0026amp; Delivery — I build CI/CD and GitOps systems that keep deployments safe, repeatable, and fast. 
Tools like Terraform, Ansible, and Helm are at the core of how I manage infrastructure.\nData \u0026amp; Observability — I design telemetry pipelines with Prometheus, Elasticsearch/OpenSearch, and Kafka that make incidents easier to detect and resolve.\nCloud \u0026amp; Distributed Systems — I run Kubernetes, NGINX, and HAProxy at scale, focusing on secure, resilient, and performant platforms.\nSkills # Core Tools \u0026amp; Platforms\nKubernetes, AWS, vSphere, Docker, Terraform, Ansible, Helm, NGINX, HAProxy, Consul, Zookeeper\nDatastores\nPostgreSQL, MySQL, Cassandra, Redis, MongoDB, Kafka, RabbitMQ, ActiveMQ, NATS, Aerospike, Riak\nObservability \u0026amp; Security\nPrometheus, Grafana, ELK/EFK, OpenSearch, Kibana, Graylog, Wazuh, Ossec\nOther\nLinux, GitHub Actions, GitLab CI/CD, Jenkins, Apache, KVM, Gluster\nLanguages\nPython, Kotlin, Java, SQL, Lua, Bash\nLanguages # Persian — Native English — Professional Spanish — Basic Contact # Email: milux.zanganeh@gmail.com Location: Madrid, Spain GitHub: @milad-zanganeh LinkedIn: milad-zanganeh Medium: @milad-zanganeh ","externalUrl":null,"permalink":"/profile/","section":"Reliability at Scale","summary":"","title":"Profile","type":"page"},{"content":"","externalUrl":null,"permalink":"/series/","section":"Series","summary":"","title":"Series","type":"series"}]