initial commit

Arnaud Delcasse 2024-11-04 01:22:29 +01:00
commit 23c38fe67b
44 changed files with 1307 additions and 0 deletions

3
.gitignore vendored Normal file

@ -0,0 +1,3 @@
src/
build/
.env

77
README.md Normal file

@ -0,0 +1,77 @@
# COOPGO Devspace environment

Local dev environment for the COOPGO Technical Platform, using [Tilt](https://tilt.dev) and a local Kubernetes cluster (such as [Rancher Desktop](https://rancherdesktop.io/), K3D, ...).
Tested with Rancher Desktop.

## Quickstart

1. Install Rancher Desktop, K3D, or any other local Kubernetes distribution, and make your Kubernetes context point to this cluster.
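For example, with Rancher Desktop:
```
kubectl config use-context rancher-desktop
```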
2. [Install Tilt](https://docs.tilt.dev/install.html)
3. Create a `.env` file and configure your environment.
For example, to set the source code directories (see the default values in the Tiltfile):
```
PARCOURSMOB_DIR="../../coopgo-apps/parcoursmob"
MOBILITY_ACCOUNTS_DIR="../../coopgo-platform/mobility-accounts"
AGENDA_DIR="../../coopgo-platform/agenda"
FLEETS_DIR="../../coopgo-platform/fleets"
GROUPS_MANAGEMENT_DIR="../../coopgo-platform/groups-management"
```
To avoid accidentally developing on a cluster other than the one dedicated to dev, only a whitelisted context is allowed. By default, the allowed context is `rancher-desktop`. To change this, set `ALLOWED_K8S_CONTEXT` to the correct context name in `.env`:
```
ALLOWED_K8S_CONTEXT="k3d-devcluster"
```
You might also want to set the default container images used while a local source code folder doesn't exist:
```
PARCOURSMOB_IMAGE=my.own.registry/coopgo-platform/parcoursmob:v1.2
MOBILITY_ACCOUNTS_IMAGE=my.own.registry/coopgo-platform/mobility-accounts
...
```
4. Make `tilt-dev.svc.cluster.local`, `parcoursmob.tilt-dev.svc.cluster.local` and `mobility-accounts.tilt-dev.svc.cluster.local` resolve to `127.0.0.1` in your hosts config (`/etc/hosts` on Linux).
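For example, on Linux you can append this line to `/etc/hosts`:
```
127.0.0.1 tilt-dev.svc.cluster.local parcoursmob.tilt-dev.svc.cluster.local mobility-accounts.tilt-dev.svc.cluster.local
```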
5. Run `tilt up`
6. After some time (you can watch the services load, or fail if there is an error, in the Tilt UI), PARCOURSMOB should be available at http://parcoursmob.tilt-dev.svc.cluster.local
For this to work on Linux, make sure port 80 is unprivileged; this is described in the Rancher Desktop documentation. If you're using something else to run Kubernetes, make sure you have the right port forwarding in place and an Ingress controller on your cluster targeting port 80 on localhost.
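On most Linux distributions, allowing unprivileged processes to bind port 80 looks like the following (check the Rancher Desktop documentation for the recommended, persistent way to set this):
```
sudo sysctl -w net.ipv4.ip_unprivileged_port_start=80
```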
Once you can access the PARCOURSMOB UI, you can log in with the initially created user:
- username: admin@parcoursmob.fr
- password: parcoursmob
Now you're ready to code!

## What it does

Sets up PARCOURSMOB along with the following dependencies:
- COOPGO microservices:
  - Mobility Accounts
  - Groups Management
  - Fleets
  - Agenda
  - Carpool service (coming soon)
- Other software dependencies:
  - MongoDB
  - etcd for caching values
  - Minio to store files
  - smtp4dev to test outgoing emails
Tilt handles live-reloading the Kubernetes pods when something changes in the code. You just have to check out your code in the directories you've specified in `.env` (see quickstart), start developing on the existing microservices, and watch the changes land.

## Access useful internal components for debugging/tests

Access smtp4dev to check outgoing emails:
1. Forward the smtp4dev service (deployed in the `tilt-dev` namespace): `kubectl port-forward -n tilt-dev svc/smtp4dev 9090:80`
2. Access the UI at http://localhost:9090/
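Similarly, you can reach the Minio S3 API (for example with MinIO's `mc` client, using the `minio` / `minio123` credentials set in the minio deployment) by forwarding its service:
```
kubectl port-forward -n tilt-dev svc/minio 9000:9000
```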

76
Tiltfile Normal file

@ -0,0 +1,76 @@
load('ext://dotenv', 'dotenv')
load('libs/go/Tiltfile', 'deploy_go')
dotenv()
allowed_k8s_context = os.environ.get('ALLOWED_K8S_CONTEXT', 'rancher-desktop')
# parcoursmob_dir = os.path.realpath(os.environ.get('PARCOURSMOB_DIR', 'src/parcoursmob'))
# parcoursmob_theme_dir = os.path.realpath(os.environ.get('PARCOURSMOB_THEME_DIR', 'src/parcoursmob/themes/default'))
# mobility_accounts_dir = os.path.realpath(os.environ.get('MOBILITY_ACCOUNTS_DIR', 'src/mobility-accounts'))
# agenda_dir = os.path.realpath(os.environ.get('AGENDA_DIR', 'src/agenda'))
# fleets_dir = os.path.realpath(os.environ.get('FLEETS_DIR', 'src/fleets'))
# groups_management_dir = os.path.realpath(os.environ.get('GROUPS_MANAGEMENT_DIR', 'src/groups-management'))
parcoursmob_dir = os.environ.get('PARCOURSMOB_DIR', 'src/parcoursmob')
parcoursmob_theme_dir = os.environ.get('PARCOURSMOB_THEME_DIR', 'src/parcoursmob/themes/default')
mobility_accounts_dir = os.environ.get('MOBILITY_ACCOUNTS_DIR', 'src/mobility-accounts')
agenda_dir = os.environ.get('AGENDA_DIR', 'src/agenda')
fleets_dir = os.environ.get('FLEETS_DIR', 'src/fleets')
groups_management_dir = os.environ.get('GROUPS_MANAGEMENT_DIR', 'src/groups-management')
parcoursmob_image = os.environ.get('PARCOURSMOB_IMAGE', 'git.coopgo.io/coopgo-apps/parcoursmob')
mobility_accounts_image = os.environ.get('MOBILITY_ACCOUNTS_IMAGE', 'git.coopgo.io/coopgo-platform/mobility-accounts')
agenda_image = os.environ.get('AGENDA_IMAGE', 'git.coopgo.io/coopgo-platform/agenda')
fleets_image = os.environ.get('FLEETS_IMAGE', 'git.coopgo.io/coopgo-platform/fleets')
groups_management_image = os.environ.get('GROUPS_MANAGEMENT_IMAGE', 'git.coopgo.io/coopgo-platform/groups-management')
allow_k8s_contexts(allowed_k8s_context)
if os.path.exists(parcoursmob_dir) and os.path.exists(parcoursmob_theme_dir):
    parcoursmob_dir = os.path.realpath(parcoursmob_dir)
    parcoursmob_theme_dir = os.path.realpath(parcoursmob_theme_dir)
    print("PARCOURSMOB")
    print("Directory : {dir}".format(dir=parcoursmob_dir))
    print("Theme : {dir}".format(dir=parcoursmob_theme_dir))
    deploy_go(
        'parcoursmob',
        parcoursmob_dir,
        parcoursmob_image,
        sync_folders=[("{src}".format(src=parcoursmob_theme_dir), 'themes/default')]
    )

if os.path.exists(mobility_accounts_dir):
    mobility_accounts_dir = os.path.realpath(mobility_accounts_dir)
    print("MOBILITY ACCOUNTS")
    print("Directory : {dir}".format(dir=mobility_accounts_dir))
    deploy_go(
        'mobility-accounts',
        mobility_accounts_dir,
        mobility_accounts_image,
        sync_folders=[("{src}/oidc-provider/templates/".format(src=mobility_accounts_dir), 'oidc-provider/templates')]
    )

if os.path.exists(agenda_dir):
    agenda_dir = os.path.realpath(agenda_dir)
    print("AGENDA")
    print("Directory : {dir}".format(dir=agenda_dir))
    deploy_go('agenda', agenda_dir, agenda_image)

if os.path.exists(fleets_dir):
    fleets_dir = os.path.realpath(fleets_dir)
    print("FLEETS")
    print("Directory : {dir}".format(dir=fleets_dir))
    deploy_go('fleets', fleets_dir, fleets_image)

if os.path.exists(groups_management_dir):
    groups_management_dir = os.path.realpath(groups_management_dir)
    print("GROUPS MANAGEMENT")
    print("Directory : {dir}".format(dir=groups_management_dir))
    deploy_go('groups-management', groups_management_dir, groups_management_image)
k8s_yaml(kustomize('k8s_config/dev'))

k8s_config/dev/agenda/config.yaml Normal file

@ -0,0 +1,16 @@
dev_env: true
storage:
  db:
    type: mongodb
    mongodb:
      host: mongodb
      port: 27017
      db_name: coopgo_platform
      collections:
        events: agenda_events
services:
  grpc:
    port: 8080

k8s_config/dev/agenda/deployment.yaml Normal file

@ -0,0 +1,31 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agenda
spec:
  selector:
    matchLabels:
      app: agenda
  template:
    metadata:
      labels:
        app: agenda
    spec:
      containers:
      - name: agenda
        image: git.coopgo.io/coopgo-platform/agenda
        ports:
        - name: grpc
          containerPort: 8080
          protocol: TCP
        volumeMounts:
        - name: agenda-config-volume
          mountPath: /config.yaml
          subPath: config.yaml
      volumes:
      - name: agenda-config-volume
        configMap:
          name: agenda-config
          items:
          - key: config.yaml
            path: config.yaml

k8s_config/dev/agenda/kustomization.yaml Normal file

@ -0,0 +1,8 @@
resources:
- deployment.yaml
- service.yaml
configMapGenerator:
- name: agenda-config
  files:
  - config.yaml

k8s_config/dev/agenda/service.yaml Normal file

@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
  name: agenda
spec:
  selector:
    app: agenda
  ports:
  - port: 8080
    name: grpc
    targetPort: 8080

k8s_config/dev/etcd/etcd-statefulset.yaml Normal file

@ -0,0 +1,309 @@
---
apiVersion: v1
kind: Service
metadata:
  name: etcd
  namespace: default
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: etcd
  ##
  ## Ideally we would use SRV records to do peer discovery for initialization.
  ## Unfortunately discovery will not work without logic to wait for these to
  ## populate in the container. This problem is relatively easy to overcome by
  ## making changes to prevent the etcd process from starting until the records
  ## have populated. The documentation on statefulsets briefly talks about it.
  ## https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id
  publishNotReadyAddresses: true
  ##
  ## The naming scheme of the client and server ports matches the scheme that
  ## etcd uses when doing discovery with SRV records.
  ports:
  - name: etcd-client
    port: 2379
  - name: etcd-server
    port: 2380
  - name: etcd-metrics
    port: 8080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: default
  name: etcd
spec:
  ##
  ## The service name is being set to leverage the service headlessly.
  ## https://kubernetes.io/docs/concepts/services-networking/service/#headless-services
  serviceName: etcd
  ##
  ## If you are increasing the replica count of an existing cluster, you should
  ## also update the --initial-cluster-state flag as noted further down in the
  ## container configuration.
  replicas: 1
  ##
  ## For initialization, the etcd pods must be available to each other before
  ## they are "ready" for traffic. The "Parallel" policy makes this possible.
  podManagementPolicy: Parallel
  ##
  ## To ensure availability of the etcd cluster, the rolling update strategy
  ## is used. For availability, there must be at least 51% of the etcd nodes
  ## online at any given time.
  updateStrategy:
    type: RollingUpdate
  ##
  ## This is a label query over pods that should match the replica count.
  ## It must match the pod template's labels. For more information, see the
  ## following documentation:
  ## https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors
  selector:
    matchLabels:
      app: etcd
  ##
  ## Pod configuration template.
  template:
    metadata:
      ##
      ## The labeling here is tied to the "matchLabels" of this StatefulSet and
      ## the "affinity" configuration of the pod that will be created.
      ##
      ## This example's labeling scheme is fine for one etcd cluster per
      ## namespace, but should you desire multiple clusters per namespace, you
      ## will need to update the labeling schema to be unique per etcd cluster.
      labels:
        app: etcd
      annotations:
        ##
        ## This gets referenced in the etcd container's configuration as part
        ## of the DNS name. It must match the service name created for the etcd
        ## cluster. The choice to place it in an annotation instead of the env
        ## settings is because there should only be 1 service per etcd cluster.
        serviceName: etcd
    spec:
      ##
      ## Configuring the node affinity is necessary to prevent etcd servers
      ## from ending up on the same hardware together.
      ##
      ## See the scheduling documentation for more information about this:
      ## https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
      # affinity:
      #   ## The podAntiAffinity is a set of rules for scheduling that describe
      #   ## when NOT to place a pod from this StatefulSet on a node.
      #   podAntiAffinity:
      #     ##
      #     ## When preparing to place the pod on a node, the scheduler will
      #     ## check for other pods matching the rules described by the
      #     ## labelSelector separated by the chosen topology key.
      #     requiredDuringSchedulingIgnoredDuringExecution:
      #     ## This label selector is looking for app=etcd
      #     - labelSelector:
      #         matchExpressions:
      #         - key: app
      #           operator: In
      #           values:
      #           - etcd
      #       ## This topology key denotes a common label used on nodes in the
      #       ## cluster. The podAntiAffinity configuration essentially states
      #       ## that if another pod has a label of app=etcd on the node, the
      #       ## scheduler should not place another pod on the node.
      #       ## https://kubernetes.io/docs/reference/labels-annotations-taints/#kubernetesiohostname
      #       topologyKey: "kubernetes.io/hostname"
      ##
      ## Containers in the pod
      containers:
      ## This example only has this etcd container.
      - name: etcd
        image: quay.io/coreos/etcd:v3.5.15
        imagePullPolicy: IfNotPresent
        ports:
        - name: etcd-client
          containerPort: 2379
        - name: etcd-server
          containerPort: 2380
        - name: etcd-metrics
          containerPort: 8080
        ##
        ## These probes will fail over TLS for self-signed certificates, so
        ## etcd is configured to deliver metrics over port 8080 further down.
        ##
        ## As mentioned in the "Monitoring etcd" page, /readyz and /livez were
        ## added in v3.5.12. Prior to this, monitoring required extra tooling
        ## inside the container to make these probes work.
        ##
        ## The values in this readiness probe should be further validated; it
        ## is only an example configuration.
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 30
        ## The values in this liveness probe should be further validated; it
        ## is only an example configuration.
        livenessProbe:
          httpGet:
            path: /livez
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        env:
        ##
        ## Environment variables defined here can be used by other parts of the
        ## container configuration. They are interpreted by Kubernetes, instead
        ## of in the container environment.
        ##
        ## These env vars pass along information about the pod.
        - name: K8S_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: HOSTNAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: SERVICE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.annotations['serviceName']
        ##
        ## Configuring etcdctl inside the container to connect to the etcd node
        ## in the container reduces confusion when debugging.
        - name: ETCDCTL_ENDPOINTS
          value: $(HOSTNAME).$(SERVICE_NAME):2379
        ##
        ## TLS client configuration for etcdctl in the container.
        ## These file paths are part of the "etcd-client-certs" volume mount.
        # - name: ETCDCTL_KEY
        #   value: /etc/etcd/certs/client/tls.key
        # - name: ETCDCTL_CERT
        #   value: /etc/etcd/certs/client/tls.crt
        # - name: ETCDCTL_CACERT
        #   value: /etc/etcd/certs/client/ca.crt
        ##
        ## Use this URI_SCHEME value for non-TLS clusters.
        - name: URI_SCHEME
          value: "http"
        ## TLS: Use this URI_SCHEME for TLS clusters.
        # - name: URI_SCHEME
        #   value: "https"
        ##
        ## If you're using a different container, the executable may be in a
        ## different location. This example uses the full path to help remove
        ## ambiguity to you, the reader.
        ## Often you can just use "etcd" instead of "/usr/local/bin/etcd" and
        ## it will work because the $PATH includes a directory containing
        ## "etcd".
        command:
        - /usr/local/bin/etcd
        ##
        ## Arguments used with the etcd command inside the container.
        args:
        ##
        ## Configure the name of the etcd server.
        - --name=$(HOSTNAME)
        ##
        ## Configure etcd to use the persistent storage configured below.
        - --data-dir=/data
        ##
        ## In this example we're consolidating the WAL into sharing space with
        ## the data directory. This is not ideal in production environments,
        ## where it should be placed in its own volume.
        - --wal-dir=/data/wal
        ##
        ## URL configurations are parameterized here and you shouldn't need to
        ## do anything with these.
        - --listen-peer-urls=$(URI_SCHEME)://0.0.0.0:2380
        - --listen-client-urls=$(URI_SCHEME)://0.0.0.0:2379
        - --advertise-client-urls=$(URI_SCHEME)://$(HOSTNAME).$(SERVICE_NAME):2379
        ##
        ## This must be set to "new" for initial cluster bootstrapping. To
        ## scale the cluster up, this should be changed to "existing" when the
        ## replica count is increased. If set incorrectly, etcd makes an
        ## attempt to start but fails safely.
        - --initial-cluster-state=new
        ##
        ## Token used for cluster initialization. The recommendation for this
        ## is to use a unique token for every cluster. This example is
        ## parameterized to be unique to the namespace, but if you are
        ## deploying multiple etcd clusters in the same namespace, you should
        ## do something extra to ensure uniqueness amongst clusters.
        - --initial-cluster-token=etcd-$(K8S_NAMESPACE)
        ##
        ## The initial cluster flag needs to be updated to match the number of
        ## replicas configured. When combined, these are a little hard to read.
        ## Here is what a single parameterized peer looks like:
        - --initial-cluster=etcd-0=$(URI_SCHEME)://etcd-0.$(SERVICE_NAME):2380
        ## For three replicas, it would look like this:
        ## - --initial-cluster=etcd-0=$(URI_SCHEME)://etcd-0.$(SERVICE_NAME):2380,etcd-1=$(URI_SCHEME)://etcd-1.$(SERVICE_NAME):2380,etcd-2=$(URI_SCHEME)://etcd-2.$(SERVICE_NAME):2380
        ##
        ## The peer urls flag should be fine as-is.
        - --initial-advertise-peer-urls=$(URI_SCHEME)://$(HOSTNAME).$(SERVICE_NAME):2380
        ##
        ## This avoids probe failure if you opt to configure TLS.
        - --listen-metrics-urls=http://0.0.0.0:8080
        ##
        ## These are some configurations you may want to consider enabling, but
        ## you should look into them further to identify what settings are best
        ## for you.
        # - --auto-compaction-mode=periodic
        # - --auto-compaction-retention=10m
        ##
        ## TLS client configuration for etcd, reusing the etcdctl env vars.
        # - --client-cert-auth
        # - --trusted-ca-file=$(ETCDCTL_CACERT)
        # - --cert-file=$(ETCDCTL_CERT)
        # - --key-file=$(ETCDCTL_KEY)
        ##
        ## TLS server configuration for etcdctl in the container.
        ## These file paths are part of the "etcd-server-certs" volume mount.
        # - --peer-client-cert-auth
        # - --peer-trusted-ca-file=/etc/etcd/certs/server/ca.crt
        # - --peer-cert-file=/etc/etcd/certs/server/tls.crt
        # - --peer-key-file=/etc/etcd/certs/server/tls.key
        ##
        ## This is the mount configuration.
        volumeMounts:
        - name: etcd-data
          mountPath: /data
        ##
        ## TLS client configuration for etcdctl
        # - name: etcd-client-tls
        #   mountPath: "/etc/etcd/certs/client"
        #   readOnly: true
        ##
        ## TLS server configuration
        # - name: etcd-server-tls
        #   mountPath: "/etc/etcd/certs/server"
        #   readOnly: true
      volumes:
      ##
      ## TLS client configuration
      # - name: etcd-client-tls
      #   secret:
      #     secretName: etcd-client-tls
      #     optional: false
      ##
      ## TLS server configuration
      # - name: etcd-server-tls
      #   secret:
      #     secretName: etcd-server-tls
      #     optional: false
  ##
  ## This StatefulSet will use the volumeClaimTemplates field to create a PVC
  ## in the cluster for each replica. These PVCs cannot be easily resized
  ## later.
  volumeClaimTemplates:
  - metadata:
      name: etcd-data
    spec:
      accessModes: ["ReadWriteOnce"]
      ##
      ## In some clusters, it is necessary to explicitly set the storage class.
      ## This example will end up using the default storage class.
      # storageClassName: ""
      resources:
        requests:
          storage: 1Gi
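##
## For reference, one way to check etcd health once it is deployed (this
## repo's kustomization places etcd in the tilt-dev namespace, and with a
## single replica the pod is named etcd-0):
##   kubectl exec -n tilt-dev etcd-0 -- etcdctl endpoint health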

k8s_config/dev/etcd/kustomization.yaml Normal file

@ -0,0 +1,5 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- etcd-statefulset.yaml

k8s_config/dev/fleets/config.yaml Normal file

@ -0,0 +1,17 @@
dev_env: true
storage:
  db:
    type: mongodb
    mongodb:
      host: mongodb
      port: 27017
      db_name: coopgo_platform
      collections:
        vehicles: fleet_vehicles
        bookings: fleet_bookings
services:
  grpc:
    port: 8080

k8s_config/dev/fleets/deployment.yaml Normal file

@ -0,0 +1,31 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fleets
spec:
  selector:
    matchLabels:
      app: fleets
  template:
    metadata:
      labels:
        app: fleets
    spec:
      containers:
      - name: fleets
        image: git.coopgo.io/coopgo-platform/fleets
        ports:
        - name: grpc
          containerPort: 8080
          protocol: TCP
        volumeMounts:
        - name: fleets-config-volume
          mountPath: /config.yaml
          subPath: config.yaml
      volumes:
      - name: fleets-config-volume
        configMap:
          name: fleets-config
          items:
          - key: config.yaml
            path: config.yaml

k8s_config/dev/fleets/kustomization.yaml Normal file

@ -0,0 +1,8 @@
resources:
- deployment.yaml
- service.yaml
configMapGenerator:
- name: fleets-config
  files:
  - config.yaml

k8s_config/dev/fleets/service.yaml Normal file

@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
  name: fleets
spec:
  selector:
    app: fleets
  ports:
  - port: 8080
    name: grpc
    targetPort: 8080

k8s_config/dev/groups-management/config.yaml Normal file

@ -0,0 +1,17 @@
dev_env: true
storage:
  db:
    type: mongodb
    mongodb:
      host: mongodb
      port: 27017
      db_name: coopgo_platform
      collections:
        groups: groups
services:
  grpc:
    port: 8080

k8s_config/dev/groups-management/deployment.yaml Normal file

@ -0,0 +1,31 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: groups-management
spec:
  selector:
    matchLabels:
      app: groups-management
  template:
    metadata:
      labels:
        app: groups-management
    spec:
      containers:
      - name: groups-management
        image: git.coopgo.io/coopgo-platform/groups-management
        ports:
        - name: grpc
          containerPort: 8080
          protocol: TCP
        volumeMounts:
        - name: groups-management-config-volume
          mountPath: /config.yaml
          subPath: config.yaml
      volumes:
      - name: groups-management-config-volume
        configMap:
          name: groups-management-config
          items:
          - key: config.yaml
            path: config.yaml

k8s_config/dev/groups-management/kustomization.yaml Normal file

@ -0,0 +1,8 @@
resources:
- deployment.yaml
- service.yaml
configMapGenerator:
- name: groups-management-config
  files:
  - config.yaml

k8s_config/dev/groups-management/service.yaml Normal file

@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
  name: groups-management
spec:
  selector:
    app: groups-management
  ports:
  - port: 8080
    name: grpc
    targetPort: 8080

k8s_config/dev/kustomization.yaml Normal file

@ -0,0 +1,18 @@
namespace: tilt-dev
resources:
# Microservices
- ./agenda/
- ./fleets/
- ./groups-management/
- ./mobility-accounts/
# Apps
- ./parcoursmob/
# Dependencies
- ./etcd/
- ./mongodb/
- ./minio/
- ./smtp4dev/
- namespace.yaml

k8s_config/dev/minio/deployment.yaml Normal file

@ -0,0 +1,39 @@
apiVersion: apps/v1 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
kind: Deployment
metadata:
  # This name uniquely identifies the Deployment
  name: minio
spec:
  selector:
    matchLabels:
      app: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as selector in the service.
        app: minio
    spec:
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: minio
      containers:
      - name: minio
        image: minio/minio:latest
        command: ["sh"]
        args:
        - "-c"
        - "mkdir -p /storage/parcoursmob && minio server /storage"
        env:
        - name: MINIO_ACCESS_KEY
          value: "minio"
        - name: MINIO_SECRET_KEY
          value: "minio123"
        ports:
        - containerPort: 9000
          hostPort: 9000
        volumeMounts:
        - name: storage
          mountPath: "/storage"

k8s_config/dev/minio/kustomization.yaml Normal file

@ -0,0 +1,4 @@
resources:
- deployment.yaml
- pvc.yaml
- service.yaml

k8s_config/dev/minio/pvc.yaml Normal file

@ -0,0 +1,15 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # This name uniquely identifies the PVC. Will be used in deployment below.
  name: minio
  labels:
    app: minio
spec:
  # Read more about access modes here: http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes
  accessModes:
  - ReadWriteOnce
  resources:
    # This is the request for storage. Should be available in the cluster.
    requests:
      storage: 10Gi

k8s_config/dev/minio/service.yaml Normal file

@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
  name: minio
spec:
  ports:
  - port: 9000
    targetPort: 9000
    protocol: TCP
  selector:
    app: minio

k8s_config/dev/mobility-accounts/config.yaml Normal file

@ -0,0 +1,56 @@
dev_env: true
storage:
  db:
    type: mongodb
    mongodb:
      host: mongodb
      port: 27017
      db_name: coopgo_platform
  kv:
    type: etcd
    etcd:
      endpoints:
      - etcd:2379
      prefix: mobilityaccounts/
services:
  grpc:
    enable: true
    port: 8080
  oidc_provider:
    enable: true
    csrf_key: 32-byte-long-auth-key-----------
    port: 80
    namespaces:
      parcoursmob:
        namespace: parcoursmob
        secret_key: some-cool-secret-that-is-32bytes
        templates_dir: /oidc-provider/templates/parcoursmob
        match_claims:
          groups: groups
          first_name: first_name
          last_name: last_name
          display_name: display_name
        clients:
        - id: abcdef
          oidc: true
          secret: $2y$10$0GwBc17u9HjZnVlFjhIv2u.Jq..5NyzdUENpGa8jUtok2zHj/STmm
          token_endpoint_auth_method: client_secret_post
          response_types:
          - code
          grant_types:
          - authorization_code
          redirect_uris:
          - http://parcoursmob.tilt-dev.svc.cluster.local/api/oauth2/callback
          scopes:
          - openid
          - profile
          - groups
          - email
          - first_name
          - last_name
          - display_name

k8s_config/dev/mobility-accounts/deployment.yaml Normal file

@ -0,0 +1,34 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mobility-accounts
spec:
  selector:
    matchLabels:
      app: mobility-accounts
  template:
    metadata:
      labels:
        app: mobility-accounts
    spec:
      containers:
      - name: mobility-accounts
        image: git.coopgo.io/coopgo-platform/mobility-accounts
        ports:
        - name: grpc
          containerPort: 8080
          protocol: TCP
        - name: http
          containerPort: 80
          protocol: TCP
        volumeMounts:
        - name: mobility-accounts-config-volume
          mountPath: /config.yaml
          subPath: config.yaml
      volumes:
      - name: mobility-accounts-config-volume
        configMap:
          name: mobility-accounts-config
          items:
          - key: config.yaml
            path: config.yaml

k8s_config/dev/mobility-accounts/ingress.yaml Normal file

@ -0,0 +1,18 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mobility-accounts
  labels:
    name: mobility-accounts
spec:
  rules:
  - host: mobility-accounts.tilt-dev.svc.cluster.local
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: mobility-accounts
            port:
              number: 80

k8s_config/dev/mobility-accounts/kustomization.yaml Normal file

@ -0,0 +1,9 @@
resources:
- deployment.yaml
- service.yaml
- ingress.yaml
configMapGenerator:
- name: mobility-accounts-config
  files:
  - config.yaml

k8s_config/dev/mobility-accounts/service.yaml Normal file

@ -0,0 +1,23 @@
apiVersion: v1
kind: Service
metadata:
  name: mobility-accounts
spec:
  selector:
    app: mobility-accounts
  ports:
  - port: 8080
    name: grpc
    targetPort: 8080
  - port: 80
    name: http
    targetPort: 80
# ---
# apiVersion: v1
# kind: Service
# metadata:
#   name: auth
# spec:
#   type: ExternalName
#   externalName: mobility-accounts

k8s_config/dev/mongodb/create_db.js Normal file

@ -0,0 +1,57 @@
db.createCollection("users")
db.createCollection("groups")

db.users.insert({
    "_id": "979888c6-656f-4af7-b7a4-97a02a3cc13d",
    "namespace": "parcoursmob",
    "authentication": {
        "local": {
            "username": "admin@parcoursmob.fr",
            // bcrypt hash of the initial password ("parcoursmob", per the README)
            "password": "$2a$10$XVq9C9az1O0w.kCzqcMZtOAMxur1ODRT5vF9/GmILyHc1Xjalzymu",
            "email": "",
            "email_validation": {
                "validated": false,
                "validation_code": ""
            },
            "phone_number": "",
            "phone_number_validation": {
                "validated": false,
                "validation_code": ""
            }
        }
    },
    "data": {
        "first_name": "Admin",
        "last_name": "Parcoursmob",
        "phone_number": "+33600000000",
        "email": "noreply@parcoursmob.fr",
        "groups": [
            "483280d0-db2d-4f06-b361-02e4be5012d2",
            "483280d0-db2d-4f06-b361-02e4be5012d2:admin"
        ],
        "display_name": "Admin Parcoursmob"
    },
    "metadata": {
        "created": ISODate("2022-08-09T10:14:51.879+0000")
    }
})

db.groups.insert({
    "_id": "483280d0-db2d-4f06-b361-02e4be5012d2",
    "namespace": "parcoursmob_organizations",
    "members": [],
    "data": {
        "modules": {
            "groups_management": true,
            "vehicles_management": true,
            "directory": true,
            "administration": true,
            "agenda": true,
            "vehicles": true,
            "support": true,
            "beneficiaries": true,
            "journeys": true
        },
        "name": "COOPGO"
    }
})
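
// For reference, one way to check the seed data once MongoDB is running
// (assumes the tilt-dev namespace used by this repo's kustomization):
//   kubectl exec -n tilt-dev mongodb-0 -- mongosh coopgo_platform --eval "db.users.countDocuments()"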

k8s_config/dev/mongodb/kustomization.yaml Normal file

@ -0,0 +1,10 @@
resources:
- pcv.yaml
- service.yaml
- statefulset.yaml
configMapGenerator:
- name: create-db-configmap
  files:
  - create_db.js

k8s_config/dev/mongodb/pcv.yaml Normal file

@ -0,0 +1,10 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

k8s_config/dev/mongodb/service.yaml Normal file

@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  clusterIP: None
  selector:
    app: mongodb
  ports:
  - port: 27017
    targetPort: 27017

k8s_config/dev/mongodb/statefulset.yaml Normal file

@ -0,0 +1,40 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:latest
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongodb-data
          mountPath: /data/db
        - name: "init-database"
          mountPath: "/docker-entrypoint-initdb.d/"
        env:
        - name: MONGO_INITDB_DATABASE
          value: coopgo_platform
      volumes:
      - name: "init-database"
        configMap:
          name: create-db-configmap
  volumeClaimTemplates:
  - metadata:
      name: mongodb-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

k8s_config/dev/namespace.yaml Normal file

@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: tilt-dev

k8s_config/dev/parcoursmob/config.yaml Normal file

@ -0,0 +1,100 @@
dev_env: true
server:
  listen: 0.0.0.0:80
  templates:
    root: /themes/default/
    public_dir: /themes/default/web/public/
identification:
  oidc:
    provider: http://mobility-accounts.tilt-dev.svc.cluster.local/parcoursmob
    provider_config:
      issuer_url: http://mobility-accounts.tilt-dev.svc.cluster.local/parcoursmob
      auth_url: http://mobility-accounts.tilt-dev.svc.cluster.local/parcoursmob/auth
      token_url: http://mobility-accounts.tilt-dev.svc.cluster.local/parcoursmob/token
      user_info_url: http://mobility-accounts.tilt-dev.svc.cluster.local/parcoursmob/user_info
      jwks_url: http://mobility-accounts.tilt-dev.svc.cluster.local/parcoursmob/.well_known/jwks.json
    client_id: abcdef
    client_secret: ghijkl
    redirect_url: http://parcoursmob.tilt-dev.svc.cluster.local/api/oauth2/callback
services:
  grpc:
    mobilityaccounts:
      dial: mobility-accounts:8080
    groupsmanagement:
      dial: groups-management:8080
    fleets:
      dial: fleets:8080
    agenda:
      dial: agenda:8080
  navitia:
    api_key: abcd
storage:
  kv:
    type: etcd
    etcd:
      endpoints:
      - etcd:2379
      prefix: parcoursmob/cache/
  files:
    type: minio
    minio:
      endpoint: minio:9000
      access_key: minio
      secret_key: minio123
      use_ssl: false
      bucket_name: parcoursmob
      location: fr-par
    file_types:
      driving_licence: Permis de conduire
      work_contract: Contrat de travail ou de formation
      identity_proof: Pièce d'identité
      other: Autre
emailing:
  smtp:
    host: smtp4dev
    port: 25
    username: noreply@parcoursmob.fr
    password: 12345678
modules:
  beneficiaries:
    enable: true
    documents_types:
    - identity_proof
    - driving_licence
    - work_contract
    - other
  fleets:
    enabled: true
    vehicle_types:
    - Voiture
    - Voiture sans permis
    - Scooter
    - Vélo électrique
    - Trottinette
    booking_documents:
      mandatory:
      - driving_licence
      - identity_proof
      - work_contract
  agenda:
    enabled: true
    events_types:
    - name: Accompagnements individuels
    - name: Permis accéléré
      subscription_tags:
      - Non présenté à l'information collective
      - Parcours validé
      - Abandon pendant le parcours
      - Succès à l'examen
      - Echec à l'examen
    - name: Auto-école sociale (classique)
    - name: Information collective
    - name: Autre

k8s_config/dev/parcoursmob/deployment.yaml Normal file

@ -0,0 +1,29 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: parcoursmob
spec:
  selector:
    matchLabels:
      app: parcoursmob
  template:
    metadata:
      labels:
        app: parcoursmob
    spec:
      containers:
      - name: parcoursmob
        image: git.coopgo.io/coopgo-apps/parcoursmob
        ports:
        - containerPort: 80
        volumeMounts:
        - name: parcoursmob-config-volume
          mountPath: /config.yaml
          subPath: config.yaml
      volumes:
      - name: parcoursmob-config-volume
        configMap:
          name: parcoursmob-config
          items:
          - key: config.yaml
            path: config.yaml

k8s_config/dev/parcoursmob/ingress.yaml Normal file

@ -0,0 +1,18 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: parcoursmob
  labels:
    name: parcoursmob
spec:
  rules:
  - host: parcoursmob.tilt-dev.svc.cluster.local
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: parcoursmob
            port:
              number: 80

k8s_config/dev/parcoursmob/kustomization.yaml Normal file

@ -0,0 +1,9 @@
resources:
- deployment.yaml
- service.yaml
- ingress.yaml
configMapGenerator:
- name: parcoursmob-config
  files:
  - config.yaml

k8s_config/dev/parcoursmob/service.yaml Normal file

@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
  name: parcoursmob
spec:
  selector:
    app: parcoursmob
  ports:
  - port: 80
    name: http
    targetPort: 80

k8s_config/dev/smtp4dev/deployment.yaml Normal file

@ -0,0 +1,34 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: smtp4dev
spec:
  selector:
    matchLabels:
      app: smtp4dev
  template:
    metadata:
      labels:
        app: smtp4dev
    spec:
      containers:
      - name: smtp4dev
        image: rnwood/smtp4dev:v3
        ports:
        - name: http
          containerPort: 80
        - name: smtp
          containerPort: 25
        - name: imap
          containerPort: 143
        volumeMounts:
        - name: smtp4dev
          mountPath: "/smtp4dev"
        env:
        - name: ServerOptions__Urls
          value: http://*:80
      volumes:
      - name: smtp4dev
        persistentVolumeClaim:
          # Name of the PVC created earlier
          claimName: smtp4dev

k8s_config/dev/smtp4dev/kustomization.yaml Normal file

@ -0,0 +1,4 @@
resources:
- deployment.yaml
- pvc.yaml
- service.yaml

k8s_config/dev/smtp4dev/pvc.yaml Normal file

@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: smtp4dev
  labels:
    app: smtp4dev
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

k8s_config/dev/smtp4dev/service.yaml Normal file

@ -0,0 +1,17 @@
apiVersion: v1
kind: Service
metadata:
  name: smtp4dev
spec:
  selector:
    app: smtp4dev
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: smtp
    port: 25
    targetPort: 25
  - name: imap
    port: 143
    targetPort: 143

4
libs/go/Dockerfile Normal file

@ -0,0 +1,4 @@
# Minimal runtime image. The server binary is expected to be pre-built into
# the local build/ directory (see the compile step in libs/go/Tiltfile).
FROM alpine
WORKDIR /app
ADD build build
ENTRYPOINT build/server

40
libs/go/Tiltfile Normal file

@ -0,0 +1,40 @@
load('ext://restart_process', 'docker_build_with_restart')

def deploy_go(name, src, image, sync_folders=[]):
    # Compile the Go binary locally, outside the container, for fast rebuilds.
    compile_cmd = "CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -C {src} -o {currentPath}/build/{name}/server .".format(src=src, name=name, currentPath=os.getcwd())
    local_resource(
        "{name}-compile".format(name=name),
        compile_cmd,
        ignore="build/",
        deps=[src])

    # One extra local resource per synced static folder (templates, themes, ...).
    i = 0
    resource_deps = ["{name}-compile".format(name=name)]
    for f in sync_folders:
        new_resource_dep = "{name}-static-{i}".format(name=name, i=i)
        local_resource(
            new_resource_dep,
            "mkdir -p build/{name}/{targetPath} && rsync -av {localPath}/ build/{name}/{targetPath}/".format(localPath=f[0], targetPath=f[1], name=name),
            deps=[f[0]]
        )
        resource_deps.append(new_resource_dep)
        i = i + 1

    dockerfile_contents = """FROM alpine
WORKDIR /
RUN apk add tzdata
ADD build/{name} .
ENTRYPOINT /server""".format(name=name)

    # Build the image and live-update it by syncing the build output,
    # restarting the process inside the container on each change.
    docker_build_with_restart(
        image,
        '.',
        entrypoint=['/server'],
        dockerfile_contents=dockerfile_contents,
        only=["build/{name}/".format(name=name)],
        live_update=[sync("build/{name}".format(name=name), '/')],
    )

    k8s_resource(name, resource_deps=resource_deps)
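
# For reference, an example call from this repo's root Tiltfile, syncing the
# theme folder into the image alongside the compiled server:
#
#   deploy_go(
#       'parcoursmob',
#       parcoursmob_dir,
#       parcoursmob_image,
#       sync_folders=[(parcoursmob_theme_dir, 'themes/default')]
#   )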