update

This commit is contained in:
parent 3e97a7a4bf
commit 06d2fd424c

6 changed files with 1108 additions and 453 deletions

.orgids (2 changes)
File diff suppressed because one or more lines are too long
|
@ -9455,3 +9455,121 @@ DEADLINE: <2023-02-09 Thu 11:00>
|
|||
CLOCK: [2023-01-11 Wed 16:38]--[2023-01-11 Wed 20:38] => 4:00
|
||||
:END:
|
||||
[2023-01-11 Wed 16:37]
|
||||
|
||||
* DONE Ajouter témoignage CE&H
|
||||
DEADLINE: <2023-02-27 Mon 18:00>
|
||||
:PROPERTIES:
|
||||
:ARCHIVE_TIME: 2023-02-28 Tue 22:57
|
||||
:ARCHIVE_FILE: ~/Library/Mobile Documents/iCloud~com~appsonthemove~beorg/Documents/org/inbox.org
|
||||
:ARCHIVE_OLPATH: Inbox
|
||||
:ARCHIVE_CATEGORY: inbox
|
||||
:ARCHIVE_TODO: DONE
|
||||
:END:
|
||||
[2023-02-27 Mon 10:45]
|
||||
|
||||
Leïka saved my life.
She managed to support me at a time when no one else could.
But she is not just my assistance dog.
She is my life partner.
She is always there to help me, and I have rebuilt my life around her.
We are never apart, and if I am here, it is surely because Leïka is here too.
|
||||
|
||||
|
||||
|
||||
|
||||
* DONE Envoyer mail au notaire (update situation)
|
||||
DEADLINE: <2023-02-27 Mon 11:00>
|
||||
:PROPERTIES:
|
||||
:ARCHIVE_TIME: 2023-02-28 Tue 22:57
|
||||
:ARCHIVE_FILE: ~/Library/Mobile Documents/iCloud~com~appsonthemove~beorg/Documents/org/inbox.org
|
||||
:ARCHIVE_OLPATH: Inbox
|
||||
:ARCHIVE_CATEGORY: inbox
|
||||
:ARCHIVE_TODO: DONE
|
||||
:END:
|
||||
[2023-02-27 Mon 10:40]
|
||||
|
||||
* DONE Appeler Géraldine pour garder les vélos.
|
||||
DEADLINE: <2023-02-27 Mon 14:00>
|
||||
:PROPERTIES:
|
||||
:ARCHIVE_TIME: 2023-02-28 Tue 22:57
|
||||
:ARCHIVE_FILE: ~/Library/Mobile Documents/iCloud~com~appsonthemove~beorg/Documents/org/inbox.org
|
||||
:ARCHIVE_OLPATH: Inbox
|
||||
:ARCHIVE_CATEGORY: inbox
|
||||
:ARCHIVE_TODO: DONE
|
||||
:END:
|
||||
[2023-02-27 Mon 10:40]
|
||||
|
||||
* DONE Poser les plaques des chiens
|
||||
SCHEDULED: <2023-02-24 Fri 10:00>
|
||||
:PROPERTIES:
|
||||
:ARCHIVE_TIME: 2023-02-28 Tue 22:57
|
||||
:ARCHIVE_FILE: ~/Library/Mobile Documents/iCloud~com~appsonthemove~beorg/Documents/org/inbox.org
|
||||
:ARCHIVE_OLPATH: Inbox
|
||||
:ARCHIVE_CATEGORY: inbox
|
||||
:ARCHIVE_TODO: DONE
|
||||
:END:
|
||||
[2023-02-23 Thu 19:49]
|
||||
|
||||
* DONE Sync with Yuri about Secure Endpoint error logs org-level-authorization
|
||||
DEADLINE: <2023-02-27 Mon 15:00>
|
||||
:PROPERTIES:
|
||||
:ARCHIVE_TIME: 2023-02-28 Tue 22:57
|
||||
:ARCHIVE_FILE: ~/Library/Mobile Documents/iCloud~com~appsonthemove~beorg/Documents/org/inbox.org
|
||||
:ARCHIVE_OLPATH: Inbox
|
||||
:ARCHIVE_CATEGORY: inbox
|
||||
:ARCHIVE_TODO: DONE
|
||||
:END:
|
||||
[2023-02-23 Thu 19:02]
|
||||
|
||||
Work needs to be done to upgrade the clients to "org-level-authorization".
Matt's teams should be working on it. Given the current state of affairs, we might
be able to plan it for Q4 but not before, due to RSA.
So for now, we should stick with non-org-level authorization until this work is completed.

The details: the proxy of the module will check the received JWT, and if
the client-id is trusted (typically a DI client) and is configured with
org-level-authorization, then we ignore the Secure Endpoint module's
"Act as the User" setting.
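The decision above boils down to a small predicate. A hypothetical sketch (the function and argument names are invented, not the actual proxy code):

#+begin_src sh
# Hypothetical sketch of the proxy decision (names invented): the module's
# "Act as the User" setting is ignored only when the JWT's client-id is
# trusted AND configured for org-level-authorization.
effective_authz() {
  trusted="$1"        # is the JWT's client-id trusted (e.g. a DI client)?
  org_level="$2"      # is the client configured with org-level-authorization?
  module_setting="$3" # the Secure Endpoint module setting, e.g. "act-as-user"
  if [ "$trusted" = yes ] && [ "$org_level" = yes ]; then
    echo "org-level-authorization"   # ignore the module setting
  else
    echo "$module_setting"           # keep the module setting
  fi
}
effective_authz yes yes act-as-user   # prints: org-level-authorization
effective_authz yes no  act-as-user   # prints: act-as-user
#+end_src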
|
||||
|
||||
|
||||
* DONE Appeler Bastien pour le velo et la mutuelle
|
||||
DEADLINE: <2023-02-23 Thu 18:15>
|
||||
:PROPERTIES:
|
||||
:ARCHIVE_TIME: 2023-02-28 Tue 22:57
|
||||
:ARCHIVE_FILE: ~/Library/Mobile Documents/iCloud~com~appsonthemove~beorg/Documents/org/inbox.org
|
||||
:ARCHIVE_OLPATH: Inbox
|
||||
:ARCHIVE_CATEGORY: inbox
|
||||
:ARCHIVE_TODO: DONE
|
||||
:END:
|
||||
[2023-02-23 Thu 17:49]
|
||||
|
||||
* DONE Créer l'attestation pour Gaya.
|
||||
DEADLINE: <2023-02-23 Thu 18:30>
|
||||
:PROPERTIES:
|
||||
:ARCHIVE_TIME: 2023-02-28 Tue 22:57
|
||||
:ARCHIVE_FILE: ~/Library/Mobile Documents/iCloud~com~appsonthemove~beorg/Documents/org/inbox.org
|
||||
:ARCHIVE_OLPATH: Inbox
|
||||
:ARCHIVE_CATEGORY: inbox
|
||||
:ARCHIVE_TODO: DONE
|
||||
:END:
|
||||
:LOGBOOK:
|
||||
- State "DONE" from "HOLD" [2023-02-23 Thu 19:49]
|
||||
- State "HOLD" from "TODO" [2023-02-23 Thu 19:49] \\
|
||||
Krystelle is taking care of it
|
||||
:END:
|
||||
[2023-02-23 Thu 17:48]
|
||||
|
||||
* CANCELED couper l'électricité Valbonne
|
||||
DEADLINE: <2023-03-06 Mon>
|
||||
:PROPERTIES:
|
||||
:ARCHIVE_TIME: 2023-02-28 Tue 22:57
|
||||
:ARCHIVE_FILE: ~/Library/Mobile Documents/iCloud~com~appsonthemove~beorg/Documents/org/inbox.org
|
||||
:ARCHIVE_OLPATH: Inbox
|
||||
:ARCHIVE_CATEGORY: inbox
|
||||
:ARCHIVE_TODO: CANCELED
|
||||
:END:
|
||||
:LOGBOOK:
|
||||
- State "CANCELED" from "TODO" [2023-02-27 Mon 10:41] \\
|
||||
The new owners will transfer the contracts.
|
||||
:END:
|
||||
[2023-01-31 Tue 12:04]
|
||||
|
|
259
inbox.org
|
@ -10,85 +10,69 @@
|
|||
SPC y o c => DISPLAY org columns
|
||||
#+end_comment
|
||||
* Inbox
|
||||
** TODO [#B] Payer le loyer
|
||||
** DONE [#B] Payer le loyer
|
||||
DEADLINE: <2023-03-31 Fri 16:00>
|
||||
[2023-03-31 Fri 14:08]
|
||||
** DONE Récupérer tous les documents pour le courtier
|
||||
CLOSED: [2023-04-06 Thu 07:26] DEADLINE: <2023-04-05 Wed 16:00>
|
||||
:LOGBOOK:
|
||||
- State "DONE" from "TODO" [2023-04-06 Thu 07:26]
|
||||
:END:
|
||||
[2023-03-31 Fri 14:06]
|
||||
** DONE Appeler Orange
|
||||
DEADLINE: <2023-03-20 Mon 11:45>
|
||||
[2023-03-20 Mon 11:44]
|
||||
** DONE Envoyer demande de remboursement
|
||||
DEADLINE: <2023-03-20 Mon 12:00>
|
||||
[2023-03-20 Mon 11:41]
|
||||
** DONE Envoyer justificatif de domicile
|
||||
DEADLINE: <2023-03-20 Mon 12:00>
|
||||
[2023-03-20 Mon 11:40]
|
||||
** DONE Acheter croquettes chats
|
||||
DEADLINE: <2023-03-20 Mon 16:00>
|
||||
[2023-03-20 Mon 10:01]
|
||||
** DONE Passer grain fin sur la table
|
||||
DEADLINE: <2023-03-20 Mon 15:00>
|
||||
[2023-03-20 Mon 09:58]
|
||||
** DONE Appeler le banquier, envoyer les documents
|
||||
DEADLINE: <2023-03-20 Mon 10:30>
|
||||
[2023-03-20 Mon 09:57]
|
||||
** DONE étendre le linge
|
||||
DEADLINE: <2023-03-20 Mon 11:00>
|
||||
[2023-03-20 Mon 09:56]
|
||||
** TODO Payer le peintre
|
||||
DEADLINE: <2023-04-06 Thu 15:00> SCHEDULED: <2023-03-30 Thu>
|
||||
|
||||
[2023-03-16 Thu 17:03]
|
||||
** DONE Publish composable nix-shell
|
||||
SCHEDULED: <2023-03-06 Mon 15:00>
|
||||
[2023-03-01 Wed 10:15]
|
||||
** DONE [#B] Payer le loyer
|
||||
DEADLINE: <2023-02-28 Tue 17:00>
|
||||
[2023-02-27 Mon 10:54]
|
||||
** DONE Ajouter témoignage CE&H
|
||||
DEADLINE: <2023-02-27 Mon 18:00>
|
||||
[2023-02-27 Mon 10:45]
|
||||
|
||||
Leïka saved my life.
She managed to support me at a time when no one else could.
But she is not just my assistance dog.
She is my life partner.
She is always there to help me, and I have rebuilt my life around her.
We are never apart, and if I am here, it is surely because Leïka is here too.
|
||||
|
||||
|
||||
|
||||
** DONE Envoyer mail au notaire (update situation)
|
||||
DEADLINE: <2023-02-27 Mon 11:00>
|
||||
[2023-02-27 Mon 10:40]
|
||||
** DONE Appeler Géraldine pour garder les vélos.
|
||||
DEADLINE: <2023-02-27 Mon 14:00>
|
||||
[2023-02-27 Mon 10:40]
|
||||
** TODO Appeler l'assurance pour les cartes des voitures
|
||||
** DONE Appeler l'assurance pour les cartes des voitures
|
||||
DEADLINE: <2023-02-24 Fri 10:30>
|
||||
[2023-02-23 Thu 19:49]
|
||||
** DONE Poser les plaques des chiens
|
||||
SCHEDULED: <2023-02-24 Fri 10:00>
|
||||
[2023-02-23 Thu 19:49]
|
||||
** DONE Sync with Yuri about Secure Endpoint error logs org-level-authorization
|
||||
DEADLINE: <2023-02-27 Mon 15:00>
|
||||
[2023-02-23 Thu 19:02]
|
||||
|
||||
Work needs to be done to upgrade the clients to "org-level-authorization".
Matt's teams should be working on it. Given the current state of affairs, we might
be able to plan it for Q4 but not before, due to RSA.
So for now, we should stick with non-org-level authorization until this work is completed.

The details: the proxy of the module will check the received JWT, and if
the client-id is trusted (typically a DI client) and is configured with
org-level-authorization, then we ignore the Secure Endpoint module's
"Act as the User" setting.
|
||||
|
||||
** DONE Appeler Bastien pour le velo et la mutuelle
|
||||
DEADLINE: <2023-02-23 Thu 18:15>
|
||||
[2023-02-23 Thu 17:49]
|
||||
** DONE Créer l'attestation pour Gaya.
|
||||
DEADLINE: <2023-02-23 Thu 18:30>
|
||||
:LOGBOOK:
|
||||
- State "DONE" from "HOLD" [2023-02-23 Thu 19:49]
|
||||
- State "HOLD" from "TODO" [2023-02-23 Thu 19:49] \\
|
||||
Krystelle is taking care of it
|
||||
:END:
|
||||
[2023-02-23 Thu 17:48]
|
||||
** TODO Appeler Bastien pour samedi
|
||||
[2023-02-17 Fri 08:56]
|
||||
** TODO Supprimer Assurance Habitation Valbonne
|
||||
DEADLINE: <2023-03-01 Wed>
|
||||
** DONE Supprimer Assurance Habitation Valbonne
|
||||
DEADLINE: <2023-03-22 Wed 16:00> SCHEDULED: <2023-03-20 Mon 11:45>
|
||||
[2023-01-31 Tue 12:05]
|
||||
** CANCELED couper l'électricité Valbonne
|
||||
DEADLINE: <2023-03-06 Mon>
|
||||
:LOGBOOK:
|
||||
- State "CANCELED" from "TODO" [2023-02-27 Mon 10:41] \\
|
||||
The new owners will transfer the contracts.
|
||||
:END:
|
||||
[2023-01-31 Tue 12:04]
|
||||
** TODO Regarder sans soleil https://www.youtube.com/watch?v=fdusEgrbhgA
|
||||
SCHEDULED: <2023-03-12 Sun 21:00>
|
||||
SCHEDULED: <2023-04-07 Fri 21:00>
|
||||
[2022-11-26 Sat 11:04]
|
||||
** TODO DL The good place
|
||||
** DONE DL The good place
|
||||
SCHEDULED: <2023-03-01 Wed>
|
||||
* Perso :perso:
|
||||
** Habits :habit:
|
||||
*** TODO Reading List notes
|
||||
SCHEDULED: <2023-02-22 Wed 09:00 .+1d>
|
||||
SCHEDULED: <2023-03-21 Tue 09:00 .+1d>
|
||||
:PROPERTIES:
|
||||
:STYLE: habit
|
||||
:LAST_REPEAT: [2023-02-21 Tue 14:22]
|
||||
:LAST_REPEAT: [2023-03-20 Mon 10:00]
|
||||
:END:
|
||||
:LOGBOOK:
|
||||
- State "CANCELED" from "TODO" [2023-03-20 Mon 10:00]
|
||||
- State "CANCELED" from "TODO" [2023-02-21 Tue 14:22]
|
||||
- State "CANCELED" from "TODO" [2023-02-17 Fri 08:57] \\
|
||||
Too much to do today
|
||||
|
@ -153,11 +137,17 @@ CLOCK: [2022-06-08 Wed 09:37]--[2022-06-08 Wed 09:59] => 0:22
|
|||
* Famille :family:
|
||||
** Daily :daily:
|
||||
*** TODO Attention gentille
|
||||
SCHEDULED: <2023-02-23 Thu .+1d>
|
||||
SCHEDULED: <2023-04-05 Wed .+1d>
|
||||
:PROPERTIES:
|
||||
:LAST_REPEAT: [2023-02-22 Wed 18:36]
|
||||
:LAST_REPEAT: [2023-04-04 Tue 22:57]
|
||||
:END:
|
||||
:LOGBOOK:
|
||||
- State "DONE" from "TODO" [2023-04-04 Tue 22:57]
|
||||
- State "DONE" from "TODO" [2023-03-31 Fri 14:07]
|
||||
- State "DONE" from "TODO" [2023-03-27 Mon 10:57]
|
||||
- State "DONE" from "TODO" [2023-03-20 Mon 10:01]
|
||||
- State "DONE" from "TODO" [2023-03-10 Fri 10:08]
|
||||
- State "DONE" from "TODO" [2023-03-07 Tue 16:16]
|
||||
- State "DONE" from "TODO" [2023-02-22 Wed 18:36]
|
||||
- State "DONE" from "TODO" [2023-02-21 Tue 14:21]
|
||||
- State "DONE" from "TODO" [2023-02-17 Fri 08:57]
|
||||
|
@ -176,12 +166,14 @@ SCHEDULED: <2023-02-23 Thu .+1d>
|
|||
:END:
|
||||
** Weekly :weekly:
|
||||
*** TODO litieres
|
||||
DEADLINE: <2023-03-03 Fri .+2w -1d>
|
||||
DEADLINE: <2023-04-18 Tue .+2w -1d>
|
||||
:PROPERTIES:
|
||||
:LAST_REPEAT: [2023-02-17 Fri 14:33]
|
||||
:LAST_REPEAT: [2023-04-04 Tue 22:57]
|
||||
:STYLE: habit
|
||||
:END:
|
||||
:LOGBOOK:
|
||||
- State "CANCELED" from "TODO" [2023-04-04 Tue 22:57]
|
||||
- State "DONE" from "TODO" [2023-03-20 Mon 09:59]
|
||||
- State "DONE" from "TODO" [2023-02-17 Fri 14:33]
|
||||
- State "DONE" from "TODO" [2023-01-23 Mon 17:33]
|
||||
- State "DONE" from "TODO" [2023-01-04 Wed 10:50]
|
||||
|
@ -225,24 +217,28 @@ DEADLINE: <2023-03-03 Fri .+2w -1d>
|
|||
Done not so long ago
|
||||
:END:
|
||||
*** TODO Appeler Papa
|
||||
SCHEDULED: <2023-02-20 Mon 14:00 .+1w>
|
||||
SCHEDULED: <2023-03-14 Tue 14:00 .+1w>
|
||||
:PROPERTIES:
|
||||
:STYLE: habit
|
||||
:LAST_REPEAT: [2023-02-13 Mon 10:02]
|
||||
:LAST_REPEAT: [2023-03-07 Tue 17:09]
|
||||
:END:
|
||||
:LOGBOOK:
|
||||
- State "DONE" from "TODO" [2023-03-07 Tue 17:09]
|
||||
- State "DONE" from "TODO" [2023-02-13 Mon 10:02]
|
||||
- State "DONE" from "TODO" [2023-01-23 Mon 17:31]
|
||||
- State "DONE" from "TODO" [2023-01-04 Wed 10:49]
|
||||
- State "DONE" from "TODO" [2022-12-02 Fri 19:10]
|
||||
:END:
|
||||
*** TODO Appeler Maman
|
||||
SCHEDULED: <2023-02-15 Wed 12:00 .+1w>
|
||||
SCHEDULED: <2023-04-07 Fri 12:00 .+1w>
|
||||
:PROPERTIES:
|
||||
:STYLE: habit
|
||||
:LAST_REPEAT: [2023-02-08 Wed 14:16]
|
||||
:LAST_REPEAT: [2023-03-31 Fri 14:07]
|
||||
:END:
|
||||
:LOGBOOK:
|
||||
- State "DONE" from "TODO" [2023-03-31 Fri 14:07]
|
||||
- State "DONE" from "TODO" [2023-03-20 Mon 10:00]
|
||||
- State "DONE" from "TODO" [2023-03-07 Tue 17:10]
|
||||
- State "DONE" from "TODO" [2023-02-08 Wed 14:16]
|
||||
- State "DONE" from "TODO" [2023-01-31 Tue 13:14]
|
||||
- State "DONE" from "TODO" [2023-01-24 Tue 15:15]
|
||||
|
@ -283,11 +279,12 @@ SCHEDULED: <2023-09-19 Tue +1y>
|
|||
:END:
|
||||
[2020-05-23 Sat 10:32]
|
||||
*** TODO [#A] Cadeau Rencontre Krystelle (1995) :yearly:
|
||||
DEADLINE: <2023-04-08 Sat +1y -2w>
|
||||
DEADLINE: <2024-04-08 Mon +1y -2w>
|
||||
:PROPERTIES:
|
||||
:LAST_REPEAT: [2022-04-07 Thu 11:56]
|
||||
:LAST_REPEAT: [2023-04-04 Tue 22:57]
|
||||
:END:
|
||||
:LOGBOOK:
|
||||
- State "DONE" from "TODO" [2023-04-04 Tue 22:57]
|
||||
- State "DONE" from "TODO" [2022-04-07 Thu 11:56]
|
||||
:END:
|
||||
*** TODO [#A] Cadeau Mariage Krystelle (2000) :yearly:
|
||||
|
@ -459,4 +456,120 @@ CLOCK: [2020-09-01 Tue 12:13]--[2020-09-01 Tue 12:13] => 0:00
|
|||
|
||||
#+begin_comment
- =SPC m s c=
- =org-clone-subtree-with-time-shift=
#+end_comment
|
||||
* IN-PROGRESS Answer to Austin Haas about clients :chore:
|
||||
:LOGBOOK:
|
||||
CLOCK: [2023-03-09 Thu 11:03]--[2023-03-09 Thu 17:06] => 6:03
|
||||
:END:
|
||||
[2023-03-09 Thu 11:03]
|
||||
|
||||
Just by looking, I think some clients probably disappeared (in TEST).
From what I can see, most clients belong to Chris Sims, who created specific
Orgs in all ENVs to create the modules.
|
||||
|
||||
#+begin_src
|
||||
NAM
|
||||
module-id: d80e8041-e8ed-4d42-9b4c-7b0a7a4a6d1b
|
||||
client-id: client-d8d91871-2735-43e6-bfca-ed4cb6b89f23
|
||||
|
||||
{
|
||||
"scopes": [
|
||||
"integration/module-type",
|
||||
"admin/integration/module-type:write"
|
||||
],
|
||||
"description": "Used to create and update the Threat Grid SecureX module type.",
|
||||
"approved?": true,
|
||||
"redirects": [],
|
||||
"availability": "org",
|
||||
"password": "$s0$f0801$MG1GFImf7eHwuRKfqg8H+w==$W2h47bWx0Q3rTRjfidgSXvA+cGCC7b1AeqCh+z30978=",
|
||||
"name": "TG Module Creation/Updates",
|
||||
"org-id": "964a8c3b-9aef-4e1d-aadf-e2754004d230",
|
||||
"enabled?": true,
|
||||
"grants": [
|
||||
"client-creds"
|
||||
],
|
||||
"client-type": "confidential",
|
||||
"id": "client-d8d91871-2735-43e6-bfca-ed4cb6b89f23",
|
||||
"approval-status": "approved",
|
||||
"owner-id": "2f6ccd76-270e-4785-a33f-ea24400bc5a5",
|
||||
"created-at": "2020-05-11T22:13:49.892Z"
|
||||
}
|
||||
belongs to Chris Sims
|
||||
#+end_src
|
||||
|
||||
#+begin_src
|
||||
EU
|
||||
module-id: 28ef9a98-cd14-4a11-a2eb-6b80c5bb82fe
|
||||
client-id: client-6f81864f-04e1-444a-ac92-e242797ed12f
|
||||
|
||||
|
||||
{
|
||||
"scopes": [
|
||||
"integration/module-type",
|
||||
"admin/integration/module-type:write"
|
||||
],
|
||||
"description": "Used to create and update the Threat Grid SecureX module type.",
|
||||
"approved?": true,
|
||||
"redirects": [],
|
||||
"availability": "org",
|
||||
"password": "$s0$f0801$7G0SDYzMCP2zNbDhi37Ahg==$ijMPk/LtBcTZlsifNl571QDOfxX4lQzcsIOFJYgnF3A=",
|
||||
"name": "TG Module Creation/Updates",
|
||||
"org-id": "99c5cf95-7788-4ce1-906f-86811aa57752",
|
||||
"enabled?": true,
|
||||
"grants": [
|
||||
"client-creds"
|
||||
],
|
||||
"client-type": "confidential",
|
||||
"id": "client-6f81864f-04e1-444a-ac92-e242797ed12f",
|
||||
"approval-status": "approved",
|
||||
"owner-id": "3f6edf85-9ad3-4098-be43-0b46d117f9ca",
|
||||
"created-at": "2020-05-11T22:08:04.428Z"
|
||||
}
|
||||
#+end_src
|
||||
|
||||
#+begin_src
|
||||
APJC
|
||||
module-id: f82062a6-5b17-4943-b67e-2555bbcc95d4
|
||||
client-id: client-73096290-4908-4a9a-bf0c-b29337ae58f6
|
||||
|
||||
{
|
||||
"scopes": [
|
||||
"integration/module-type",
|
||||
"admin/integration/module-type:write"
|
||||
],
|
||||
"description": "Used to create and update the Threat Grid SecureX module type.",
|
||||
"approved?": true,
|
||||
"redirects": [],
|
||||
"availability": "org",
|
||||
"password": "$s0$f0801$qCVLku7mTWOAdzqWoMV/yA==$BTeIKEL2EcHdL0/wR4Q5CfYHjDlinDhiTSaGN0fXJKg=",
|
||||
"name": "TG Module Creation/Updates",
|
||||
"org-id": "4f169b08-bb0d-4e97-a358-8fd3fd819066",
|
||||
"enabled?": true,
|
||||
"grants": [
|
||||
"client-creds"
|
||||
],
|
||||
"client-type": "confidential",
|
||||
"id": "client-73096290-4908-4a9a-bf0c-b29337ae58f6",
|
||||
"approval-status": "approved",
|
||||
"owner-id": "fe332b50-62ae-4ac9-8eb0-4b9b39565bfc",
|
||||
"created-at": "2020-05-11T22:17:37.247Z"
|
||||
}
|
||||
|
||||
owned by:
|
||||
|
||||
"user-email": "chrsims+apjc_modules@cisco.com",
|
||||
"user-name": "Chris Sims"
|
||||
from Org: 4f169b08-bb0d-4e97-a358-8fd3fd819066
|
||||
named: "Cisco Modules"
|
||||
#+end_src
|
||||
* Déclarer sinistre Aygo Assurance
|
||||
* DONE commander gâteau
|
||||
SCHEDULED: <2023-04-06 Thu 11:30>
|
||||
[2023-04-06 Thu 07:23]
|
||||
* DONE goûter chocolat
|
||||
SCHEDULED: <2023-04-06 Thu 11:30>
|
||||
[2023-04-06 Thu 07:24]
|
||||
* TODO acheter lapins lindt
|
||||
SCHEDULED: <2023-04-06 Thu 11:30>
|
||||
[2023-04-06 Thu 07:25]
|
||||
|
|
|
@ -10,100 +10,15 @@
|
|||
TL;DR: This is how I created a =docker-compose= replacement with ~nix-shell~.
|
||||
Here is a solution for a composable nix-shell setup focused on
replacing =docker-compose=.
|
||||
Here is the main code:
|
||||
|
||||
#+begin_src nix
|
||||
# imports should contain a list of nix files
{ pkgs, imports }:
let confs = map (f: import f { inherit pkgs; }) imports;
    envs = map ({env ? {}, ...}: env) confs;
    # the name of the command stopping each service (':' is a no-op default)
    stops = map ({stop ? ":", ...}: stop) confs;
    # we want to stop all services on exit
    lastConf = { shellHook = ''
        stopall() { ${builtins.concatStringsSep " && " stops}; }
        trap stopall EXIT
      '';
    };
    mergedEnvs = builtins.foldl' (acc: e: acc // e) {} envs;
    zeroConf = { buildInputs = []; nativeBuildInputs = []; shellHook = ""; };
    mergedConfs = builtins.foldl' (acc: {buildInputs ? [], nativeBuildInputs ? [], shellHook ? "", ...}:
      { buildInputs = acc.buildInputs ++ buildInputs;
        nativeBuildInputs = acc.nativeBuildInputs ++ nativeBuildInputs;
        shellHook = acc.shellHook + shellHook;
      }) zeroConf (confs ++ [lastConf]);
in (mergedEnvs // mergedConfs)
|
||||
#+end_src
|
||||
|
||||
#+begin_src nix
|
||||
# example of nix file to be used as import
|
||||
{ pkgs ? import (fetchTarball https://github.com/NixOS/nixpkgs/archive/22.11.tar.gz) {} }:
|
||||
let iport = 16380;
|
||||
port = toString iport;
|
||||
env = {
|
||||
redisConf =
|
||||
pkgs.writeText "redis.conf"
|
||||
''
|
||||
port ${port}
|
||||
dbfilename redis.db
|
||||
dir ${toString ./.}/.redis
|
||||
logfile redis.log
|
||||
'';
|
||||
|
||||
# ENV Variables
|
||||
REDIS_DATA = "${toString ./.}/.redis";
|
||||
};
|
||||
in env // {
|
||||
# Warning: if you add an attribute like an ENV VAR, you must do it via env.
|
||||
inherit env;
|
||||
nativeBuildInputs = [
|
||||
pkgs.redis
|
||||
];
|
||||
|
||||
# Post Shell Hook
|
||||
shellHook = ''
|
||||
echo "Using ${pkgs.redis.name}. port: ${port}"
|
||||
|
||||
[ ! -d $REDIS_DATA ] \
|
||||
&& mkdir -p $REDIS_DATA
|
||||
cat "$redisConf" > $REDIS_DATA/redis.conf
|
||||
function redisstop {
|
||||
echo 'Stopping and Cleaning up Redis'
|
||||
redis-cli -p ${port} shutdown && \
|
||||
rm -rf $REDIS_DATA
|
||||
}
|
||||
nohup redis-server $REDIS_DATA/redis.conf > /dev/null &
|
||||
trap redisstop EXIT
|
||||
'';
|
||||
# the function to call on EXIT
|
||||
stop = "redisstop";
|
||||
}
|
||||
#+end_src
|
||||
|
||||
** Introduction
|
||||
|
||||
So I work on a project for which we used Docker to locally run integration tests.
|
||||
More precisely we used =docker-compose= to launch different services, most of them
|
||||
being databases.
|
||||
The project is big enough that we need many different databases and other services.
|
||||
At work we use =docker-compose= to run integration tests on a big project that needs
to connect to multiple different databases as well as a few other services.
|
||||
|
||||
I have been following nix for a while, and in particular I use nix on macOS
to create local development environments.
But I have never used NixOS, even though I plan to do so on my remote server.
In fact, I use nix on a very old Linux distro to run recent software.
|
||||
|
||||
Anyway, after Docker started to change its licensing on macOS, I wanted to get
rid of it. In fact, even before the licensing issue, I wanted to get rid of
Docker for Mac.
|
||||
|
||||
So I tried many times to replace =docker-compose= with =nix=.
And even though I am interested in nix, I never really dug into it, so my
knowledge of it is incomplete and imprecise.
But I know just enough to be able to start writing scripts with nix taking care of
dependencies, and similarly, I can write quick and dirty =shell.nix= files for all my
personal projects. Recently I started to add =flake.nix= files around too.
|
||||
|
||||
So here is how to easily replace docker-compose with nix, in a way that also composes.
This article is about how to replace =docker-compose= with =nix= for a local dev
environment.
|
||||
|
||||
** =nix-shell-fu= level 1 lesson
|
||||
|
||||
|
@ -275,7 +190,7 @@ Using redis-6.2.3 on port 16380
|
|||
1785:M 10 Feb 2023 20:50:00.881 * Ready to accept connections
|
||||
#+end_src
|
||||
|
||||
Woo! Now we can control the port from the file.
|
||||
Woo! We control the port from the file.
|
||||
That's nice.
|
||||
But, hmmm, as you might have noticed, when you quit the session it dumps the DB
as the file =dump.rdb=.
|
||||
|
@ -288,7 +203,7 @@ file and declare a directory that will contain all the state of the DB and of
|
|||
the nix configuration.
|
||||
|
||||
#+begin_src nix
|
||||
{ pkgs ? import (fetchTarball https://github.com/NixOS/nixpkgs/archive/21.05.tar.gz) {} }:
|
||||
{ pkgs ? import (fetchTarball https://github.com/NixOS/nixpkgs/archive/22.11.tar.gz) {} }:
|
||||
let iport = 16380;
|
||||
port = toString iport;
|
||||
in pkgs.mkShell (rec {
|
||||
|
@ -315,6 +230,7 @@ in pkgs.mkShell (rec {
|
|||
alias redisstop="echo 'Stopping Redis'; redis-cli -p ${port} shutdown; rm -rf $REDIS_DATA"
|
||||
nohup redis-server $REDIS_DATA/redis.conf > /dev/null 2>&1 &
|
||||
echo "When finished just run redisstop && exit"
|
||||
trap redisstop EXIT
|
||||
'';
|
||||
})
|
||||
#+end_src
|
||||
|
@ -373,12 +289,146 @@ redis, then purge all redis related data (as you would like in a development env
|
|||
Also, compared to the previous version, redis is launched in the background so you
can run commands in your nix shell.
|
||||
|
||||
Notice I also run the ~redisstop~ command on exit of the nix-shell. So when you close
the nix-shell, redis is stopped and the DB state is cleaned up.
|
||||
|
||||
** =nix-shell-fu= level 3 lesson; composability
|
||||
|
||||
So in order for this part to be easier to follow, we'll go back to our first
|
||||
example with the shell.nix that just ran hello.
|
||||
Imagine we create another similar nix file, but this time to launch postgresql.
Roughly, you will again build a nix set that contains a few env variables,
along with the entries =buildInputs=, =nativeBuildInputs= and =shellHook=.
|
||||
|
||||
|
||||
The issue is that in both nix files you will have the following form:
|
||||
|
||||
#+begin_src nix
|
||||
{ pkgs ? import ( ... ) {} }:
|
||||
mkShell { PGDATA = ...;
|
||||
buildInputs = [ dependency-1 ... dependency-n ];
|
||||
nativeBuildInputs = [ dependency-1 ... dependency-n ];
|
||||
shellHook = '' ... '';
|
||||
}
|
||||
#+end_src
|
||||
|
||||
And you cannot compose that directly.
So to solve the problem, we will change this format: we remove the =mkShell= call
and return the set we would have passed to =mkShell= instead.
We also need to be more precise about where the environment
variables are declared.
|
||||
|
||||
#+begin_src nix
|
||||
{ pkgs ? import ( ... ) {} }:
|
||||
let env = { PGDATA = ...; }
|
||||
in { inherit env; # equivalent to env = env;
|
||||
buildInputs = [ dependency-1 ... dependency-n ];
|
||||
nativeBuildInputs = [ dependency-1 ... dependency-n ];
|
||||
shellHook = '' ... '';
|
||||
}
|
||||
#+end_src
|
||||
|
||||
With this, we can compose two nix sets into a single merged one that is
suitable as the argument of mkShell.
Another minor but important detail: in bash, the ~trap~ command does not
accumulate handlers but replaces them. For our needs, we want to run all stop
functions on exit, so the ~trap~ directive added in the shell hook does not compose
naturally. This is why we add a =stop= value that will contain the name of the
bash function to call to stop and clean up a service.
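The replacement semantics of ~trap~ can be seen in a two-line experiment (a minimal sketch; ~stop-redis~ and ~stop-postgres~ are placeholder names):

#+begin_src sh
# A later `trap ... EXIT` REPLACES the earlier handler; it does not accumulate.
out=$(sh -c 'trap "echo stop-redis" EXIT; trap "echo stop-postgres" EXIT')
echo "$out"   # prints only: stop-postgres
#+end_src

Only the last registered handler runs, so naively concatenating shell hooks that each set an EXIT trap would silently drop all cleanups but the last one.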
|
||||
|
||||
Finally, the main structure for each of our services will look like:
|
||||
|
||||
#+begin_src nix
|
||||
{ pkgs ? import ( ... ) {} }:
|
||||
let env = { PGDATA = ...; }
|
||||
in { inherit env; # equivalent to env = env;
|
||||
buildInputs = [ dependency-1 ... dependency-n ];
|
||||
nativeBuildInputs = [ dependency-1 ... dependency-n ];
|
||||
shellHook = '' ... '';
|
||||
stop = "stoppostgres"
|
||||
}
|
||||
#+end_src
|
||||
|
||||
To merge, we mainly just need to run:
|
||||
|
||||
#+begin_src nix
|
||||
{ pkgs ? import (...) {}}:
|
||||
let
|
||||
# merge all the env sets
|
||||
mergedEnvs = builtins.foldl' (acc: e: acc // e) {} envs;
|
||||
# merge all the confs by accumulating the dependencies
|
||||
# and concatenating the shell hooks.
|
||||
mergedConfs =
|
||||
builtins.foldl'
|
||||
(acc: {buildInputs ? [], nativeBuildInputs ? [], shellHook ? "", ...}:
|
||||
{ buildInputs = acc.buildInputs ++ buildInputs;
|
||||
nativeBuildInputs = acc.nativeBuildInputs ++ nativeBuildInputs;
|
||||
shellHook = acc.shellHook + shellHook;
|
||||
})
|
||||
emptyConf
|
||||
confs;
|
||||
in mkShell (mergedEnvs // mergedConfs)
|
||||
#+end_src
|
||||
|
||||
The full solution, which deals with other minor details like importing the files
and handling the exit of the shell, is here:
|
||||
|
||||
#+begin_src nix
|
||||
{ mergeShellConfs =
|
||||
# imports should contain a list of nix files
|
||||
{ pkgs, imports }:
|
||||
let confs = map (f: import f { inherit pkgs; }) imports;
|
||||
envs = map ({env ? {}, ...}: env) confs;
|
||||
# list the name of a command to stop a service (if none provided just use ':' which mean noop)
|
||||
stops = map ({stop ? ":", ...}: stop) confs;
|
||||
# we want to stop all services on exit
|
||||
stopCmd = builtins.concatStringsSep " && " stops;
|
||||
# we would like to add a shellHook to cleanup the service that will call
|
||||
# all cleaning-up function declared in sub-shells
|
||||
lastConf =
|
||||
{ shellHook = ''
|
||||
stopall() { ${stopCmd}; }
|
||||
echo "You can manually stop all services by calling stopall"
|
||||
trap stopall EXIT
|
||||
'';
|
||||
};
|
||||
# merge Environment variables needed for other shell environments
|
||||
mergedEnvs = builtins.foldl' (acc: e: acc // e) {} envs;
|
||||
# zeroConf is the minimal empty configuration needed
|
||||
zeroConf = {buildInputs = []; nativeBuildInputs = []; shellHook="";};
|
||||
# merge all confs by appending buildInputs and nativeBuildInputs
|
||||
# and by concatenating the shellHooks
|
||||
mergedConfs =
|
||||
builtins.foldl'
|
||||
(acc: {buildInputs ? [], nativeBuildInputs ? [], shellHook ? "", ...}:
|
||||
{ buildInputs = acc.buildInputs ++ buildInputs;
|
||||
nativeBuildInputs = acc.nativeBuildInputs ++ nativeBuildInputs;
|
||||
shellHook = acc.shellHook + shellHook;
|
||||
})
|
||||
zeroConf
|
||||
(confs ++ [lastConf]);
|
||||
in (mergedEnvs // mergedConfs);
|
||||
}
|
||||
#+end_src
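At runtime, the composed shellHook boils down to something like this minimal sketch (dummy stop functions standing in for the real services):

#+begin_src sh
# What the `concatStringsSep " && " stops` composition produces at runtime:
# each service contributed the name of its stop function via its `stop` value.
redisstop() { echo "redis stopped"; }
pgstop()    { echo "pg stopped"; }
stopall()   { redisstop && pgstop; }   # the generated, joined command
stopall
#+end_src

Because a single ~stopall~ function is registered with ~trap stopall EXIT~, every service's cleanup runs, regardless of how many sub-shells contributed one.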
|
||||
|
||||
So I put this function declaration in a file named =./nix/merge-shell.nix=.
And I have a =pg.nix= as well as a =redis.nix= file in the =nix= directory.
At the root of the project, the main =shell.nix= looks like:
|
||||
|
||||
#+begin_src nix
|
||||
{ pkgs ? import (fetchTarball https://github.com/NixOS/nixpkgs/archive/22.11.tar.gz) {} }:
|
||||
let
|
||||
# we import the file, and rename the function mergeShellConfs as mergeShells
|
||||
mergeShells = (import ./nix/merge-shell.nix).mergeShellConfs;
|
||||
# we call mergeShells
|
||||
mergedShellConfs =
|
||||
mergeShells { inherit pkgs;
|
||||
imports = [ ./nix/pg.nix ./nix/redis.nix ];
|
||||
};
|
||||
in pkgs.mkShell mergedShellConfs
|
||||
#+end_src
|
||||
|
||||
And, that's it.
|
||||
|
||||
** Appendix
|
||||
|
||||
*** <<digression>> Digression
|
||||
|
||||
|
@ -390,7 +440,7 @@ But here, this block represent a function.
|
|||
The function takes as input a "nix set" (which you can see as an associative
|
||||
array, or a hash-map or also a javascript object depending on your preference),
|
||||
and this set is expected to contain a field named =pkgs=. If =pkgs= is not provided,
|
||||
it will use the set from the stable version 22.11 of nixpkgs by downloading them
|
||||
from github archive.
|
||||
The second part of the function generates "something" that is returned by an
internal function of the standard library provided by =nix=, which is named
|
||||
|
@ -407,3 +457,106 @@ mechanism to manipulate directly =derivation=. So in order to make that
|
|||
composable, you need to call the =derivation= internal function at the very end only.
|
||||
|
||||
The arguments of all these functions are /nix sets/.
|
||||
*** The full nix files for postgres
|
||||
|
||||
For postgres:
|
||||
|
||||
#+begin_src nix
|
||||
{ pkgs ? import (fetchTarball https://github.com/NixOS/nixpkgs/archive/22.11.tar.gz) {} }:
|
||||
let iport = 15432;
|
||||
port = toString iport;
|
||||
pguser = "pguser";
|
||||
pgpass = "pgpass";
|
||||
pgdb = "iroh";
|
||||
# env should contain all the variables you need to correctly configure mkShell,
# so ENV VARs, but also any other kind of variable.
|
||||
env = {
|
||||
postgresConf =
|
||||
pkgs.writeText "postgresql.conf"
|
||||
''
|
||||
# Add Custom Settings
|
||||
log_min_messages = warning
|
||||
log_min_error_statement = error
|
||||
log_min_duration_statement = 100 # ms
|
||||
log_connections = on
|
||||
log_disconnections = on
|
||||
log_duration = on
|
||||
#log_line_prefix = '[] '
|
||||
log_timezone = 'UTC'
|
||||
log_statement = 'all'
|
||||
log_directory = 'pg_log'
|
||||
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
|
||||
logging_collector = on
|
||||
log_min_error_statement = error
|
||||
'';
|
||||
|
||||
postgresInitScript =
|
||||
pkgs.writeText "init.sql"
|
||||
''
|
||||
CREATE DATABASE ${pgdb};
|
||||
CREATE USER ${pguser} WITH ENCRYPTED PASSWORD '${pgpass}';
|
||||
GRANT ALL PRIVILEGES ON DATABASE ${pgdb} TO ${pguser};
|
||||
'';
|
||||
|
||||
PGDATA = "${toString ./.}/.pg";
|
||||
};
|
||||
in env // {
|
||||
# Warning: if you add an attribute like an ENV VAR, you must do it via env.
|
||||
inherit env;
|
||||
# must contain buildInputs, nativeBuildInputs and shellHook
|
||||
buildInputs = [ pkgs.coreutils
|
||||
pkgs.jdk11
|
||||
pkgs.lsof
|
||||
pkgs.plantuml
|
||||
pkgs.leiningen
|
||||
];
|
||||
nativeBuildInputs = [
|
||||
pkgs.zsh
|
||||
pkgs.vim
|
||||
pkgs.nixpkgs-fmt
|
||||
pkgs.postgresql_11
|
||||
|
||||
# postgres-11 with postgis support
|
||||
# (pkgs.postgresql_11.withPackages (p: [ p.postgis ]))
|
||||
];
|
||||
|
||||
# Post Shell Hook
|
||||
shellHook = ''
|
||||
echo "Using ${pkgs.postgresql_12.name}. port: ${port} user: ${pguser} pass: ${pgpass}"
|
||||
|
||||
# Setup: other env variables
|
||||
export PGHOST="$PGDATA"
|
||||
# Setup: DB
|
||||
[ ! -d $PGDATA ] \
|
||||
&& pg_ctl initdb -o "-U postgres" \
|
||||
&& cat "$postgresConf" >> $PGDATA/postgresql.conf
|
||||
pg_ctl -o "-p ${port} -k $PGDATA" start
|
||||
echo "Creating DB and User"
|
||||
psql -U postgres -p ${port} -f $postgresInitScript
|
||||
|
||||
function pgstop {
|
||||
echo "Stopping and Cleaning up Postgres";
|
||||
pg_ctl stop && rm -rf $PGDATA
|
||||
}
|
||||
|
||||
alias pg="psql -p ${port} -U postgres"
|
||||
echo "Send SQL commands with pg"
|
||||
trap pgstop EXIT
|
||||
'';
|
||||
stop = "pgstop";
|
||||
}
|
||||
#+end_src
|
||||
And to launch just PostgreSQL, there is also the file =./nix/pgshell.nix=,
which simply contains

#+begin_src nix
{ pkgs ? import (fetchTarball https://github.com/NixOS/nixpkgs/archive/22.11.tar.gz) {} }:
let pg = import ./pg.nix { inherit pkgs; };
in with pg; pkgs.mkShell ( env //
  {
    buildInputs = buildInputs;
    nativeBuildInputs = nativeBuildInputs;
    shellHook = shellHook;
  })
#+end_src

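Assuming both files are stored under =./nix/= (as the =./nix/pgshell.nix= path
above suggests), entering the postgres-only shell would then just be:

#+begin_src shell
nix-shell nix/pgshell.nix
#+end_src

On exit, the =trap pgstop EXIT= in the shell hook stops PostgreSQL and removes
the local =.pg= data directory.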

notes/permission_outside_scopes.org (new file)

:PROPERTIES:
:ID: 8c6d80b5-dc83-40ee-b187-4b0427c77f78
:END:
#+title: Permissions outside scopes
#+Author: Yann Esposito
#+Date: [2023-03-10]

- tags :: [[id:ce893df9-32a4-44e0-9eb5-b9817141ee6a][cisco]] [[id:299643a7-00e5-47fb-a987-3b9278e89da3][Auth]]
- source ::

This was really interesting, and the question of when to use scopes or not is a
recurring one.
So I should probably try to explain it more clearly.
Perhaps I would need to write a doc, but if I try to make it easier to
understand, maybe we can think about it this way.

Scopes are permissions that we can control via the OAuth2 clients. So when we
put the permissions inside scopes, we gain:

- the ability to restrict a permission for some clients (for example, we will
  be able to restrict DI access to some clients without restricting access to
  Secure Client, while the user can access both)
- easier permission checking, because all permissions are centralized in the
  scopes, always. This has consequences for the API as well as for the UIs,
  but also for all external clients, so the permissions can be enforced and
  published at the API level.

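To make the centralization argument concrete, here is a hedged sketch
(hypothetical names, not actual IROH code) of why a scope check stays cheap:
the permission travels inside the token, so the check is a pure set-membership
test, whereas a second permission system would force an additional lookup per
API call.

#+begin_src python
def has_scope(token_scopes: set[str], required: str) -> bool:
    """Scope check: pure set membership on data already in the token."""
    return required in token_scopes

# Hypothetical scopes granted through an OAuth2 client:
token_scopes = {"profile", "inspect", "private-intel"}

assert has_scope(token_scopes, "inspect")    # allowed
assert not has_scope(token_scopes, "admin")  # denied, no extra DB call needed
#+end_src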
If we plan to have another set of permissions outside the scopes, say a list of
permissions in another entity (like in the entitlement of the Org, or something
related):

- In this case, the UI will need to check both the scopes and these new values,
  knowing that the structure of such a list of permissions will be pretty
  similar to the structure of the scopes (mainly a list of strings that
  represent permissions). The clients will not easily be able to know whether
  they can access some resource or not.
- Internally, every API access permission only uses scopes, so we would need to
  add another independent layer of checking. That could cause confusion in the
  code and would probably have a non-negligible impact on the performance of
  every API call (since we will need to check more than the scopes, every API
  call will also need to perform an additional call to the DB).
- We can no longer express that an OAuth2 Client is restricted to some apps
  (if we change the entitlement, we can no longer prevent that client from
  using some app).
- With RBAC I see more and more concern about handling permissions of external
  applications via IROH, so here too, it is easier to handle via scopes.
- Every client (not just the UI and IROH) will need to check two different sets
  of permissions to understand what is allowed to them or not: instead of just
  checking scopes, they will also need to check another permission system with
  potentially different access rules.

Everything about this is quite technical and not easy to convey in a
discussion. But this is why I might need to write it down somewhere, to explain
the advantages and drawbacks of using another dimension for permissions.
A good example of permissions that should not live in scopes is the audiences,
because they are not about the User's permission, but about the Client
permission that is still granted to be used by some User.