Blog

  • AnsibleHardening

    Hardening Linux Server with Ansible …

    [image]

    Ansible is an open source IT automation engine that automates provisioning, configuration management, application deployment, orchestration, and many other IT processes. It is free to use, and the project benefits from the experience and intelligence of its thousands of contributors.

    What We Did

    In this article, we cover essential practices for hardening both the operating system and the SSH configuration to enhance security:

    • Ansible code structuring: we organized our Ansible playbooks and roles for streamlined configuration management and established a clear structure to facilitate scalability and reusability of the automation code.
    • Operating system hardening: key steps for securing the OS, including system updates, disabling unnecessary services, and securing system configurations.
    • SSH hardening: configurations that strengthen SSH security, such as limiting access, enforcing strong authentication, and other best practices.

    We want to run OS hardening on Server01 from DesktopTest (Ubuntu).

    First, we must install Ansible on DesktopTest:

    sudo apt-add-repository ppa:ansible/ansible
    
    sudo apt update
    
    sudo apt install ansible
    
    ansible --version
    

    Note: Python must be installed on every Linux machine involved. Keep in mind that there is one Linux machine acting as the source (the Ansible management node, where Ansible is installed) and one or more Linux machines acting as destinations, which are the nodes to be hardened by Ansible.

    Important: the management node must be able to SSH to every destination node without a password.

    For this reason, we generate an SSH key on the management node and copy it to each destination node.

    On the management node:

    ssh-keygen -t rsa -b 4096
    

    [screenshot]

    The generated keys are in /root/.ssh:

    [screenshot]

    Copy the public key (id_rsa.pub) to the other nodes:

    ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.56.151
    

    [screenshot]

    Check that the public key is present on both the management and destination nodes.

    Management node (DesktopTest):

    [screenshot]

    Destination node (Server01):

    [screenshot]

    Test the passwordless SSH login:

    ssh root@192.168.56.151
    

    [screenshot]

    We use some of the rules from the ansible-collection-hardening collection (published on Ansible Galaxy as devsec.hardening).
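
    A playbooks/hardening.yml built on that collection, installed with ansible-galaxy collection install devsec.hardening, could look like the following sketch. This is only an illustration; the actual playbook in the repository may differ:

    - name: Harden destination nodes
      hosts: all
      become: true
      roles:
        - devsec.hardening.os_hardening
        - devsec.hardening.ssh_hardening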

    Then check the Ansible project and files on the management node:

    [screenshot]

    All the project files are available in my GitHub repository.

    Files on the management node:

    [screenshot]

    Edit the inventory file according to your destination node(s):

    all:
      vars:
        ansible_user: root
        ansible_port: 22
      children:
        single-node:
          hosts:
            192.168.56.151:
    

    [screenshot]

    Then ping all destination host(s) to make sure they are reachable before running the Ansible playbooks (add -i inventory/RahBia.yml if the inventory is not configured as the default):

    ansible all -m ping
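
    If you prefer not to pass -i to every command, one option (a convenience, not part of the original project layout) is a small ansible.cfg in the project root:

    [defaults]
    inventory = inventory/RahBia.yml
    host_key_checking = False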
    

    [screenshot]

    Then run the playbook:

    ansible-playbook -i inventory/RahBia.yml playbooks/hardening.yml
    

    [screenshot]

    If any errors occur, investigate and correct them, then run the playbook again:

    [screenshot]

    Note: the Ansible process is idempotent. An idempotent operation is one that has no additional effect if it is applied more than once with the same input parameters, so re-running the playbook only changes whatever still needs to change.

    ansible-playbook -i inventory/RahBia.yml playbooks/hardening.yml
    

    [screenshot]

    To verify the hardening, install Lynis on the destination nodes:

    apt install lynis
    

    [screenshot]

    Run Lynis to audit the node:

    lynis audit system

    [screenshot]

    Finally, check the grade (84% in our case):

    [screenshot]

    Then review any warnings and suggestions:

    [screenshot]

    You can change the destination SSH port (to 8090 in our case) and run the playbook with that port:
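
    Assuming the devsec ssh_hardening role is used, one way to sketch this change is to set the role's ssh_server_ports variable to the new port and, once a run has switched sshd over, update ansible_port in the inventory so later runs connect on it. This is an illustrative sketch, not the exact change shown in the screenshots:

    all:
      vars:
        ansible_user: root
        ansible_port: 8090
        ssh_server_ports: ['8090']
      children:
        single-node:
          hosts:
            192.168.56.151: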

    [screenshots]

    ansible-playbook -i inventory/RahBia.yml playbooks/hardening.yml
    

    [screenshot]

    ssh root@192.168.56.151 -p 8090
    

    [screenshot]

    ss -nltp | grep ssh
    

    [screenshot]

    Visit original content creator repository https://github.com/kayvansol/AnsibleHardening
  • obsidian-Smart2Brain


    Your Smart Second Brain

    Your Smart Second Brain is a free and open-source Obsidian plugin to improve your overall knowledge management. It serves as your personal assistant, powered by large language models like ChatGPT or Llama2. It can directly access and process your notes, eliminating the need for manual prompt editing and it can operate completely offline, ensuring your data remains private and secure.

    S2B Chat

    🌟 Features

    📝 Chat with your Notes

    • RAG pipeline: All your notes will be embedded into vectors and then retrieved based on the similarity to your query in order to generate an answer based on the retrieved notes
    • Get reference links to notes: Because the answers are generated based on your retrieved notes we can trace where the information comes from and reference the origin of the knowledge in the answers as Obsidian links
    • Chat with LLM: You can disable the function to answer queries based on your notes and then all the answers generated are based on the chosen LLM’s training knowledge
    • Save chats: You can save your chats and continue the conversation at a later time
    • Different chat views: You can choose between two chat views: the ‘comfy’ and the ‘compact’ view

    🤖 Choose ANY preferred Large Language Model (LLM)

    • Ollama to integrate LLMs: Ollama is a tool to run LLMs locally. Its usage is similar to Docker, but it’s specifically designed for LLMs. You can use it as an interactive shell, through its REST API, or using it from a Python library.
    • Quickly switch between LLMs: Comfortably change between different LLMs for different purposes, for example changing from one for scientific writing to one for persuasive writing.
    • Use ChatGPT: Although our focus lies on a privacy-focused AI Assistant, you can still leverage OpenAI’s models and their advanced capabilities.

    ⚠️ Limitations

    • Performance depends on the chosen LLM: As LLMs are trained for different tasks, LLMs perform better or worse in embedding notes or generating answers. You can go with our recommendations or find your own best fit.
    • Quality depends on knowledge structure and organization: The response improves when you have a clear structure and do not mix unrelated information or connect unrelated notes. Therefore, we recommend a well-structured vault and notes.
    • AI Assistant might generate incorrect or irrelevant answers: Due to a lack of relevant notes or limitations of the AI’s understanding, the AI Assistant might generate unsatisfying answers. In those cases, we recommend rephrasing your query or describing the context in more detail.

    🔧 Getting started

    Note

    If you use Obsidian Sync the vector store binaries might take up a lot of space due to the version history.
    Exclude the .obsidian/plugins/smart-second-brain/vectorstores folder in the Obsidian Sync settings to avoid this.

    Follow the onboarding instructions provided on initial plugin startup in Obsidian.

    ⚙️ Under the hood

    Check out our Architecture Wiki page and our backend repo papa-ts.

    🎯 Roadmap

    • Support Gemini and Claude models and OpenAI likes (Openrouter…)
    • Similar note connections view
    • Chat Threads
    • Hybrid Vector Search (e.g. for Time-based retrieval)
    • Inline AI Assistant
    • Predictive Note Placement
    • Agent with Obsidian tooling
    • Multimodality
    • Benchmarking

    🧑‍💻 About us

    We initially made this plugin as part of a university project, which is now complete. However, we are still fully committed to developing and improving the assistant in our spare time. This repo and the papa-ts (backend) repo serve as an experimental playground, allowing us to explore state-of-the-art AI topics further and as a tool to enrich the Obsidian experience we’re so passionate about. If you have any suggestions or wish to contribute, we would greatly appreciate it.

    📢 You want to support?

    • Report issues or open a feature request here
    • Open a PR for code contributions (Development setup instructions TBD)

    ❓ FAQ

    Don’t hesitate to ask your question in the Q&A

    Are any queries sent to the cloud?

    Queries are sent to the cloud only if you choose to use OpenAI’s models. If you instead choose Ollama to run your models locally, your data is never sent to any cloud service and stays on your machine.

    How does it differ from the SmartConnections plugin?

    Our plugin is quite similar to Smart Connections. However, we improve it based on our experience and the research we do for the university.

    For now, these are the main differences:

    • We are completely open-source
    • We support Ollama/local models without needing a license
    • We place more value on UI/UX
    • We use a different tech stack leveraging Langchain and Orama as our vector store
    • Under the hood, our RAG pipeline uses other techniques to process your notes like hierarchical tree summarization

    What models do you recommend?

    OpenAI’s models are still the most capable, especially “GPT-4” and “text-embedding-3-large”. The best-performing local embedding model we have tested so far is “mxbai-embed-large”.

    Does it support multi-language vaults?

    It’s supported, although the response quality may vary depending on which prompt language is used internally (we will support more translations in the future) and which models you use. It should work best with OpenAI’s “text-embedding-3-large” model.

    Visit original content creator repository https://github.com/your-papa/obsidian-Smart2Brain
  • Blog-App

    Blog App


    📗 Table of Contents

    📖 Blog App

    This is a fullstack blog application. A user can create an account to log in and post articles. In addition, each user can interact with other users’ posts by adding comments and liking posts. It uses CRUD methods to create, read, update and delete posts.

    Tech Stack

    Client
    • HTML, CSS
    • JavaScript
    Server
    • Ruby on Rails
    Database
    • PostgreSQL

    Steps of creating the application:

    • 1: Creating a data model.
    • 2: Validations and Model specs.
    • 3: Processing data in models.
    • 4: Setup and controllers.
    • 5: Controllers specs.
    • 6: Views.
    • 7: Forms.
    • 8: Integration specs for Views and fixing n+1 problems.
    • 9: Add Devise.
    • 10: Add authorization rules.
    • 11: Add API endpoints.

    Key features :

    • Create an account
    • All users and their posts can be displayed
    • Create a post
    • Edit a post
    • Delete a post

    ERD diagram

    blog_app_erd

    Deployment

    [Video] Coming soon [Live demo] Coming soon

    🛠 Built With

    (back to top)

    💻 Getting Started


    To get a local copy up and running, follow these steps.

    Prerequisites

    In order to run this project you need the following:

    • git
    • Ruby
    • rails
    • psql

    Setup

    Clone this repository:

      git clone https://github.com/TracyMuso/Blog-App.git

    Go to your project

      cd Blog-App

    Install

    Important! You need to have RSpec installed on your computer.

    Install this project’s dependencies with:

      bundle install

    Usage

    To run the project, execute the following command:

      rails server   # or the shorthand: rails s

    Run tests

    To run tests, run the following command:

      rspec spec/file_spec.rb

    👥 Authors

    👤 Tracy Musongole

    👤 Danny Baraka

    Future Features

    • Account creation

      • Users will be able to create accounts to log in or out
    • Post creation and interaction

      • Users will be able to create, read, edit and delete posts
      • Users will also be able to like and comment on each other’s posts

    (back to top)

    🤝 Contributing

    Contributions, issues, and feature requests are welcome!

    Feel free to check the issues page.

    (back to top)

    ⭐️ Show your support


    If you like this project…

    (back to top)

    🙏 Acknowledgments


    I would like to thank…

    • Thanks to Microverse for giving this opportunity to learn …
    • Code Reviewers & Coding Partners.
    • Hat tip to anyone whose code was used.
    • Inspiration.

    (back to top)

    📝 License

    This project is MIT licensed.

    NOTE: we recommend using the MIT license – you can set it up quickly by using templates available on GitHub. You can also use any other license if you wish.

    (back to top)

    Visit original content creator repository https://github.com/TracyMuso/Blog-App
  • printf

    Visit original content creator repository https://github.com/jorgezafra94/printf

  • NFL_Predictive_Model_v2

    Table of Contents

    1. Installation
    2. Project Motivation
    3. File Descriptions
    4. Results
    5. Licensing, Authors, and Acknowledgements

    Installation

    The code only requires the standard installation of Anaconda Python. pip is needed to install the pandas, numpy, sklearn, and xgboost libraries.

    Important Note: The XGBoost version used for this project is 0.81. Using version 0.90 will result in different results.
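
    For example, the dependencies can be installed with pip, pinning XGBoost to the version noted above:

    pip install pandas numpy scikit-learn xgboost==0.81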

    Project Motivation

    The purpose of this project is to continue improving one of my earlier projects by incorporating lessons learned and additional techniques.

    File Descriptions

    There are two folders. One for data analysis and one for machine learning. Each folder contains a copy of the data.

    The data analysis folder contains 2 python scripts. The first one is for supporting functions.

    The machine learning folder contains 4 Python scripts: one provides supporting functions, and there is one each for parameter tuning, feature selection, and building the final model. This is done primarily for ease of use.

    The parameter tuning and feature selection results are available in the “Results” folder.

    Results

    The final report is available as a PDF within the GitHub repository below.

    The GitHub repository is located here.

    2019 NFL Season Progress Results

    There will be weekly Medium posts tracking the performance of the model during the 2019 NFL season.

    Week 2
    Week 3
    Week 4
    Week 5
    Week 6
    Week 7
    Week 8
    Week 9
    Week 10
    Week 11
    Week 12
    Week 13
    Week 15

    Licensing, Authors, Acknowledgements

    NFLDB – https://github.com/BurntSushi/nfldb
    NFL Betting Data – http://www.footballlocks.com/nfl_lines.shtml
    NFL Weather Data – http://www.nflweather.com/

    Visit original content creator repository https://github.com/dkim319/NFL_Predictive_Model_v2

  • FlipDot-Designer

    FlipDot-Designer

    With this program you can:

    • Create displays and export as arrays:
      • Format {col1, col2, … , coln} (each colx is a bitwise representation of the column; a decoding sketch follows the example at the end of this section)
      • Pick where you want your origin (top-left, bottom-left)
    • Create display libraries such as numbers, alphabet, fonts, etc…
    • Works similar to MS Paint
      • Left click or drag to turn dots on
      • Right click or drag to turn dots off
    • Stream data and Export Displays over serial.
    • Works with flip dot boards of any size.

    VirusTotal and other scanners may falsely flag the executable because it is compiled with PyInstaller.
    You can also just run it from the .py file if you feel unsafe :).

    Examples of Program:

    Menu

    Picture of menu

    Exporting to Array

    Picture of exportExample

    How does the program work?

    Picture of calculationProcedure

    Example

    Picture of example1

    • {0,0,11838,10760,14856,62,14336,8254,14378,42,14336,2110,12320,32,14336,2110,12320,32,14336,8254,63522,62,0,0,0,3584,2048,15934,32,15934,8736,15934,0,574,14370,10302,0,62,26,46,0,62,32,32,0,62,34,28,0,0,0,0,0,0,0,0,0,0,0,24830,24830,24774,24774,24774,29126,16262,7942,0,1584,1584,0,8184,16380,28686,24582,24582,24582,24582,28686,16380,8184,0,0,8184,16380,28686,24582,24582,24582,24582,28686,16380,8184,0,63488,10240,14336,0,14336,2048,14336,2048,14336,0,0,0,0,0,0,0,0,0};
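
    As a rough illustration of the column format described above, an exported value can be decoded back into individual dot states like this (a hypothetical Python sketch; the exact bit order depends on the origin chosen at export time):

    # Hypothetical helper: decode one exported column value into per-dot on/off states.
    def column_to_dots(value, rows=16):
        # Bit i of the column corresponds to the dot in row i, counting from the chosen origin.
        return [bool((value >> row) & 1) for row in range(rows)]

    # Example: decode one column (11838) from the array above, assuming a 16-dot-high board.
    print(column_to_dots(11838))
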
    Visit original content creator repository https://github.com/tylerebowers/FlipDot-Designer
  • LavoroAgile

    Lavoro Agile

    Lavoro Agile

    Introduction

    Lavoro Agile is an application for managing smart-working agreements within a public administration.

    The implementation complies with the provisions of Decree-Law no. 183 of 31 December 2020, as amended (Decreto Legge 31 dicembre 2020, n. 183 e s.m.i.).

    The application integrates with:

    • the open source workflow engine Elsa, which runs the agreement approval workflows
    • Zucchetti, from which it retrieves the user registry through a custom report
    • ZTimesheet, to which it sends information on activities and smart-working days
    • the Ministry of Labour (Ministero del Lavoro), to which information about smart-working agreements is sent to support auditing activities
    • an email server, used to send notifications
    • an LDAP server, used to authenticate users in an intranet installation

    The Zucchetti integration can be disabled; in its place you can enable a mode that retrieves information about organizational structures directly from the database, provided the administrator has registered them first.

    All outgoing interactions (ZTimesheet, Ministry, email) are mediated by a queuing system.

    Alongside the main application there are two companion applications that allow you to define workflows and to monitor the state of the queue and of the workflows.

    Table of contents

    Documentation

    For a complete guide to the architecture and the flows, refer to the project documentation.

    Features

    The application supports five types of users, each with its own specific features. The roles and the main features associated with them are listed below.

    • The Administrator, identified by the Administrator role, can:

      • Create user accounts, optionally granting them the administrator role
      • Create structures. When the Zucchetti integration is enabled, only first-level structures can be created and an internal contact person can be associated with them; when the database-backed structure mode is enabled, the other two levels can also be registered, and the information about the managers of each level can be set.
      • Manage the members of the technical secretariat
      • Apply remediations to agreements (bring them back to a previous state, delete them, delete their assessment, …)
    • A member of the Technical Secretariat, i.e. a user registered by the administrator as a member of the technical secretariat, can:

      • Search agreements for all employees of the administration
      • Take part in the agreement assessment flow by adding notes for the employee
      • Consult a monitoring dashboard that provides statistics on the data in the platform (for example the number of active agreements, the average number of smart-working days per agreement, …)
    • The Internal Contact Person, i.e. a user set as the technical contact for at least one structure, can search agreements for the structures they are assigned to. The purpose of this role is to support employees during the definition, signing and assessment phases of the agreement.

    • The Agreement Manager, i.e. a user set as agreement manager for at least one structure, can:

      • Search agreements submitted by their subordinates
      • Approve, reject or request additions to an agreement submitted by a subordinate
      • Assess an agreement submitted by a subordinate
    • The User, i.e. any user of the application. Note that all of the roles above are also recognized as users, so when they log in they can also perform the operations available to a regular user:

      • Define and sign a smart-working agreement
      • Search their own agreements
      • View the details of their previous agreements
      • View the details of the agreement currently in force
      • Withdraw from an agreement
      • Send a renewal request for an ongoing agreement
      • Send a revision request for an ongoing agreement
      • View the history of the phases an agreement has gone through
      • Browse the history of agreements
      • Fill in the self-assessment to be sent to their manager for approval

    Roadmap

    The following features are planned for upcoming releases of Lavoro Agile:

    • Ability to request a password reset
    • Ability to set a password for the members of the technical secretariat
    • Upgrade of the Elsa workflow engine version
    • Ability to use a workflow managed by code instead of by the Elsa workflow library
    • Ability to customize the name of the administration by editing the configuration file
    • Revision of the user interface to improve its UX
    • Support for sending email through the Graph API

    How to contribute

    Contributions that improve the software are always welcome. The rules for contributing are listed below:

    1. Fork and clone the repository

    To get started, create a fork of the repository in your own GitHub account. To create a fork, from the repository home page click the “Fork” button in the command bar at the top. Once the fork has been created, you can clone the repository to your machine with the command:

    git clone https://github.com/YOUR_USERNAME/lavoro-agile.git

    Replace YOUR_USERNAME with your own GitHub username. For more information on how to create a fork, see the official GitHub documentation here.

    2. Open PCM-LavoroAgile.sln with your preferred IDE

    Open the folder where the repository was cloned and then open the PCM-LavoroAgile.sln solution file with your preferred IDE. The only requirement is that the IDE supports .NET 8 development. You can use, for example, Visual Studio, JetBrains Rider, or Visual Studio Code with the appropriate extensions.

    Opening the solution, you will find three web applications in the src folder:

    • PCM-LavoroAgile, the main application, which runs the smart-working agreement management system
    • PCM-MonitoringDashboard, the application that runs the monitoring dashboard for workflows and queues
    • PCM-WorkflowDefinition, the application used to define workflows to be executed on the Elsa workflow engine (refer to the documentation for information on how to publish new flows or updates to the approval flow)

    The web apps are configured to make development as easy as possible: they do not depend on software installed on machines other than the one you develop on, and they use local, possibly dockerized, instances of SQL Server and of the mail server. In addition, the main app (PCM-LavoroAgile) is configured with the Zucchetti integration turned off and with structures read from the database.

    For SQL Server and the mail server, the recommendation is to install Docker Desktop or Podman Desktop and start two containers from the following images:

    • rnwood/smtp4dev, a fake e-mail server
    • mcr.microsoft.com/mssql/server, Microsoft’s official containerized version of SQL Server, version 2022

    If you use Visual Studio and select the IIS Express launch profile, the apps are configured as follows:

    • PCM-LavoroAgile responds at the URL https://localhost:44380/
    • PCM-MonitoringDashboard responds at the URL https://localhost:44318/
    • PCM-WorkflowDefinition responds at the URL https://localhost:44304/

    If you use a configuration different from the one described so far, you need to make some changes to the configuration files before starting the applications for the first time.

    For the main application (PCM-LavoroAgile), you need to edit this project's appsettings.json/appsettings.Development.json file (a complete description of which can be found in the project documentation). The following settings must be changed:

    • The database connections, held in the ConnectionStrings:DefaultConnection and ConnectionStrings:CAPConnection configuration keys, which must be changed to point to your own database server (Lavoro Agile uses SQL Server, but you are free to use any database supported by Entity Framework by changing the database provider configuration code in the AddDbContext method of the StartupExtensions extension file)
    • MailSettings, to be configured with the details of your e-mail server
    • AllowedOrigins, to be changed to the base URL of the monitoring app (PCM-MonitoringDashboard)
    • Elsa:Server:BaseUrl, to be changed to the base URL of the main app (PCM-LavoroAgile)

    There are then three settings that may be useful (an illustrative appsettings sketch follows this list):

    • StruttureService, whose value can be replaced with Infrastructure.Services.ZucchettiStruttureService, Infrastructure if you want to use the integration with the Zucchetti services (most likely you will first need to adapt the connector to make it work with your own Zucchetti instance, since the services that retrieve the user registry and the smart-working days are custom services defined on Zucchetti)
    • MigrationJobEnabled, which enables the job that initializes the database on the first start of the application. If you prefer to prepare the database yourself (for example because you do not have admin permissions on the database), set this key to false and run the scripts in the scripts folder in any order (they are idempotent, so it is not a problem if they are accidentally run more than once)
    • AdminUser, which contains the username and password of the Admin user created by the system during the first run, or by the scripts if you chose to initialize the database manually. The username and password in this setting are needed to log in to the application for the first time.
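
    For orientation only, a stripped-down appsettings.Development.json for PCM-LavoroAgile could look like the sketch below. This is an illustration, not the real file: the values are placeholders to replace with your own, and the actual file in the repository contains additional sections for the other keys listed above (MailSettings, AllowedOrigins, StruttureService, MigrationJobEnabled, AdminUser, …).

    {
      "ConnectionStrings": {
        "DefaultConnection": "Server=localhost,1433;Database=LavoroAgile;User Id=sa;Password=<your-password>;TrustServerCertificate=True",
        "CAPConnection": "Server=localhost,1433;Database=LavoroAgile;User Id=sa;Password=<your-password>;TrustServerCertificate=True"
      },
      "Elsa": {
        "Server": {
          "BaseUrl": "https://localhost:44380"
        }
      }
    }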

    As for the PCM-MonitoringDashboard application, before starting it for the first time you should review its appsettings.json/appsettings.Development.json file and in particular check, and if necessary change, the following keys:

    • ConnectionStrings:CAPConnection, to be changed so that the app points to your own database
    • Elsa:Server:BaseAddress, to set the base URL of the main app (PCM-LavoroAgile).

    To avoid accidentally pushing credentials or other sensitive information, it is recommended not to put this information directly in the config files but to use the .NET Secret Manager instead. You can access the manager:

    • From Visual Studio, by right-clicking the project and then clicking Manage User Secrets
    • From the command line. In this case, first open a prompt, move to the folder of the app to configure, initialize the manager with the command dotnet user-secrets init, and then add entries with the command dotnet user-secrets set "Key:Subkey" "Value".

    You can find more information about the Secret Manager here.

    See the project documentation for information on how to get started with a new installation.

    If you make changes to at least one of the contexts, you need to generate the migration files and update the SQL files in the scripts folder.

    The application is made up of three contexts:

    • IdentityContext, dedicated to the identity part
    • StrutturaContext, dedicated to structure management
    • AccordoContext, dedicated to agreement management

    Visual Studio makes building, generating release packages, and generating migration files simple and GUI-assisted. If you prefer to use the .NET 8 command line interface, the instructions for building the three applications are given below.

    1. Move to the folder of the application you want to build / publish
    2. Build by running the command dotnet build --runtime win-x64. See the documentation here for more information about the build command.
    3. Create the package by running the command dotnet publish --output 'build' --self-contained true --runtime win-x64. See the documentation here for more information about the publish command.

    In these commands:

    • --self-contained, set to true, bundles the framework runtime into the package; it can be omitted if you want the application to use the framework installed on the system. In that case you first need to install the .NET runtime on the machine, which can be downloaded from here (note that the framework is cross-platform and can therefore be installed on Windows, macOS and Linux).
    • --runtime specifies the runtime to “include” in the package (in the examples, win-x64 includes the runtime for 64-bit Windows; the list of usable runtime identifiers can be found here).

    The package generated by the publish command is the one to deploy to the target application server. Refer to the official documentation of the application server for information on how to install a .NET 8 application.

    Migrations must be generated from the folder of the main app (PCM-LavoroAgile). The first time you want to generate a migration, you need to restore the tool by running the command

    dotnet tool restore

    This installs version 8.0.6 of the dotnet-ef tool.

    The following table shows, for each context, the commands to run to generate a new migration file and to update the corresponding script file.

    IdentityContext
      Migration: dotnet ef migrations add XXXX --context identitycontext --output-dir 'Migrations/Identity'
      Script:    dotnet ef migrations script --context identitycontext --idempotent --output ..\..\scripts\identity.sql
    StrutturaContext
      Migration: dotnet ef migrations add XXXX --context strutturacontext --output-dir 'Migrations/Struttura'
      Script:    dotnet ef migrations script --context strutturacontext --idempotent --output ..\..\scripts\struttura.sql
    AccordoContext
      Migration: dotnet ef migrations add XXXX --context accordocontext --output-dir 'Migrations/Accordo'
      Script:    dotnet ef migrations script --context accordocontext --idempotent --output ..\..\scripts\accordo.sql

    3. Submit a PR with your changes

    When you have finished your code changes and are ready to publish them, push the code to your fork and then, from the GitHub interface, open a pull request towards the official repository. Try to provide as much information as possible to help the reviewers verify the code. For more information, visit the official GitHub page Creating a pull request from a fork.

    Visit original content creator repository https://github.com/italia/LavoroAgile