
  • aks-java-petclinic-mic-srv

    Distributed version of the Spring PetClinic Sample Application deployed to AKS


page_type: sample
languages:

• java

products:

• Azure Kubernetes Service

description: “Deploy Spring Boot apps using AKS & MySQL”
urlFragment: “spring-petclinic-microservices”

[Badges: Build Status · UI Build Status · Pre-req Deployment status · IaC Deployment status · License]

This microservices branch was initially derived from the AngularJS version to demonstrate how to split a sample Spring application into microservices. To achieve that goal we use IaC with Azure Bicep, the Microsoft Build of OpenJDK 11, GitHub Actions, Azure AD Workload Identity, Azure Key Vault, Azure Container Registry and Azure Database for MySQL.

    See :

    Pre-req

    To get an Azure subscription:

• If you have a Visual Studio subscription, you can activate your free credits here
    • If you do not currently have one, you can sign up for a free trial subscription here

    To install Azure Bicep locally, read https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/install
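
Alternatively, Bicep can be installed and kept up to date through the Azure CLI itself:

az bicep install
az bicep version
az bicep upgrade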

    CI/CD

    Use GitHub Actions to deploy the Java microservices

To learn how to build the container image, read :

    Read :

You have to specify some KV secrets that will then be created by the GitHub Action Azure Infra services deployment workflow :

    • SPRING-DATASOURCE-PASSWORD
    • SPRING-CLOUD-AZURE-TENANT-ID
    • VM-ADMIN-PASSWORD

Dashes ‘-‘ are not supported in GH secret names, so the secrets must be named in GH with underscores ‘_’.

(Also, the ‘&’ characters in the SPRING_DATASOURCE_URL must be escaped, e.g. as ‘&amp;’: jdbc:mysql://petcliaks777.mysql.database.azure.com:3306/petclinic?useSSL=true&requireSSL=true&enabledTLSProtocols=TLSv1.2&verifyServerCertificate=true)

    Add the App secrets used by the Spring Config to your GH repo secrets / Actions secrets / Repository secrets / Add :

    Secret Name Secret Value example
    SPRING_DATASOURCE_PASSWORD PUT YOUR PASSWORD HERE
    SPRING_CLOUD_AZURE_TENANT_ID PUT YOUR AZURE TENANT ID HERE
    VM_ADMIN_PASSWORD PUT YOUR PASSWORD HERE
    LOCATION="westeurope"
    RG_KV="rg-iac-kv33"
    RG_APP="rg-iac-aks-petclinic-mic-srv"
    
    az group create --name $RG_KV --location $LOCATION
    az group create --name $RG_APP --location $LOCATION

    A Service Principal is required for GitHub Action Runner, read https://aka.ms/azadsp-cli

    SPN_APP_NAME="gha_aks_run"
    
    # /!\ In CloudShell, the default subscription is not always the one you thought ...
    subName="set here the name of your subscription"
    subName=$(az account list --query "[?name=='${subName}'].{name:name}" --output tsv)
    echo "subscription Name :" $subName
    
    SUBSCRIPTION_ID=$(az account list --query "[?name=='${subName}'].{id:id}" --output tsv)
    SUBSCRIPTION_ID=$(az account show --query id -o tsv)
    TENANT_ID=$(az account show --query tenantId -o tsv)

    Add your AZURE_SUBSCRIPTION_ID, AZURE_TENANT_ID to your GH repo Settings / Security / Secrets and variables / Actions / Actions secrets / Repository secrets

    Read :

To allow the Service Principal used by the GitHub Action Runner to access the Key Vault, execute the commands below:

    #az ad app create --display-name $SPN_APP_NAME > aad_app.json
    # This command will output JSON with an appId that is your client-id. The objectId is APPLICATION-OBJECT-ID and it will be used for creating federated credentials with Graph API calls.
    
    #export APPLICATION_ID=$(cat aad_app.json | jq -r '.appId')
    #export APPLICATION_OBJECT_ID=$(cat aad_app.json | jq -r '.id')
    #az ad sp create --id $APPLICATION_ID
    
    #export CREDENTIAL_NAME="gha_aks_run"
    #export SUBJECT="repo:ezYakaEagle442/aks-java-petclinic-mic-srv:environment:PoC" # "repo:organization/repository:environment:Production"
    #export DESCRIPTION="GitHub Action Runner for Petclinic AKS demo"
    
    #az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/$APPLICATION_OBJECT_ID/federatedIdentityCredentials' --body '{"name":"$CREDENTIAL_NAME","issuer":"https://token.actions.githubusercontent.com","subject":"$SUBJECT","description":"$DESCRIPTION","audiences":["api://AzureADTokenExchange"]}'
    
    # SPN_PWD=$(az ad sp create-for-rbac --name $SPN_APP_NAME --skip-assignment --query password --output tsv)
    az ad sp create-for-rbac --name $SPN_APP_NAME --skip-assignment --sdk-auth
    {
      "clientId": "<GUID>",
      "clientSecret": "<GUID>",
      "subscriptionId": "<GUID>",
      "tenantId": "<GUID>",
      "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
      "resourceManagerEndpointUrl": "https://management.azure.com/",
      "activeDirectoryGraphResourceId": "https://graph.windows.net/",
      "sqlManagementEndpointUrl": "https://management.core.windows.net:8443/",
      "galleryEndpointUrl": "https://gallery.azure.com/",
      "managementEndpointUrl": "https://management.core.windows.net/"
    }

Troubleshoot: If you hit “Error: : No subscriptions found for ***.”, this is related to an IAM privilege issue in the subscription.

    SPN_APP_ID=$(az ad sp list --all --query "[?appDisplayName=='${SPN_APP_NAME}'].{appId:appId}" --output tsv)
    #SPN_APP_ID=$(az ad sp list --show-mine --query "[?appDisplayName=='${SPN_APP_NAME}'].{appId:appId}" --output tsv)
    # TENANT_ID=$(az ad sp list --show-mine --query "[?appDisplayName=='${SPN_APP_NAME}'].{t:appOwnerOrganizationId}" --output tsv)
    
    # Enterprise Application
    az ad app list --show-mine --query "[?displayName=='${SPN_APP_NAME}'].{objectId:id}"
    az ad app show --id $SPN_APP_ID
    
    # This is the unique ID of the Service Principal object associated with this application.
    # SPN_OBJECT_ID=$(az ad sp list --show-mine --query "[?displayName=='${SPN_APP_NAME}'].{objectId:id}" -o tsv)
    SPN_OBJECT_ID=$(az ad sp list --all --query "[?displayName=='${SPN_APP_NAME}'].{objectId:id}" -o tsv)
    
    az ad sp show --id $SPN_OBJECT_ID
    
    # the assignee is an appId
    az role assignment create --assignee $SPN_APP_ID --scope /subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RG_KV} --role contributor
    
    az role assignment create --assignee $SPN_OBJECT_ID --scope /subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RG_KV} --role contributor
    
    # https://learn.microsoft.com/en-us/azure/key-vault/general/rbac-guide?tabs=azure-cli#azure-built-in-roles-for-key-vault-data-plane-operations
    
    # "Key Vault Secrets User"
    az role assignment create --assignee $SPN_APP_ID --scope /subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RG_KV} --role 4633458b-17de-408a-b874-0445c86b69e6
    
    az role assignment create --assignee $SPN_OBJECT_ID --scope /subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RG_KV} --role 4633458b-17de-408a-b874-0445c86b69e6
    
    # "Key Vault Secrets Officer"
    az role assignment create --assignee $SPN_APP_ID --scope /subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RG_KV} --role b86a8fe4-44ce-4948-aee5-eccb2c155cd7
    
    az role assignment create --assignee $SPN_OBJECT_ID --scope /subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RG_KV} --role b86a8fe4-44ce-4948-aee5-eccb2c155cd7
    
    # "DNS Zone Contributor"
    # https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#dns-zone-contributor
    az role assignment create --assignee $SPN_APP_ID --scope /subscriptions/${SUBSCRIPTION_ID} --role befefa01-2a29-4197-83a8-272ff33ce314
    az role assignment create --assignee $SPN_OBJECT_ID --scope /subscriptions/${SUBSCRIPTION_ID} --role befefa01-2a29-4197-83a8-272ff33ce314
    
    # https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#virtual-machine-contributor
    # Virtual Machine Contributor has permission 'Microsoft.Network/publicIPAddresses/read'
    #az role assignment create --assignee $SPN_APP_ID --scope /subscriptions/${SUBSCRIPTION_ID} --role 9980e02c-c2be-4d73-94e8-173b1dc7cf3c
    #az role assignment create --assignee $SPN_OBJECT_ID --scope /subscriptions/${SUBSCRIPTION_ID} --role 9980e02c-c2be-4d73-94e8-173b1dc7cf3c
    
    # Network-contributor: https://learn.microsoft.com/en-us/azure/role-based-access-control/resource-provider-operations#microsoftnetwork
    az role assignment create --assignee $SPN_APP_ID --scope /subscriptions/${SUBSCRIPTION_ID} --role 4d97b98b-1d4f-4787-a291-c67834d212e7
    az role assignment create --assignee $SPN_OBJECT_ID --scope /subscriptions/${SUBSCRIPTION_ID} --role 4d97b98b-1d4f-4787-a291-c67834d212e7
    
    # https://learn.microsoft.com/en-us/azure/role-based-access-control/role-assignments-portal#prerequisites
# /!\ To assign Azure roles, you must have the Microsoft.Authorization/roleAssignments/write and Microsoft.Authorization/roleAssignments/delete permissions,
# such as User Access Administrator or Owner.
    az role assignment create --assignee $SPN_APP_ID --scope /subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RG_KV} --role Owner
    az role assignment create --assignee $SPN_APP_ID --scope /subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RG_APP} --role Owner
    
    az role assignment create --assignee $SPN_OBJECT_ID --scope /subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RG_KV} --role Owner
    az role assignment create --assignee $SPN_OBJECT_ID --scope /subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RG_APP} --role Owner
    

**The RBAC permission model is set on the KV. As a pre-requisite (see https://learn.microsoft.com/en-us/azure/role-based-access-control/role-assignments-portal#prerequisites), to assign Azure roles you must have the Microsoft.Authorization/roleAssignments/write and Microsoft.Authorization/roleAssignments/delete permissions, such as User Access Administrator or Owner.**

The “Key Vault Secrets User” built-in role can read secret contents, including the secret portion of a certificate with private key. It only works for key vaults that use the ‘Azure role-based access control’ permission model.

    Read :

Paste your service principal JSON object as a secret named AZURE_CREDENTIALS in your GH repo Settings / Security / Secrets and variables / Actions / Actions secrets / Repository secrets

    You can test your connection with CLI :

    az login --service-principal -u $SPN_APP_ID -p $SPN_PWD --tenant $TENANT_ID

    Add SUBSCRIPTION_ID, TENANT_ID, SPN_APP_ID and SPN_PWD as secrets to your GH repo Settings / Security / Secrets and variables / Actions / Actions secrets / Repository secrets
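
If you prefer the command line over the web UI, the GitHub CLI can set these repository secrets directly. This is a sketch assuming gh is installed and authenticated against your repo; spn.json is just an illustrative file name holding the JSON output of az ad sp create-for-rbac ... --sdk-auth:

gh secret set AZURE_CREDENTIALS < spn.json
gh secret set SUBSCRIPTION_ID --body "$SUBSCRIPTION_ID"
gh secret set TENANT_ID --body "$TENANT_ID"
gh secret set SPN_APP_ID --body "$SPN_APP_ID"
gh secret set SPN_PWD --body "$SPN_PWD"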

Finally, create a GH PAT “PKG_PAT” that can be used to publish packages and delete packages.

Your GitHub personal access token needs to have the workflow scope selected. You need at least the delete:packages and read:packages scopes to delete a package, and the contents: read and packages: write permissions to publish and download artifacts.

Create SSH keys, WITHOUT any passphrase (press Enter if prompted)

    # https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.resources/deployment-script-ssh-key-gen/new-key.sh
    export ssh_key=aksadm
    echo -e 'y' | ssh-keygen -t rsa -b 4096 -f ~/.ssh/$ssh_key -C "youremail@groland.grd" # -N $ssh_passphrase
    # test
    # ssh -i ~/.ssh/$ssh_key $admin_username@$network_interface_pub_ip

    Add $ssh_key & $ssh_key.pub as secrets SSH_PRV_KEY & SSH_PUB_KEY to your GH repo Settings / Security / Secrets and variables / Actions / Actions secrets / Repository secrets
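
The same can be done with the GitHub CLI (a sketch, assuming gh is installed and authenticated against your repo):

gh secret set SSH_PRV_KEY < ~/.ssh/$ssh_key
gh secret set SSH_PUB_KEY < ~/.ssh/$ssh_key.pub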

To avoid hitting the error below :

    "The subscription is not registered to use namespace 'Microsoft.KeyVault'. See https://aka.ms/rps-not-found for how to register subscriptions.\",\r\n    \"details\": [\r\n      ***\r\n        \"code\": \"MissingSubscriptionRegistration\"

Read the docs, then just run :

    az feature list --output table --namespace Microsoft.ContainerService
    az feature register --namespace "Microsoft.ContainerService" --name "AKS-GitOps"
    az feature register --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview"
    az feature register --namespace "Microsoft.ContainerService" --name "AKS-Dapr"
    az feature register --namespace "Microsoft.ContainerService" --name "EnableAzureKeyvaultSecretsProvider"
    az feature register --namespace "Microsoft.ContainerService" --name "AKS-AzureDefender"
    az feature register --namespace "Microsoft.ContainerService" --name "AKS-PrometheusAddonPreview" 
    az feature register --namespace "Microsoft.ContainerService" --name "AutoUpgradePreview"
    az feature register --namespace "Microsoft.ContainerService" --name "AKS-OMSAppMonitoring"
    az feature register --namespace "Microsoft.ContainerService" --name "ManagedCluster"
    az feature register --namespace "Microsoft.ContainerService" --name "AKS-AzurePolicyAutoApprove"
    az feature register --namespace "Microsoft.ContainerService" --name "FleetResourcePreview"
    
    az provider list --output table
    az provider list --query "[?registrationState=='Registered']" --output table
    az provider list --query "[?namespace=='Microsoft.KeyVault']" --output table
    az provider list --query "[?namespace=='Microsoft.OperationsManagement']" --output table
    
    az provider register --namespace Microsoft.KeyVault
    az provider register --namespace Microsoft.ContainerRegistry
    az provider register --namespace Microsoft.ContainerService
    az provider register --namespace Microsoft.OperationalInsights 
    az provider register --namespace Microsoft.DBforMySQL
    az provider register --namespace Microsoft.DBforPostgreSQL
    az provider register --namespace Microsoft.Compute 
    az provider register --namespace Microsoft.AppConfiguration       
    az provider register --namespace Microsoft.AppPlatform
    az provider register --namespace Microsoft.EventHub  
    az provider register --namespace Microsoft.Kubernetes 
    az provider register --namespace Microsoft.KubernetesConfiguration
    az provider register --namespace Microsoft.Kusto  
    az provider register --namespace Microsoft.ManagedIdentity
    az provider register --namespace Microsoft.Monitor
    az provider register --namespace Microsoft.OperationsManagement
    az provider register --namespace Microsoft.Network  
    az provider register --namespace Microsoft.ServiceBus
    az provider register --namespace Microsoft.Storage
    az provider register --namespace Microsoft.Subscription
    
    # https://learn.microsoft.com/en-us/azure/aks/cluster-extensions
    az extension add --name k8s-extension
    az extension update --name k8s-extension
    
    # https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/tutorial-use-gitops-flux2?
    az extension add -n k8s-configuration
    

    Read https://azure.github.io/azure-workload-identity/docs/installation/azwi.html

    Install Azure AD Workload Identity CLI

    AAD_WI_CLI_VERSION=1.0.0
    wget https://github.com/Azure/azure-workload-identity/releases/download/v$AAD_WI_CLI_VERSION/azwi-v$AAD_WI_CLI_VERSION-linux-amd64.tar.gz
    gunzip azwi-v$AAD_WI_CLI_VERSION-linux-amd64.tar.gz
    tar -xvf azwi-v$AAD_WI_CLI_VERSION-linux-amd64.tar
    ./azwi version
    

    Pipelines

    See GitHub Actions :


    Workflow Design

The workflow runs the steps in the following order :

    ├── Deploy the Azure Infra services workflow ./.github/workflows/deploy-iac.yml
    │   ├── Trigger the pre-req ./.github/workflows/deploy-iac.yml#L75
    │       ├── Create Azure Key Vault ./.github/workflows/deploy-iac-pre-req.yml#L108
    │       ├── Authorize local IP to access the Azure Key Vault ./.github/workflows/deploy-iac-pre-req.yml#L115
    │       ├── Create the secrets ./.github/workflows/deploy-iac-pre-req.yml#L121
    │       ├── Disable local IP access to the Key Vault ./.github/workflows/deploy-iac-pre-req.yml#L152
    │       ├── Deploy the pre-req ./.github/workflows/deploy-iac-pre-req.yml#L180
    │           ├── Create Log Analytics Workspace ./iac/bicep/pre-req.bicep#L68
    │           ├── Create appInsights  ./iac/bicep/pre-req.bicep#L68
    │           ├── Create ACR ./iac/bicep/pre-req.bicep#L104
    │           ├── Create Identities ./iac/bicep/pre-req.bicep#L124
    │           ├── Create VNet ./iac/bicep/pre-req.bicep#L135
    │           ├── Create roleAssignments ./iac/bicep/pre-req.bicep#L155
    │           ├── Create MySQL ./iac/bicep/pre-req.bicep#L174
    │   ├── Deploy AKS ./iac/bicep/main.bicep
    │       ├── Call AKS module ./iac/bicep/main.bicep#L95
    │       ├── Whitelist AKS Env. OutboundIP to KV and MySQL ./.github/workflows/deploy-iac.yml#L119
    │       ├── Call DB data loading Init ./.github/workflows/deploy-iac.yml#L154
    │       ├── Call Maven Build ./.github/workflows/deploy-iac.yml#L159
    │       ├── Maven Build ./.github/workflows/maven-build.yml#L128
    │           ├── Publish the Maven package ./.github/workflows/maven-build.yml#L176
    │           ├── Build image and push it to ACR ./.github/workflows/maven-build.yml#L241
    │       ├── Call Maven Build-UI ./.github/workflows/deploy-iac.yml#L166
    │           ├── Build image and push it to ACR ./.github/workflows/maven-build-ui.yml#L191
    │       ├── Deploy Backend Services ./.github/workflows/deploy-iac.yml#L185
    │           ├── Deploy Backend services calling ./.github/workflows/deploy-app-svc.yml
    │           ├── Deploy the UI calling ./.github/workflows/deploy-app-ui.yml
    

    You need to set your own param values in :

    env:
      APP_NAME: petcliaks
      LOCATION: westeurope # francecentral
      RG_KV: rg-iac-kv33 # RG where to deploy KV
      RG_APP: rg-iac-aks-petclinic-mic-srv # RG where to deploy the other Azure services: AKS, ACR, MySQL, etc.
      
      ACR_NAME: acrpetcliaks
    
      VNET_NAME: vnet-aks
      VNET_CIDR: 172.16.0.0/16
      AKS_SUBNET_CIDR: 172.16.1.0/24
      AKS_SUBNET_NAME: snet-aks
    
      START_IP_ADRESS: 172.16.1.0
      END_IP_ADRESS: 172.16.1.255
    
      MYSQL_SERVER_NAME: petcliaks
      MYSQL_DB_NAME: petclinic
      MYSQL_ADM_USR: mys_adm
      MYSQL_TIME_ZONE: Europe/Paris
      MYSQL_CHARACTER_SET: utf8
      MYSQL_PORT: 3306
    
      DEPLOY_TO_VNET: false
    
      KV_NAME: kv-petcliaks33 # The name of the KV, must be UNIQUE. A vault name must be between 3-24 alphanumeric characters
    
      # https://learn.microsoft.com/en-us/azure/key-vault/secrets/secrets-best-practices#secrets-rotation
      # Because secrets are sensitive to leakage or exposure, it's important to rotate them often, at least every 60 days. 
      # Expiry date in seconds since 1970-01-01T00:00:00Z. Ex: 1672444800 ==> 31/12/2022'
      SECRET_EXPIRY_DATE: 1703980800 # ==> 31/12/2023
  AZURE_CONTAINER_REGISTRY: acrpetcliaks # The name of the ACR, must be UNIQUE. The name must contain only alphanumeric characters, be globally unique, and between 5 and 50 characters in length.
  REGISTRY_URL: acrpetcliaks.azurecr.io  # set this to the URL of your registry
  REPOSITORY: petclinic                  # set this to your ACR repository
  PROJECT_NAME: petclinic                # set this to your project's name

  # ==== Azure storage to store Artifacts, values must be consistent with the ones in storage.bicep ====:
  AZ_STORAGE_NAME : stakspetcliaks # customize this
  AZ_BLOB_CONTAINER_NAME: petcliaks-blob # customize this

Once you commit and push your code update to your repo, it will trigger a Maven build which you can CANCEL from https://github.com/USERNAME/aks-java-petclinic-mic-srv/actions/workflows/maven-build.yml the first time you trigger the workflow; it will fail anyway because the ACR does not exist yet and the docker build will fail to push the images.
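
If you want to cancel that first run from the command line instead of the web UI, a sketch with the GitHub CLI (assuming gh is installed and authenticated):

gh run list --workflow maven-build.yml
gh run cancel <run-id>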

Note: the GH Hosted Runner / Ubuntu latest image already has the Azure CLI installed

    Deploy AKS and the petclinic microservices Apps with IaC

You can read the Bicep section, but you do not have to run it through the CLI; instead you can manually trigger the GitHub Action deploy-iac.yml, see the Workflow in the next section

    AKS has dependencies on services outside of that virtual network. For a list of these dependencies see the AKS doc

Troubleshoot: If the AKS cluster was provisioned in a FAILED state, try :

    az resource update --name $ClusterName --resource-group $RgName --resource-type Microsoft.ContainerService/managedClusters --debug
    az resource show --name $ClusterName --resource-group $RgName --resource-type Microsoft.ContainerService/managedClusters --debug

    Security

Secret Management

    Azure Key Vault integration is implemented through Spring Cloud for Azure

    Read :

The Config-server uses the config declared in the repo at https://github.com/ezYakaEagle442/aks-cfg-srv/blob/main/application.yml and uses a User-Assigned Managed Identity to be able to read secrets from Key Vault.

    If you face any issue, see the troubleshoot section

    Starting services locally without Docker

Quick local test just to verify that the jar files can be run (the routing will not work outside of a K8S cluster, and the apps will fail to start as soon as management port 8081 is already in use by the config server …) :

    /!\ IMPORTANT WARNING: projects must be built with -Denv=cloud EXCEPT for api-gateway

     mvn clean package -DskipTests -Denv=azure
    java -jar spring-petclinic-config-server\target\spring-petclinic-config-server-2.6.13.jar --server.port=8888
    java -jar spring-petclinic-admin-server\target\spring-petclinic-admin-server-2.6.13.jar --server.port=9090
    java -jar spring-petclinic-visits-service\target\spring-petclinic-visits-service-2.6.13.jar --server.port=8082 # --spring.profiles.active=docker
    java -jar spring-petclinic-vets-service\target\spring-petclinic-vets-service-2.6.13.jar --server.port=8083
    java -jar spring-petclinic-customers-service\target\spring-petclinic-customers-service-2.6.13.jar --server.port=8084
    java -jar spring-petclinic-api-gateway\target\spring-petclinic-api-gateway-2.6.13.jar --server.port=8085

    Note: tip to verify the dependencies

    mvn dependency:tree
    mvn dependency:analyze-duplicate

    To learn more about maven, read :

Every microservice is a Spring Boot application and can be started locally. Please note that supporting services (Config Server) must be started before any other application (Customers, Vets, Visits and API). Starting the Admin server is optional. If everything goes well, you can access the following services at the locations listed below:
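
With the ports used in the java -jar commands above, a local run would typically expose the services at:

http://localhost:8888 # config server
http://localhost:9090 # admin server
http://localhost:8082 # visits service
http://localhost:8083 # vets service
http://localhost:8084 # customers service
http://localhost:8085 # api gateway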

    The main branch uses an MS openjdk/jdk:11-mariner Docker base.

    #acr_usr=$(az deployment group show -g ${{ env.RG_APP }} -n ${{ env.AZURE_CONTAINER_REGISTRY }} --query properties.outputs.acrRegistryUsr.value | tr -d '"')
    #acr_pwd=$(az deployment group show -g ${{ env.RG_APP }} -n ${{ env.AZURE_CONTAINER_REGISTRY }} --query properties.outputs.acrRegistryPwd.value | tr -d '"')
    #az acr login --name ${{ env.REGISTRY_URL }} -u $acr_usr -p $acr_pwd
    
    set -euo pipefail
    access_token=$(az account get-access-token --query accessToken -o tsv)
    
    refresh_token=$(curl https://${{ env.REGISTRY_URL }}/oauth2/exchange -v -d "grant_type=access_token&service=${{ env.REGISTRY_URL }}&access_token=$access_token" | jq -r .refresh_token)
    
    refresh_token=$(curl https://acrpetcliaks.azurecr.io/oauth2/exchange -v -d "grant_type=access_token&service=acrpetcliaks.azurecr.io&access_token=$access_token" | jq -r .refresh_token)
    
    # docker login ${{ env.REGISTRY_URL }} -u 00000000-0000-0000-0000-000000000000 --password-stdin <<< "$refresh_token"
    
docker build --no-cache -t "petclinic-admin-server" -f "./docker/petclinic-admin-server/Dockerfile" .
    docker tag petclinic-admin-server acrpetcliaks.azurecr.io/petclinic/petclinic-admin-server
    az acr login --name acrpetcliaks.azurecr.io -u $acr_usr -p $acr_pwd
    az acr build --registry acrpetcliaks -g  rg-iac-aks-petclinic-mic-srv  -t petclinic/adm-test:test --file "./docker/petclinic-admin-server/Dockerfile" .
    docker push acrpetcliaks.azurecr.io/petclinic/petclinic-admin-server
    docker pull acrpetcliaks.azurecr.io/petclinic/petclinic-admin-server
    docker image ls

Note: the Docker files must be named Dockerfile. See Azure/azure-cli-extensions#5041

    Understanding the Spring Petclinic application

    See the presentation of the Spring Petclinic Framework version

A blog post introducing the Spring Petclinic Microservices (in French)

    You can then access petclinic here: http://localhost:8080/

    Spring Petclinic Microservices screenshot

    Architecture diagram of the Spring Petclinic Microservices

    Spring Petclinic Microservices architecture

    The UI code is located at spring-petclinic-api-gateway\src\main\resources\static\scripts

The Spring Zuul (Netflix Intelligent Routing) config at https://github.com/ezYakaEagle442/aks-cfg-srv/blob/main/api-gateway.yml has been deprecated and replaced by Spring Cloud Gateway.

    The Spring Cloud Gateway routing is configured at spring-petclinic-api-gateway/src/main/resources/application.yml

    The API Gateway Controller is located at spring-petclinic-api-gateway/src/main/java/org/springframework/samples/petclinic/api/boundary/web/ApiGatewayController.java

Note: The Spring Cloud Discovery Server is NOT deployed, as the underlying K8S/AKS discovery/DNS service is used instead. See :

    The K8S routing is configured in the Ingress resources at :

    • spring-petclinic-api-gateway\k8s\petclinic-ui-ingress.yaml
    • spring-petclinic-admin-server\k8s\petclinic-admin-server-ingress.yaml
    • spring-petclinic-config-server\k8s\petclinic-config-server-ingress.yaml
    • spring-petclinic-customers-service\k8s\petclinic-customer-ingress.yaml
    • spring-petclinic-vets-service\k8s\petclinic-vet-ingress.yaml
    • spring-petclinic-visits-service\k8s\petclinic-visits-ingress.yaml

    The Git repo URL used by Spring config is set in spring-petclinic-config-server/src/main/resources/application.yml

    If you want to know more about the Spring Boot Admin server, you might be interested in https://github.com/codecentric/spring-boot-admin

For learning purposes the App uses Key Vault to fetch secrets like the DB password, but it would be even better to use Passwordless Features: https://aka.ms/delete-passwords

    Understand the Spring Cloud Config

    Read https://learn.microsoft.com/en-us/azure/spring-apps/quickstart-setup-config-server?tabs=Azure-portal&pivots=programming-language-java

Spring Boot is a framework aimed at helping developers easily create and build stand-alone, production-grade Spring based Applications that you can “just run”.

    Spring Cloud Config provides server and client-side support for externalized configuration in a distributed system. With the Spring Cloud Config Server you have a central place to manage external properties for applications across all environments.

Spring Cloud Config Server is a centralized service that provides, over HTTP, all the applications’ configuration (name-value pairs or equivalent YAML content). The server is embeddable in a Spring Boot application by using the @EnableConfigServer annotation.

In other words, the Spring Cloud Config Server is simply a Spring Boot application, configured as a Spring Cloud Config Server, that is able to retrieve the properties from the configured property source. The property source can be a Git repository, SVN or a Consul service.

A properly configured Spring Boot application can take immediate advantage of the Spring Config Server. It also picks up some additional useful features related to Environment change events. Any Spring Boot application can easily be configured as a Spring Cloud Config client.
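
As a minimal sketch of what such a client configuration can look like for a local run (assuming the Config Server listens on port 8888 as in the commands above; the deployed services get the Config Server location from their own configuration instead):

spring:
  application:
    name: customers-service
  config:
    import: optional:configserver:http://localhost:8888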

    Containerize your Java applications

See the Azure doc. Each micro-service is containerized using a Dockerfile. Example: ./docker/petclinic-customers-service/Dockerfile

To learn how to build the container image, read the ACR doc

    Database configuration

In its default configuration, Petclinic uses an in-memory database (HSQLDB) which gets populated at startup with data. A similar setup is provided for MySQL in case a persistent database configuration is needed. The dependency for Connector/J, the MySQL JDBC driver, is already included in the pom.xml files.

Set the MySQL connection string

You need to reconfigure the MySQL connection string with your own settings (you can get it from the Azure portal / petcliaks-mysql-server / Connection strings / JDBC), in spring-petclinic-microservices-config/blob/main/application.yml :

    spring:
      config:
        activate:
          on-profile: mysql
      datasource:
        schema: classpath*:db/mysql/schema.sql
        data: classpath*:db/mysql/data.sql
        url: jdbc:mysql://petcliaks.mysql.database.azure.com:3306/petclinic?useSSL=true&requireSSL=true&enabledTLSProtocols=TLSv1.2&verifyServerCertificate=true
    

In fact the spring.datasource.password will be automatically injected from the KV secret SPRING-DATASOURCE-PASSWORD, using the config below in each micro-service; example for the Customers-Service: spring-petclinic-customers-service/src/main/resources/application.yml

    spring:
      cloud:
        azure:
          profile: # spring.cloud.azure.profile
            # subscription-id:
            tenant-id: ${AZURE_TENANT_ID}
          credential:
            managed-identity-enabled: true        
          keyvault:
            secret:
              enabled: true
              property-sources:
                - name: kv-cfg-XXX # KV Config for each App XXX
                  endpoint: ${SPRING_CLOUD_AZURE_KEY_VAULT_ENDPOINT}
                  credential:
                    managed-identity-enabled: true
                    client-id: ${XXXX_SVC_APP_IDENTITY_CLIENT_ID}
    ---
    

    You can check the DB connection with this sample project.

    Use the Spring ‘mysql’ profile

To use a MySQL database, you have to start 3 microservices (visits-service, customers-service and vets-service) with the mysql Spring profile. Add --spring.profiles.active=mysql as a program argument.
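
For example, reusing one of the jar commands above:

java -jar spring-petclinic-customers-service\target\spring-petclinic-customers-service-2.6.13.jar --server.port=8084 --spring.profiles.active=mysql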

In the application.yml of the [Configuration repository], set the initialization-mode to never (or ALWAYS).

If you are running the microservices with Docker, you have to add the mysql profile into the Dockerfile (docker/Dockerfile):

    ENV SPRING_PROFILES_ACTIVE docker,mysql
    

    All MySQL flexible-server parameters are set in the sql-load workflow called by the IaC deployment workflow

    Observability

    Read the Application Insights docs :

The config files are located in each micro-service at src/main/resources/applicationinsights.json. The Java agent is downloaded into the App container in /tmp/app; you can have a look at a Dockerfile, for example ./docker/petclinic-customers-service/Dockerfile

    By default, Application Insights Java 3.x expects the configuration file to be named applicationinsights.json and to be located in the same directory as applicationinsights-agent-3.x.x.jar.

    You can specify your own configuration file path by using one of these two options:

    • APPLICATIONINSIGHTS_CONFIGURATION_FILE environment variable
    • applicationinsights.configuration.file Java system property

    In our configuration, in the containers the applicationinsights.json is located at BOOT-INF/classes/applicationinsights.json so we must set APPLICATIONINSIGHTS_CONFIGURATION_FILE=BOOT-INF/classes/applicationinsights.json
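
For instance, this can be set with a single environment variable in the Dockerfile (a sketch; adapt it to your own Dockerfiles):

ENV APPLICATIONINSIGHTS_CONFIGURATION_FILE=BOOT-INF/classes/applicationinsights.json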

    Use the Petclinic application and make a few REST API calls

    Open the Petclinic application and try out a few tasks – view pet owners and their pets, vets, and schedule pet visits:

open http://petclinic.westeurope.cloudapp.azure.com/

    You can also use your browser or curl the REST API exposed by the Petclinic application. The admin REST API allows you to create/update/remove items in Pet Owners, Pets, Vets and Visits. You can run the following curl commands:

    URL ex:

    with Custom domains : http://appinnohandsonlab.com/#!/welcome

    curl -X GET http://petclinic.westeurope.cloudapp.azure.com/api/customer/owners
    curl -X GET http://petclinic.westeurope.cloudapp.azure.com/api/customer/owners/4
    curl -X GET http://petclinic.westeurope.cloudapp.azure.com/api/customer/owners/ 
    curl -X GET http://petclinic.westeurope.cloudapp.azure.com/api/customer/petTypes
    curl -X GET http://petclinic.westeurope.cloudapp.azure.com/api/customer/owners/3/pets/4
    curl -X GET http://petclinic.westeurope.cloudapp.azure.com/api/customer/owners/6/pets/8/
    curl -X GET http://petclinic.westeurope.cloudapp.azure.com/api/vet/vets
    curl -X GET http://petclinic.westeurope.cloudapp.azure.com/api/visit/owners/6/pets/8/visits
    curl -X GET http://petclinic.westeurope.cloudapp.azure.com/api/visit/owners/6/pets/8/visits

    Open Actuator endpoints for API Gateway and Customers Service apps

    Spring Boot includes a number of additional features to help you monitor and manage your application when you push it to production (Spring Boot Actuator: Production-ready Features). You can choose to manage and monitor your application by using HTTP endpoints or with JMX. Auditing, health, and metrics gathering can also be automatically applied to your application.

    Actuator endpoints let you monitor and interact with your application. By default, Spring Boot application exposes health and info endpoints to show arbitrary application info and health information. Apps in this project are pre-configured to expose all the Actuator endpoints.

    You can try them out by opening the following app actuator endpoints in a browser:

    http://petclinic.westeurope.cloudapp.azure.com
    
    open http://petclinic.westeurope.cloudapp.azure.com/manage/
    open http://petclinic.westeurope.cloudapp.azure.com/manage/env
    open http://petclinic.westeurope.cloudapp.azure.com/manage/configprops
    
    open http://petclinic.westeurope.cloudapp.azure.com/api/customer/manage
    open http://petclinic.westeurope.cloudapp.azure.com/api/customer/manage/env
    open http://petclinic.westeurope.cloudapp.azure.com/api/customer/manage/configprops
    
Monitor Petclinic logs and metrics in Azure Log Analytics

To get the App logs :

    LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`az monitor log-analytics workspace show -n $LOG_ANALYTICS_WORKSPACE -g $RESOURCE_GROUP --query customerId  --out tsv`
    
    az monitor log-analytics query \
      --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
      --analytics-query "ContainerLog | where LogEntry has 'error' |take 100" \
      --out table
    

    Kusto Query with Log Analytics

Open the Log Analytics workspace that you created – you can find it in the same Resource Group where you created the AKS cluster.

In the Log Analytics page, select the Logs blade and run any of the sample queries supplied below for AKS.

    Type and run the following Kusto query to see all the logs from the AKS Service :

    // https://learn.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-log-query
    let startTimestamp = ago(1h);
    KubePodInventory
    | where TimeGenerated > startTimestamp
    | project ContainerID, PodName=Name, Namespace
    | where PodName contains "service" and Namespace startswith "petclinic"
    | distinct ContainerID, PodName
    | join
    (
        ContainerLog
        | where TimeGenerated > startTimestamp
    )
    on ContainerID
    // at this point before the next pipe, columns from both tables are available to be "projected". Due to both
    // tables having a "Name" column, we assign an alias as PodName to one column which we actually want
    | project TimeGenerated, PodName, LogEntrySource, LogEntry
    | summarize by TimeGenerated, LogEntry
    | order by TimeGenerated desc
    
    
    let FindString = "error";//Please update term  you would like to find in LogEntry here
    ContainerLog 
    | where LogEntry has FindString 
    | take 100

    Custom metrics

Spring Boot registers a large number of core metrics: JVM, CPU, Tomcat, Logback… The Spring Boot auto-configuration enables the instrumentation of requests handled by Spring MVC. The three REST controllers OwnerResource, PetResource and VisitResource have been instrumented with the @Timed Micrometer annotation at class level (a minimal sketch follows the list below).

    • customers-service application has the following custom metrics enabled:
      • @Timed: petclinic.owner
      • @Timed: petclinic.pet
    • visits-service application has the following custom metrics enabled:
      • @Timed: petclinic.visit
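
A minimal sketch of what such an instrumented controller looks like (illustrative only; the real resources live in the customers-service and visits-service modules):

import io.micrometer.core.annotation.Timed;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Class-level @Timed publishes a "petclinic.owner" timer for every request
// handled by this controller.
@RestController
@Timed("petclinic.owner")
class OwnerResource {

    @GetMapping("/owners/{ownerId}")
    public String findOwner(@PathVariable int ownerId) {
        return "owner-" + ownerId;
    }
}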

    Scaling

    TODO ! see https://github.com/MicrosoftLearning/Deploying-and-Running-Java-Applications-in-Azure-Spring-Apps/blob/master/Instructions/Labs/LAB_05_implement_messaging_asc.md

    Resiliency

    Circuit breakers TODO !

    Troubleshoot

    If you face this error :

    Caused by: java.sql.SQLException: Connections using insecure transport are prohibited while --require_secure_transport=ON.

It might be related to the Spring Config configured at https://github.com/Azure-Samples/spring-petclinic-microservices-config/blob/master/application.yml, whose on-profile: mysql section sets the datasource url to jdbc:mysql://${MYSQL_SERVER_FULL_NAME}:3306/${MYSQL_DATABASE_NAME}?useSSL=false

Check the MySQL connector doc. Your JDBC URL should look like one of these, for instance:
url: jdbc:mysql://localhost:3306/petclinic?useSSL=false
url: jdbc:mysql://${MYSQL_SERVER_FULL_NAME}:3306/${MYSQL_DATABASE_NAME}?useSSL=true
url: jdbc:mysql://petclinic-mysql-server.mysql.database.azure.com:3306/petclinic?useSSL=true
url: jdbc:mysql://petclinic-mysql-server.mysql.database.azure.com:3306/petclinic?useSSL=true&requireSSL=true&enabledTLSProtocols=TLSv1.2&verifyServerCertificate=true

    If you face this Netty SSL Handshake issue :

reactor.core.Exceptions$ReactiveException: io.netty.handler.ssl.SslHandshakeTimeoutException: handshake timed out after 10000ms

    It means that you may need to upgrade your Spring Boot version to the latest one… See netty/netty#12343

    If you face this issue :

    error Caused by: java.net.MalformedURLException: no protocol: ${SPRING_CLOUD_AZURE_KEY_VAULT_ENDPOINT}

It means that the api-gateway project had been built with mvn -B clean package --file pom.xml -DskipTests -Denv=cloud. This sets env=cloud in the parent POM, which then injects the spring-cloud-azure-starter-keyvault-secrets dependency into the POM; it looks like even just having such a dependency causes the runtime to look for ${SPRING_CLOUD_AZURE_KEY_VAULT_ENDPOINT}

    If you face this issue :

    Spring MVC found on classpath, which is incompatible with Spring Cloud Gateway
    Please set spring.main.web-application-type=reactive or remove spring-boot-starter-web dependency.

See: https://cloud.spring.io/spring-cloud-gateway/reference/html/#gateway-starter

spring-cloud-starter-netflix-eureka-server depends on spring-boot-starter-web, so you would need to remove the dependency on spring-boot-starter-web in the api-gateway module, for instance with the exclusion sketched below.

    check with : mvn dependency:tree

    mvn dependency:tree | grep spring-boot-starter-web
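
One way to do that is a Maven exclusion on the dependency that pulls it in (a sketch; adjust it to the actual dependency declared in your pom.xml):

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
    </exclusion>
  </exclusions>
</dependency>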

    About How to use Env. variable in Spring Boot, see :

    Key Vault troubleshoot with USER-Assigned MI

    https://learn.microsoft.com/en-us/azure/spring-apps/tutorial-managed-identities-key-vault?tabs=user-assigned-managed-identity Fast-Track for Azure OpenLab aka Java OpenHack uses SYSTEM-Assigned MI

    The Azure SDK API change is summarized at Issue #28310

Key Vault integration works easily when :

• You use a SYSTEM-Assigned MI, because then in the config used by the Config-server you do NOT need to specify the client-id
• You use 1 & only 1 USER-Assigned MI for ALL your Apps/Micro-services; this is not a good practice from a security perspective, as it is safer to assign 1 Identity to each App

When you use USER-Assigned MI, assigning 1 Identity to each App (see one App in Bicep), then in the Config used by the Config-server you would declare as many property-sources as the number of micro-services, setting the client-id with each App Id (using Env. Var. set in the GH Workflow) :

      keyvault:
        secret:
          enabled: true
          property-source-enabled: true
          property-sources:
            - name: kv-cfg-vets # KV Config for each App Vets-Service
              endpoint: ${SPRING_CLOUD_AZURE_KEY_VAULT_ENDPOINT}
              credential:
                managed-identity-enabled: true
                client-id: ${VETS_SVC_APP_IDENTITY_CLIENT_ID}
              #  client-secret: ${AZURE_CLIENT_SECRET} for SPN not for MI
              # profile:
              #  tenant-id: ${SPRING_CLOUD_AZURE_TENANT_ID}
            - name: kv-cfg-visits # KV Config for each App Visits-Service
              endpoint: ${SPRING_CLOUD_AZURE_KEY_VAULT_ENDPOINT}
              credential:
                managed-identity-enabled: true
                client-id: ${VISITS_SVC_APP_IDENTITY_CLIENT_ID}
            - name: kv-cfg-customers # KV Config for each App Customers-Service
              endpoint: ${SPRING_CLOUD_AZURE_KEY_VAULT_ENDPOINT}
              credential:
                managed-identity-enabled: true
                client-id: ${CUSTOMERS_SVC_APP_IDENTITY_CLIENT_ID}
    

As a consequence this initially failed: each App uses the above Config and tried to fetch KV secrets from the other Apps' property-sources, which was not allowed since each App was assigned only 1 of the 4 Identities.

    The solution is to remove all the above config from the Config repo and to add it instead in each App in \src\main\resources\application.yaml.

    Ex for the vets-service, 1 & only 1 property-source is declared using 1 client-id only ${VETS_SVC_APP_IDENTITY_CLIENT_ID} :

    spring:
      cloud:
        azure:    
          #profile: # spring.cloud.azure.profile
            # subscription-id:
            # tenant-id: ${SPRING_CLOUD_AZURE_TENANT_ID}
          #credential:
            #managed-identity-enabled: true        
          keyvault:
            secret:
              enabled: true
              property-source-enabled: true
              # endpoint: ${SPRING_CLOUD_AZURE_KEY_VAULT_ENDPOINT}
              property-sources:
                - name: kv-cfg-vets # KV Config for each App Vets-Service
                  endpoint: ${SPRING_CLOUD_AZURE_KEY_VAULT_ENDPOINT}
                  credential:
                    managed-identity-enabled: true
                    client-id: ${VETS_SVC_APP_IDENTITY_CLIENT_ID}
                  #  client-secret: ${AZURE_CLIENT_SECRET} for SPN not for MI
                  # profile:
                  #  tenant-id: ${SPRING_CLOUD_AZURE_TENANT_ID}
      profiles:
        active: mysql    
    

    Contributing

The issue tracker is the preferred channel for bug reports, feature requests and submitting pull requests.

    For pull requests, editor preferences are available in the editor config for easy use in common text editors. Read more and download plugins at http://editorconfig.org.

    Credits

    https://github.com/ezYakaEagle442/azure-spring-apps-petclinic-mic-srv has been forked from https://github.com/Azure-Samples/spring-petclinic-microservices, itself already forked from https://github.com/spring-petclinic/spring-petclinic-microservices

    Note regarding GitHub Forks

It is not possible to fork a repository twice using the same user account. However, you can duplicate a repository

    This repo https://github.com/ezYakaEagle442/aks-java-petclinic-mic-srv has been duplicated from https://github.com/spring-petclinic/spring-petclinic-microservices

    Visit original content creator repository https://github.com/ezYakaEagle442/aks-java-petclinic-mic-srv
  • PlotNeuralNet

    PlotNeuralNet

    DOI

    PlotNeuralNet is a Python package that provides tools to generate high-quality neural network architecture diagrams for research papers, presentations, and reports. It leverages LaTeX and Python for seamless integration into scientific workflows.

    This package is based on the original PlotNeuralNet by HarisIqbal88, with improvements for better usability and a modular package structure.


    Features

    • Programmatically generate neural network diagrams using Python.
    • Predefined layer types (e.g., Conv, Pool, SoftMax).
    • Easily extendable for custom layer shapes.
    • Pre-built LaTeX templates for popular architectures like AlexNet, FCN, and HED.
    • Fully structured as a Python package for streamlined integration into projects.

    Getting Started

    Installation

    1. Clone the repository:

      git clone https://github.com/<your-username>/PlotNeuralNet.git
      cd PlotNeuralNet
    2. Install the package:

      pip install .
    3. Verify the installation:

      import PlotNeuralNet
      print("PlotNeuralNet installed successfully!")

    Usage

    The package is organized to simplify the creation of diagrams. It includes Python modules (pycore and pyexamples) and LaTeX resources (layers).

    1. Python Usage

    Define an Architecture

    You can use the Python API to define your architecture programmatically. For example:

    from PlotNeuralNet.pycore import tikzeng
    from PlotNeuralNet.pycore.blocks import block_2ConvPool, block_Unconv
    
    # Define architecture
    arch = [
        tikzeng.to_head('..'),
        tikzeng.to_cor(),
        tikzeng.to_begin(),
    
        # Input image
        tikzeng.to_input('../examples/fcn8s/cats.jpg'),
    
        # Encoder
        *block_2ConvPool(name='b1', botton='input', top='b2', s_filer=256, n_filer=64),
        *block_2ConvPool(name='b2', botton='b2', top='b3', s_filer=128, n_filer=128),
    
        # Decoder
        *block_Unconv(name='b4', botton='b3', top='output', s_filer=64, n_filer=32),
    
        # Output layer
        tikzeng.to_ConvSoftMax(name='softmax', offset="(1,0,0)", to="(output-east)", width=1, height=30, depth=30),
        tikzeng.to_end(),
    ]
    
    # Generate the architecture diagram
    def main():
        tikzeng.to_generate(arch, "my_architecture.tex")
    
    if __name__ == "__main__":
        main()

    Compile and View the Diagram

    Run the Python script:

    python my_architecture.py

    Compile the .tex file with:

    bash ../tikzmake.sh my_architecture

    2. LaTeX Usage

    You can directly modify .tex files in the examples directory, such as examples/FCN-8 or examples/HED. Each .tex file demonstrates how to use LaTeX for defining architectures.

    To compile a .tex file, use:

    pdflatex <file>.tex

    3. Access Predefined Resources

    The package structure includes predefined resources for easy reuse:

    LaTeX Resources

    • Available in the PlotNeuralNet/layers/ directory.
    • Example LaTeX layer definitions:
      \input{layers/Box.sty}

    Examples

    • Predefined architectures like FCN, HED, AlexNet are in PlotNeuralNet/examples/.
    • Modify these examples to fit your use case.

    Python Scripts

    • Python examples for generating diagrams programmatically are in PlotNeuralNet/pyexamples/.
    • Example usage:
      python PlotNeuralNet/pyexamples/unet.py

    Package Structure

    The package is organized as follows:

    PlotNeuralNet/
    ├── LICENSE          # License file
    ├── MANIFEST.in      # File inclusion rules
    ├── README.md        # Documentation
    ├── setup.py         # Installation script
    ├── PlotNeuralNet/   # Main package directory
    │   ├── __init__.py  # Package initializer
    │   ├── pycore/      # Core Python modules
    │   ├── layers/      # LaTeX resources
    │   ├── examples/    # Predefined architectures in LaTeX
    │   ├── pyexamples/  # Python examples
    ├── dist/            # Build artifacts (after running setup.py)
    ├── build/           # Temporary build files
    

    Advanced Features

    1. Custom Layers

• Extend pycore.blocks or create your own block definitions to support custom layers (see the sketch after this list).
    2. Batch Processing

      • Use Python scripts to generate multiple architectures programmatically.
    3. Predefined Functions

      • block_2ConvPool and block_Unconv simplify common layer patterns.
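
As an illustration of point 1, a hypothetical custom block composed from the low-level tikzeng primitives (the names and dimensions below are arbitrary):

from PlotNeuralNet.pycore import tikzeng

def block_ConvPoolCustom(name, botton, top, s_filer=256, n_filer=64, offset="(1,0,0)"):
    """Hypothetical custom block: one Conv layer followed by a Pool layer."""
    return [
        tikzeng.to_Conv(name=f"{name}_conv", s_filer=s_filer, n_filer=n_filer,
                        offset=offset, to=f"({botton}-east)", width=2, height=40, depth=40),
        tikzeng.to_Pool(name=top, offset="(0,0,0)", to=f"({name}_conv-east)",
                        width=1, height=32, depth=32),
        tikzeng.to_connection(botton, f"{name}_conv"),
    ]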

    Acknowledgments

    This package is based on the original PlotNeuralNet by HarisIqbal88 and licensed under the MIT License.


    License

    This project is licensed under the MIT License. See the LICENSE file for more details.

    Visit original content creator repository https://github.com/kgruiz/PlotNeuralNet
  • commitlinter-maven-plugin

    Code Status

    Maven Central License Javadocs CircleCI codecov

    Usage

    This plugin lints your git commit message according to the rules you defined.
It basically reads the commit message from the git repository, matches it against the Regex you provided, then lints each capture group according to your rules.

    <plugin>
      <groupId>ga.rugal.maven</groupId>
      <artifactId>commitlinter-maven-plugin</artifactId>
      <version>THE-VERSION-YOU-LIKE</version>
    </plugin>

    Then run command:

    mvn commitlinter:validate

This will report nothing as we haven’t configured any linting rules.

    Show case

    asciicast

    Parameters

    Parameter Type Description Default
    captureGroups CaptureGroup[] List of CaptureGroups []
    captureGroup.caseFormat enum The case format we want to lint NONE
    captureGroup.max Integer The maximum length of this capture group Integer.MAX
    captureGroup.min Integer The minimum length of this capture group 0
    captureGroup.tense enum The tense of the initial word of this capture group NONE
    failOnError Boolean Whether to fail maven build on linting error false
    gitFolder String The git repository folder .git
    head String The pointer of git HEAD
    matchPattern Regex The regex to match commit message (.*)
    skip Boolean Whether to skip linting false
    testCommitMessage String The commit message to test with “”

    caseFormat

    case sample
    UPPERCASE THIS IS UPPER CASE/THIS_IS_UPPER_CASE_TOO
    LOWERCASE this is lower case/this_is_lower_case_too
    UPPERCAMELCASE ThisIsUpperCamelCase
    LOWERCAMELCASE thisIsLowerCamelCase
    KEBABCASE this-is-kebab-case
    SNAKECASE this_is_snake_case
    SENTENCECASE This is sentence case
    NONE ANY_case-you Like

    tense

    case sample
    PRESENT add new feature/create a function
    PAST added new feature/created a function
    THIRD_PARTY adds new feature/creates a function
    NONE any format you like

    Simple Example

    Please always make sure to wrap the capture group with () so the Regex matcher can capture it.

    With Basic Configuration

    <plugin>
      <groupId>ga.rugal.maven</groupId>
      <artifactId>commitlinter-maven-plugin</artifactId>
      <version>THE-VERSION-YOU-LIKE</version>
      <configuration>
        <matchPattern>([\w\s]+-\d+:\s)(.*)</matchPattern>
        <failOnError>true</failOnError>
        <captureGroups>
          <captureGroup>
            <max>10</max>
            <min>2</min>
            <caseFormat>LOWERCASE</caseFormat>
          </captureGroup>
          <captureGroup>
            <max>20</max>
            <tense>PRESENT</tense>
            <caseFormat>LOWERCASE</caseFormat>
          </captureGroup>
        </captureGroups>
      </configuration>
    </plugin>

    This configuration will match the git commit message with Regex, then lint them with the rules defined above.
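
For instance, with that pattern a commit message along these lines should satisfy both capture group rules (group 1 is lowercase and within 2-10 characters, group 2 is lowercase, present tense and at most 20 characters):

fix-1: add lint rule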

    Bind With Lifecycle

    This will bind validate goal in validate phase of Maven lifecycle.

    <plugin>
      <groupId>ga.rugal.maven</groupId>
      <artifactId>commitlinter-maven-plugin</artifactId>
      <version>THE-VERSION-YOU-LIKE</version>
      <executions>
        <execution>
          <id>validate</id>
          <phase>validate</phase>
          <configuration>
            <matchPattern>([\w\s]+-\d+:\s)(.*)</matchPattern>
            <failOnError>true</failOnError>
            <captureGroups>
              <captureGroup>
                <caseFormat>LOWERCASE</caseFormat>
              </captureGroup>
              <captureGroup>
                <caseFormat>LOWERCASE</caseFormat>
              </captureGroup>
            </captureGroups>
          </configuration>
          <goals>
            <goal>validate</goal>
          </goals>
        </execution>
      </executions>
    </plugin>

    Credit

    • The creation of this plugin is inspired by commitlint
    Visit original content creator repository https://github.com/Rugal/commitlinter-maven-plugin
  • tss-lib

    Multi-Party Threshold Signature Scheme

    MIT licensed GoDoc Go Report Card

    Permissively MIT Licensed.

    Note! This is a library for developers. You may find a TSS tool that you can use with the Binance Chain CLI here.

    Introduction

    This is an implementation of multi-party {t,n}-threshold ECDSA (Elliptic Curve Digital Signature Algorithm) based on Gennaro and Goldfeder CCS 2018 1 and EdDSA (Edwards-curve Digital Signature Algorithm) following a similar approach.

    This library includes three protocols:

    • Key Generation for creating secret shares with no trusted dealer (“keygen”).
    • Signing for using the secret shares to generate a signature (“signing”).
    • Dynamic Groups to change the group of participants while keeping the secret (“resharing”).

    ⚠️ Do not miss these important notes on implementing this library securely

    Rationale

    ECDSA is used extensively for crypto-currencies such as Bitcoin, Ethereum (secp256k1 curve), NEO (NIST P-256 curve) and many more.

    EdDSA is used extensively for crypto-currencies such as Cardano, Aeternity, Stellar Lumens and many more.

    For such currencies this technique may be used to create crypto wallets where multiple parties must collaborate to sign transactions. See MultiSig Use Cases

    One secret share per key/address is stored locally by each participant and these are kept safe by the protocol – they are never revealed to others at any time. Moreover, there is no trusted dealer of the shares.

    In contrast to MultiSig solutions, transactions produced by TSS preserve the privacy of the signers by not revealing which t+1 participants were involved in their signing.

    There is also a performance bonus in that blockchain nodes may check the validity of a signature without any extra MultiSig logic or processing.

    Usage

    You should start by creating an instance of a LocalParty and giving it the arguments that it needs.

    The LocalParty that you use should be from the keygen, signing or resharing package depending on what you want to do.

    Setup

    // When using the keygen party it is recommended that you pre-compute the "safe primes" and Paillier secret beforehand because this can take some time.
    // This code will generate those parameters using a concurrency limit equal to the number of available CPU cores.
    preParams, _ := keygen.GeneratePreParams(1 * time.Minute)
    
    // Create a `*PartyID` for each participating peer on the network (you should call `tss.NewPartyID` for each one)
    parties := tss.SortPartyIDs(getParticipantPartyIDs())
    
    // Set up the parameters
    // Note: The `id` and `moniker` fields are for convenience to allow you to easily track participants.
    // The `id` should be a unique string representing this party in the network and `moniker` can be anything (even left blank).
    // The `uniqueKey` is a unique identifying key for this peer (such as its p2p public key) as a big.Int.
    thisParty := tss.NewPartyID(id, moniker, uniqueKey)
    ctx := tss.NewPeerContext(parties)
    
    // Select an elliptic curve
    // use ECDSA
    curve := tss.S256()
    // or use EdDSA
    // curve := tss.Edwards()
    
    params := tss.NewParameters(curve, ctx, thisParty, len(parties), threshold)
    
    // You should keep a local mapping of `id` strings to `*PartyID` instances so that an incoming message can have its origin party's `*PartyID` recovered for passing to `UpdateFromBytes` (see below)
    partyIDMap := make(map[string]*tss.PartyID)
    for _, id := range parties {
        partyIDMap[id.Id] = id
    }

    Keygen

    Use the keygen.LocalParty for the keygen protocol. The save data you receive through the endCh upon completion of the protocol should be persisted to secure storage.

    party := keygen.NewLocalParty(params, outCh, endCh, preParams) // Omit the last arg to compute the pre-params in round 1
    go func() {
        err := party.Start()
        // handle err ...
    }()

    Signing

    Use the signing.LocalParty for signing and provide it with a message to sign. It requires the key data obtained from the keygen protocol. The signature will be sent through the endCh once completed.

    Please note that t+1 signers are required to sign a message and for optimal usage no more than this should be involved. Each signer should have the same view of who the t+1 signers are.

    party := signing.NewLocalParty(message, params, ourKeyData, outCh, endCh)
    go func() {
        err := party.Start()
        // handle err ...
    }()

    Re-Sharing

    Use the resharing.LocalParty to re-distribute the secret shares. The save data received through the endCh should overwrite the existing key data in storage, or write new data if the party is receiving a new share.

    Please note that ReSharingParameters is used to give this Party more context about the re-sharing that should be carried out.

    party := resharing.NewLocalParty(params, ourKeyData, outCh, endCh)
    go func() {
        err := party.Start()
        // handle err ...
    }()

    ⚠️ During re-sharing the key data may be modified during the rounds. Do not ever overwrite any data saved on disk until the final struct has been received through the end channel.

    Messaging

    In these examples the outCh will collect outgoing messages from the party and the endCh will receive save data or a signature when the protocol is complete.

    During the protocol you should provide the party with updates received from other participating parties on the network.

    A Party has two thread-safe methods on it for receiving updates.

    // The main entry point when updating a party's state from the wire
    UpdateFromBytes(wireBytes []byte, from *tss.PartyID, isBroadcast bool) (ok bool, err *tss.Error)
    // You may use this entry point to update a party's state when running locally or in tests
    Update(msg tss.ParsedMessage) (ok bool, err *tss.Error)

    And a tss.Message has the following two methods for converting messages to data for the wire:

    // Returns the encoded message bytes to send over the wire along with routing information
    WireBytes() ([]byte, *tss.MessageRouting, error)
    // Returns the protobuf wrapper message struct, used only in some exceptional scenarios (e.g. mobile apps)
    WireMsg() *tss.MessageWrapper

    In a typical use case, it is expected that a transport implementation will consume message bytes via the out channel of the local Party, send them to the destination(s) specified in the result of msg.GetTo(), and pass them to UpdateFromBytes on the receiving end.

    This way there is no need to deal with Marshal/Unmarshalling Protocol Buffers to implement a transport.
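
    As a rough sketch (not part of the library), a transport loop might look like the following; it assumes the To and IsBroadcast fields on tss.MessageRouting, and send and broadcast stand in for your own networking layer:

    go func() {
        for msg := range outCh {
            wireBytes, routing, err := msg.WireBytes()
            if err != nil {
                // handle err ...
                continue
            }
            if routing.IsBroadcast {
                broadcast(wireBytes) // deliver to every other party
            } else {
                for _, dest := range routing.To {
                    send(dest, wireBytes) // point-to-point delivery
                }
            }
        }
    }()
    
    // Receiving side: recover the sender's *tss.PartyID and update the local party.
    // ok, err := party.UpdateFromBytes(wireBytes, partyIDMap[senderID], isBroadcast)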

    Changes to ECDSA Preparams in v2.0

    Two fields, PaillierSK.P and PaillierSK.Q, were added in version 2.0. They are used to generate Paillier key proofs. Key vaults generated by versions before 2.0 need to be regenerated (via resharing) so that the preparams contain the newly required fields.

    How to use this securely

    ⚠️ This section is important. Be sure to read it!

    The transport for messaging is left to the application layer and is not provided by this library. Each one of the following paragraphs should be read and followed carefully as it is crucial that you implement a secure transport to ensure safety of the protocol.

    When you build a transport, it should offer a broadcast channel as well as point-to-point channels connecting every pair of parties. Your transport should also employ suitable end-to-end encryption (TLS with an AEAD cipher is recommended) between parties to ensure that a party can only read the messages sent to it.

    Within your transport, each message should be wrapped with a session ID that is unique to a single run of the keygen, signing or re-sharing rounds. This session ID should be agreed upon out-of-band and known only by the participating parties before the rounds begin. Upon receiving any message, your program should make sure that the received session ID matches the one that was agreed upon at the start.
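
    As an illustration only (the Envelope type and accept helper below are hypothetical, not part of tss-lib), a transport could wrap and check each wire message like this:

    // Hypothetical wrapper carrying the session ID agreed out-of-band
    type Envelope struct {
        SessionID   string // agreed by all participants before the rounds begin
        IsBroadcast bool
        Payload     []byte // the tss wire bytes
    }
    
    func accept(env Envelope, expectedSessionID string) bool {
        // Reject any message that does not belong to the current run
        return env.SessionID == expectedSessionID
    }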

    Additionally, there should be a mechanism in your transport to allow for “reliable broadcasts”, meaning parties can broadcast a message to other parties such that it’s guaranteed that each one receives the same message. There are several examples of algorithms online that do this by sharing and comparing hashes of received messages.

    Timeouts and errors should be handled by your application. The method WaitingFor may be called on a Party to get the set of other parties that it is still waiting for messages from. You may also get the set of culprit parties that caused an error from a *tss.Error.
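
    For example, assuming the Culprits accessor on *tss.Error as in recent versions of the library:

    ok, err := party.UpdateFromBytes(wireBytes, from, isBroadcast)
    if !ok {
        log.Printf("update failed, culprits: %v", err.Culprits())
    }
    
    // Which parties is this Party still waiting to hear from?
    log.Printf("still waiting for: %v", party.WaitingFor())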

    Security Audit

    A full review of this library was carried out by Kudelski Security and their final report was made available in October 2019. A copy of this report, audit-binance-tss-lib-final-20191018.pdf, may be found in the v1.0.0 release notes of this repository.

    References

    [1] https://eprint.iacr.org/2019/114.pdf

    Visit original content creator repository https://github.com/bnb-chain/tss-lib
  • aliesce

    aliesce

    Write, save and run scripts in multiple languages from a single source file.

    Allows for granular, one-run project generation and transformation.

    Why?

    For smoother development of related code, keeping the source as closely collocated as possible, or for learning by coding comparatively across languages.

    How?

    By preceding each script in the source file with a single line, called a tag line, which can hold values used to save and run the script. These values might be the output file extension and the name of the executable. For example, ### exs elixir could be the tag line to save and run a simple Elixir script.

    Source file setup

    Create a file to hold the scripts. Give it any name, and any file extension or none. Use the current default name – ‘src.txt’ – to avoid passing an argument later.

    As you add each script to the file, insert above it a tag line starting by default with ###. A tag line might include the following elements separated by one or more spaces:

    • first, the file extension for that language, or the full output filename including extension, or the full output path including directory and extension
    • next, the command to be run, if any, e.g. the program to be used to run the file as well as any arguments to pass to that program – note that the path to the output file is added as the final argument by default

    For example, a possible tag line with both of these elements and the corresponding script in Elixir:

    ### exs elixir -r setup
    
    IO.puts("Up and running...")
    

    This tells aliesce to save the script below the tag line in a file with the exs extension, then run that file with the elixir command, applying one option, to require a file named ‘setup’. For convenience, this required file could be generated as part of the same run from an earlier script in the source.

    For alternatives to this tag line content, see There’s more… below.

    For a source file template and a means of appending scripts written in other files via the command line, see Options.

    Running aliesce

    If aliesce is compiled and ready to go (see Getting started below), run the aliesce command, adding the source file path if not the default.

    For example, for a source file named only ‘src’:

    aliesce src

    The script files are saved and run in order of appearance in the source file.

    There’s more…

    Specifying paths

    The stem of the output filename will be the stem of the source filename, i.e. ‘src’ by default. The file is saved by default to a folder in the current directory named scripts, which is created if not present. This default directory can be overridden via the command line (see Options below).

    For an output file named ‘script.exs’, the following would be used:

    ### script.exs elixir -r setup
    

    For an output directory named ‘elixir’ holding ‘script.exs’:

    ### elixir/script.exs elixir -r setup
    

    For a subdirectory within the default or overridden output directory, a placeholder can be used, by default >. For an output path of ‘scripts/elixir/script.exs’, i.e. with the default output directory name and the subdirectory and script named as above:

    ### >/elixir/script.exs elixir -r setup
    

    Extending commands

    For a command in which the path to the file is not the last argument, e.g. when piping to another program, a placeholder can be used, by default ><. The whole command is then run via the default program-flag pair bash -c. For a command of bash -c "elixir -r setup scripts/src.exs | sort":

    ### exs elixir -r setup >< | sort
    

    The output path of a different script can be selected by using its number in the placeholder. For the output path of script no. 1, rather than the fixed ‘setup’:

    ### exs elixir -r >1< >< | sort
    

    Avoiding stages

    To avoid a script being saved and run, simply include the ! signal as a tag line element, before the extension or full output filename or path:

    ### ! script.exs elixir -r setup
    

    To save the script but avoid the run stage, include the ! signal as an element after the extension or full output filename or path but before the command to run the code:

    ### script.exs ! elixir -r setup
    

    Alternatively, a specific subset of scripts can be included (see Options below), to avoid the need to add tag line elements to others.

    Labelling scripts

    To add a label to a script, include it after the tag head and follow it with the tag tail, which is # by default:

    ### script label # script.exs elixir -r setup
    

    Spacing between tag head and tail is retained for list entries (see Options below).

    Options

    The following can be passed to aliesce before any source file path; a combined example follows the list:

    • --dest / -d DIRNAME, to set the default output dirname (‘scripts’) to DIRNAME
    • --list / -l, to print for each script in the source (def. ‘src.txt’) its number and tag line content, without saving or running
    • --only / -o SUBSET, to include only the scripts the numbers of which appear in SUBSET, comma-separated and/or as ranges, e.g. -o 1,3-5
    • --push / -p LINE PATH, to append to the source (def. ‘src.txt’) LINE, adding the tag head if none, followed by the content at PATH then exit
    • --edit / -e N LINE, to update the tag line for script number N to LINE, adding the tag head if none, then exit
    • --init / -i, to create a source (def. ‘src.txt’) then exit
    • --version / -v, to show name and version number then exit
    • --help / -h, to show usage, flags available and notes then exit
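
    For instance, the following invocation (directory name and subset chosen purely for illustration) writes output under ‘build’ and processes only scripts 1, 3, 4 and 5 from a source file named ‘src’:

    aliesce --dest build --only 1,3-5 src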

    Provision in-file

    Any or all of the options above can also be selected by providing their arguments in the source file itself, avoiding the need to list them with each use of the aliesce command.

    Arguments provided in-file are simply placed above the initial tag line, arranged in the usual order, whether on a single line or multiple. They are processed each time the file is handled by aliesce.

    Arguments passed directly on the command line are processed first, followed by those in the file, with the latter overriding the former in the event that an option is selected using both approaches.

    This is similar to the use of the source file directly via hashbang, described in Getting started below.
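
    For illustration, a source file beginning as below would apply the --dest and --only options on every run, assuming the flags are written exactly as on the command line (the values and the script are placeholders):

    --dest build --only 1,3-5
    
    ### exs elixir
    
    IO.puts("Up and running...")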

    Streams

    One or more paths can be piped to aliesce to append the content at each to the source file as a script, auto-preceded by a tag line including the ! signal, then exit.
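
    A possible use, with hypothetical filenames and assuming one path per line on standard input, appending both files to the default ‘src.txt’:

    printf 'setup.exs\nquery.sql\n' | aliesce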

    Defaults

    The default core path, tag, signal, placeholder and command values are defined close to the top of the project source file, i.e. ‘src/main.rs’, should you wish to modify any of them before compilation (see Getting started below).

    The default temporary test directory is defined close to the top of the test module, also in the project source file.

    Getting started

    With Rust and Cargo installed, at the root of the aliesce directory run cargo build --release to compile. The binary is created in the ‘target/release’ directory.

    The binary can be run with the command ./aliesce while in the same directory, and from elsewhere using the pattern path/to/aliesce. It can be run from any directory simply as aliesce by placing it in a directory listed in $PATH, such as ‘/bin’ or ‘/usr/bin’.

    A source file can be used directly by adding to the top of the file a hashbang with the path to the aliesce binary, e.g. #!/usr/bin/aliesce. If flags are to be passed (see Options above), it may be possible to use the env binary with its split string option, e.g. #!/bin/env -S aliesce <flag>[ ...]. This inclusion of flags is similar to the approach described in Provision in-file above.

    Making changes

    Running the tests after making changes and adding tests to cover new behaviour is recommended.

    Tests

    The tests can be run with the following command:

    cargo test -- --test-threads=1

    For the purpose of testing a subset of CLI options a temporary test directory is created (see Defaults above). The flag setting the thread count ensures that the test cases are run in series, allowing for setup and teardown.

    The tests themselves are in the test module at the base of the file.

    Development plan

    The following are the expected next steps in the development of the code base. The general medium-term aim is a convenient parallel scripting tool. Pull requests are welcome for these and other potential improvements.

    • add source file variables available to tag line and script:
      • passed to aliesce via CLI
      • declared in file, including from the environment
      • for defaults
    • extend and/or revise the set of placeholders for:
      • all default path parts
      • use across save path and command
    • provide tag line options for:
      • multiple save paths
      • auxiliary commands
    • provide or extend CLI options for:
      • output verbosity
      • applying a single stage
      • listing save paths
      • importing a script to an arbitrary position
      • interaction with existing scripts:
        • reordering
        • deleting
    • refactor as more idiomatic
    • improve error handling
    • extend test module

    Visit original content creator repository
    https://github.com/barcek/aliesce

  • YCDownloadSession

    Platform Support CocoaPods Carthage compatible Build Status

    Installation via CocoaPods

    Install CocoaPods

    $ brew install ruby
    $ sudo gem install cocoapods
    

    Podfile

    The library is split into two main subspecs:

    • Core : only YCDownloader (the downloader itself)
    • Mgr : everything, i.e. YCDownloader and YCDownloadManager
    source 'https://github.com/CocoaPods/Specs.git'
    platform :ios, '8.0'
    
    target 'TargetName' do
        pod 'YCDownloadSession', '~> 2.0.2', :subspecs => ['Core', 'Mgr']
    end
    

    Then install the dependencies:

    $ pod install
    

    If you get the error [!] Unable to find a specification for YCDownloadSession, the fix is:

    $ pod repo update master
    

    Installation via Carthage

    Install Carthage:

    brew install carthage
    

    Add the following to your Cartfile:

    github "onezens/YCDownloadSession"
    

    Install, then add the framework to your project:

    carthage update --platform ios
    

    Usage

    Notes

    1. When testing on a real device, Background App Refresh must be enabled in the system Settings for background downloading to work fully; if it is not enabled, the app cannot be woken in the background after the first task finishes, so the next download task will not start.

    Import the header

    #import <YCDownloadSession.h>
    

    Set the background-download completion callback in the AppDelegate

    -(void)application:(UIApplication *)application handleEventsForBackgroundURLSession:(NSString *)identifier completionHandler:(void (^)(void))completionHandler{
        [[YCDownloader downloader] addCompletionHandler:completionHandler identifier:identifier];
    }
    

    The downloader: YCDownloader

    Create a download task

    YCDownloadTask *task = [[YCDownloader downloader] downloadWithUrl:@"download_url" progress:^(NSProgress * _Nonnull progress, YCDownloadTask * _Nonnull task) {
        NSLog(@"progress: %f", progress.fractionCompleted); 
    } completion:^(NSString * _Nullable localPath, NSError * _Nullable error) {
        // handler download task completed callback
    }];
    

    Start a download task:

    [[YCDownloader downloader] resumeTask:self.downloadTask];
    

    Pause a download task:

    [[YCDownloader downloader] pauseTask:self.downloadTask];
    

    Cancel a download task:

    [[YCDownloader downloader] cancelTask:self.downloadTask];
    

    Restoring the callbacks of tasks that were in progress after the app exited abnormally

    /**
     Resume a download task; mainly used to restore state after the app exited abnormally, re-attaching the progress and completion callbacks so the task keeps downloading
    
     @param tid the taskId of the download task
     @param progress download progress callback
     @param completion download success/failure callback
     @return the download task
     */
    - (nullable YCDownloadTask *)resumeDownloadTaskWithTid:(NSString *)tid progress:(YCProgressHandler)progress completion:(YCCompletionHandler)completion;
    

    The download task manager: YCDownloadManager

    Configure the task manager

    NSString *path = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, true).firstObject;
    path = [path stringByAppendingPathComponent:@"download"];
    YCDConfig *config = [YCDConfig new];
    config.saveRootPath = path;
    config.uid = @"100006";
    config.maxTaskCount = 3;
    config.taskCachekMode = YCDownloadTaskCacheModeKeep;
    config.launchAutoResumeDownload = true;
    [YCDownloadManager mgrWithConfig:config];
    

    Download task notifications

    // Notification posted when a single YCDownloadItem finishes downloading
    [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(downloadTaskFinishedNoti:) name:kDownloadTaskFinishedNoti object:nil];
    // Notification posted when all tasks managed by the manager have finished
    [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(downloadAllTaskFinished) name:kDownloadTaskAllFinishedNoti object:nil];
    

    Start a download task

    YCDownloadItem *item = [YCDownloadItem itemWithUrl:model.mp4_url fileId:model.file_id];
    item.extraData = ...;
    [YCDownloadManager startDownloadWithItem:item];
    

    Download controls

    /**
    Pause a background download task
         
    @param item the download task item that was created
    */
    + (void)pauseDownloadWithItem:(nonnull YCDownloadItem *)item;
        
    /**
    Resume a background download task
         
    @param item the download task item that was created
    */
    + (void)resumeDownloadWithItem:(nonnull YCDownloadItem *)item;
        
    /**
    Stop a background download task; the task’s cached download data is deleted as well
         
    @param item the download task item that was created
    */
    + (void)stopDownloadWithItem:(nonnull YCDownloadItem *)item;
    

    Cellular network access control

    /**
    Whether downloads over a cellular network are allowed, and whether a download may continue when the network switches to cellular. All existing downloadTasks must be paused and then recreated; otherwise, tasks created
    earlier will keep downloading when the network switches to cellular
         
    @param isAllow whether cellular downloads are allowed
    */
    + (void)allowsCellularAccess:(BOOL)isAllow;
        
    /**
    Returns whether cellular access is allowed
    */
    + (BOOL)isAllowsCellularAccess;
    
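
    A minimal usage sketch of the two methods declared above:

    [YCDownloadManager allowsCellularAccess:NO];
    BOOL cellularAllowed = [YCDownloadManager isAllowsCellularAccess];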

    Screenshots

    Single-file download test

    Multi-video download test

    Download notification

    Visit original content creator repository https://github.com/onezens/YCDownloadSession
  • CombatShields

    Visit original content creator repository
    https://github.com/emipa606/CombatShields

  • axe-live

    axe-live


    Demo recording: recording.mp4


    About

    axe-live is a framework-agnostic tool for running accessibility checks against web
    applications. It uses Deque Labs’ axe library to highlight and disable
    elements on the page which have accessibility problems. The goal is to provide something like compiler
    errors for accessibility during the development cycle. This should help problems get addressed right
    away rather than waiting on a QA process or user reports.

    Installation

    npm install axe-live
    

    or

    yarn add axe-live
    

    Setup

    When your app is running in development mode, start axe-live:

    import * as AxeLive from "axe-live";
    
    AxeLive.run();

    By default, axe-live will watch for changes to your document and try to efficiently re-check when it updates.

    You can customize the behavior by passing an options object to run():

    import * as AxeLive from "axe-live";
    
    AxeLive.run({
      // The node on the page that should be checked, defaults to document
      target: document.getElementById('app'),
      // Whether or not to re-run Axe on DOM changes, defaults to true
      watch: true,
      // Whether or not to start with a minimal display, defaults to false
      minimized: false,
      // Axe configuration options, defaults to Axe defaults
      axeOptions: { runOnly: ['wcag2a', 'wcag2aa'] }
    });

    The axe configuration options are passed directly to axe-core’s
    axe.run
    options parameter.

    Important Notes

    Automated checks can ensure you’ve not made any basic mistakes, but are only part of a robust a11y solution.
    Many of the WCAG guidelines cannot be evaluated automatically and require a human assessment. It’s worthwhile
    to try your app out with a screen reader and think about the usability of the experience for impaired users.

    The axe-core library is very large. You should configure your build to only bundle axe-live when
    running in development mode. Otherwise your users will pay an unnecessarily high cost in download times
    for your app.
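
    One common pattern (a sketch only; the exact environment check depends on your bundler) is to load axe-live behind a development-mode guard:

    if (process.env.NODE_ENV === "development") {
      // Keep axe-live (and the large axe-core dependency) out of production bundles
      import("axe-live").then((AxeLive) => AxeLive.run());
    }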

    On DOM changes, the watcher is conservative in what it asks Axe to check. Specifically, it only checks changed
    elements and elements that were previously in error. This makes checks faster, but it may result in a rare
    miss when a change renders a previously valid element invalid (e.g. a label disappears, making
    a previously correct input invalid).

    Because items in error are re-checked when the DOM changes, it’s a good idea to fix any problems that affect
    large ancestor elements first. If your html or body elements have a problem, sort those out first so every
    change doesn’t re-check your whole page.

    The error highlights are generated from selectors output by axe, which are only as specific as they need to be.
    If you have axe-live running while a page is adding elements, you may see some highlights briefly appear and
    then disappear as new elements are added that match older selectors before axe runs again.

    Frequent DOM changes like JS-based animations that update style attributes could lead to checks that run too
    frequently. You may want to turn off automatic re-checking for fewer pauses in that situation.

    Visit original content creator repository
    https://github.com/MattCheely/axe-live
