Blog

  • artoolkit5-js

    artoolkit5-js

    ES6 module port of artoolkit5. Based on the (now defunct) original Emscripten to JavaScript port and improved by Walter Perdan.

    This build uses WASM for the best possible performance and is designed to be (more or less) a drop-in replacement for the previous jsartoolkit5. Some parts of the previous API have been refactored to use an async interface instead of the former callback-based interface.

    Installation

    Install the module via NPM:

    npm install artoolkit5-js
    

    The module is built in UMD format and can be used in different environments:

    Browser

    <script src="path/to/ARToolkit.js"></script>
    

    Node.js

    const ARToolkit = require('artoolkit5-js');
    

    ES6 Import

    import ARToolkit from 'artoolkit5-js';
    

    Usage

    1) Create controller instance

    First you need to create an instance of ARController:

    ARController.initWithDimensions(640, 480, '/data/camera_para.dat').then(controller => { ... });
    

    This will create an ARController instance expecting source images of dimensions 640×480. The third parameter is a camera definition file which describes the characteristics of your image / video input device. If you don’t know which file to use, just use the default camera_para.dat included with this repository.

    There is an alternative initializer, initWithImage, available as a convenience method; it accepts an HTMLImageElement or HTMLVideoElement instead of width / height. However, this obviously only works in browser (or monkey-patched) environments.

    2) Add markers you want to track

    Next you need to load the marker files to track with your controller. In this example the pattern file for the “Hiro” marker is loaded:

    controller.artoolkit.addMarker(controller.id, '/data/hiro.patt').then(hiroMarkerId => { ... });
    

    3) Start tracking

    // track at 60 FPS
    const FPS = 60;
    
    setInterval(() => {
    
      const result = controller.detectMarker();
      if(result !== 0) {
        // ARToolkit returning a value !== 0 means an error occurred
        console.log('Error detecting markers');
        return;
      }
    
      // get the total number of detected markers in the frame
      const markerNum = controller.getMarkerNum();
      let hiroFound = false;
    
      // check if one of the detected markers is the "Hiro" marker
      for(let i = 0; i < markerNum; i++) {
        const markerInfo = controller.getMarker(i);
        if(markerInfo.idPatt === hiroMarkerId) {
          // store the index of the matching detection result
          hiroFound = i;
          break;
        }
      }
    
      if(hiroFound !== false) {
        console.log('You have found the HIRO marker');
      }
    
    }, 1000 / FPS);
    

    Other ARToolkit API methods

    You can access all public ARToolkit methods and class constants like this:

      // for the full API documentation see
      // https://github.com/artoolkit/artoolkit5
      artoolkit.detectMarker( ... );
    
      console.log(artoolkit.AR_LOG_LEVEL_DEBUG);
    

    Current limitations

    Due to time constraints, this build does not implement NFT and multimarker support (yet). Adding support for both should be trivial, though, as all the groundwork has already been laid. I will implement it once time allows, but PRs are of course welcome!

    Visit original content creator repository
    https://github.com/andypotato/artoolkit5-js

  • DripDash

    ⚠️ This repository is archived. Current development for DripDash is here.

    DripDash Header

    DripDash is an all-in-one aquaponics monitoring, control and data logging tool, built with Vue on Node.js.

    📝 About DripDash

    Aquaponics is hard. It’s a three-way ecosystem with plants, bacteria and fish living in balanced symbiosis. DripDash, along with the rest of the unPhone project, aims to make aquaponics easier.

    DripDash is constructed in three parts:

    • The frontend, constructed with Vue.js for control and monitoring.
    • The backend, integrating express, the database and GraphQL API.
    • The collector, a separate endpoint for communicating with WaterElf devices.

    DripDash is part of the unPhone Project.

    📑 Documentation

    ⚡ Quick Start Guide

    For more detailed instructions, check out the Documentation.

    # Assuming nodejs and git are already present.
    
    # 1 - Clone the repository.
    git clone git@gitlab.com:unphone/dripdash.git
    cd dripdash
    
    # 2 - Install dependencies.
    npm install
    
    # 3 - Connect the database. (Set the connection string in prisma/.env)
    cp prisma/.env.example prisma/.env
    
    # 3.1 - Create the required structure in the database.
    npx prisma migrate up --experimental
    
    # 4a - Start for development.
    npm run serve
    
    # 4b - Alternatively, start for production.
    npm run build
    npm run production

    🪲 Issues & Bugs

    Found a bug? Feel free to raise an issue or submit a PR and we’ll take a look at it.

    👋 Contributors & Thanks

    DripDash is built and maintained by:

    Visit original content creator repository https://github.com/onfe/DripDash
  • typescript-http-client

    typescript-http-client

    Build Status js-standard-style

    A simple TypeScript HTTP client with Promise-based API and advanced filtering support

    Basic usage

    Simple GET request with string response:
    import expect from 'ceylon';
    import { Response, Request, newHttpClient } from 'typescript-http-client'
    
    (async () => {
      // Get a new client
      const client = newHttpClient()
      // Build the request
      const request = new Request('https://jsonplaceholder.typicode.com/todos/1', { responseType: 'text' })
      // Execute the request and get the response body as a string
      const responseBody = await client.execute<string>(request)
      expect(responseBody)
        .toExist()
        .toBeA('string')
        .toBe(`{
      "userId": 1,
      "id": 1,
      "title": "delectus aut autem",
      "completed": false
    }`)
    })()
    Typed response:
    import expect from 'ceylon';
    import { Response, Request, newHttpClient } from 'typescript-http-client'
    
    class Todo {
      completed: boolean
      id: number
      title: string
      userId: number
    }
    
    (async () => {
      // Get a new client
      const client = newHttpClient()
      // Build the request
      const request = new Request('https://jsonplaceholder.typicode.com/todos/1')
      // Execute the request and get the response body as a "Todo" object
      const todo = await client.execute<Todo>(request)
      expect(todo)
        .toExist()
        .toBeA('object')
      expect(todo.userId)
        .toBe(1)
    })()

    Filters

    Multiple filters can be added to the httpClient, in a certain order, forming a chain of filters.

    Filters can be used to:

    • Alter any request property (headers, url, body, etc…)
    • Alter any response property (headers, body, etc…)
    • Short-circuit the chain by returning a custom response without proceeding with the HTTP call, allowing for example for client-side caching.
    • Intercept some or all calls for debugging/logging purposes

    Filters must implement the Filter interface and implement the doFilter method:

    interface Filter<T, U> {
      doFilter(request: Request, filterChain: FilterChain<T>): Promise<Response<U>>
    }

    The request parameter contains the request (possibly already modified by previous filters) and can be modified by the filter (or ignored).

    The filterChain parameter represents the chain of filters following the current filter.
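    The short-circuit case is worth illustrating. Below is a minimal, self-contained sketch of a client-side caching filter; the HttpRequest and HttpResponse interfaces are simplified stand-ins for the library's own Request and Response types, so treat this as a pattern sketch rather than drop-in code.

```typescript
// Simplified stand-ins for the library's Request / Response / FilterChain
// types (the real ones are imported from typescript-http-client).
interface HttpRequest { method: string; url: string }
interface HttpResponse<T> { body: T }
interface FilterChain<T> { doFilter (request: HttpRequest): Promise<HttpResponse<T>> }
interface Filter<T, U> {
  doFilter (request: HttpRequest, filterChain: FilterChain<T>): Promise<HttpResponse<U>>
}

// A caching filter: on a cache hit it returns the stored response without
// calling filterChain.doFilter(), so the rest of the chain (and the actual
// HTTP call) is skipped entirely.
class CachingFilter<T> implements Filter<T, T> {
  private cache = new Map<string, HttpResponse<T>>()

  async doFilter (request: HttpRequest, filterChain: FilterChain<T>): Promise<HttpResponse<T>> {
    const key = `${request.method} ${request.url}`
    const hit = this.cache.get(key)
    if (hit) {
      return hit // short-circuit: no HTTP call is made
    }
    const response = await filterChain.doFilter(request)
    this.cache.set(key, response)
    return response
  }
}
```

    With the real library, such a filter would be registered like any other, e.g. via client.addFilter(new CachingFilter(), 'cache').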

    Full filter example: transforming the response body

    This example transforms the fetched Todos and modifies their title:

    import expect from 'ceylon';
    import { Response, Request, Filter, FilterChain, newHttpClient } from 'typescript-http-client'
    
    class Todo {
      completed: boolean
      id: number
      title: string
      userId: number
    }
    
    // Transform Todos : Alter title
    class TodoTransformer implements Filter<Todo, Todo> {
      async doFilter (call: Request, filterChain: FilterChain<Todo>): Promise<Response<Todo>> {
        const response = await filterChain.doFilter(call)
        const todo = response.body
        todo.title = 'Modified title'
        return response
      }
    }
    
    (async () => {
      // Get a new client
      const client = newHttpClient()
      // Add our Todo transformer filter
      client.addFilter(new TodoTransformer(), 'Todo transformer', {
        // Only apply to GET requests with a URL starting with
        // 'https://jsonplaceholder.typicode.com/todos/'
        enabled(call: Request): boolean {
          return call.method === 'GET' && 
            call.url.startsWith('https://jsonplaceholder.typicode.com/todos/')
        }
      })
      // Build the request
      const request = new Request('https://jsonplaceholder.typicode.com/todos/1')
      // Execute the request and get the response body as an object
      const todo = await client.execute<Todo>(request)
      expect(todo)
        .toExist()
        .toBeA('object')
      expect(todo.userId)
        .toBe(1)
      expect(todo.title)
        .toBe('Modified title')
    })()

    Hierarchy of Filters

    Testing

    In the tests, you first indicate which namespace you are testing, and then which method, both using describe. The entity under test is the first argument of describe; the second argument is a function. Inside that function you call another function, it, which also takes two arguments: a string describing the expected result (it has no effect on the code, but is useful for future developers), and a function that ends with an assertion. That last function is the test itself.

    The hook beforeEach executes before every test.
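    The describe / it / beforeEach structure above can be sketched with a tiny hand-rolled harness. These are simplified, synchronous stand-ins written only to show how the pieces nest; real tests use the mocha runner.

```typescript
// Minimal, synchronous stand-ins for mocha's describe / it / beforeEach,
// written only to show how the pieces nest; real tests use the mocha runner.
type TestFn = () => void
const beforeEachHooks: TestFn[] = []
const results: string[] = []

function describe (entity: string, suite: TestFn): void {
  suite() // the second argument is just a function that registers tests
}

function beforeEach (hook: TestFn): void {
  beforeEachHooks.push(hook)
}

function it (expectation: string, test: TestFn): void {
  beforeEachHooks.forEach(hook => hook()) // hooks run before every test
  try {
    test() // the test itself ends with an assertion
    results.push(`ok - ${expectation}`)
  } catch {
    results.push(`fail - ${expectation}`)
  }
}

// Namespace first, then the method under test, then the expectation.
let client = { execute: (url: string): string => url } // placeholder, replaced by beforeEach

describe('httpClient', () => {
  beforeEach(() => {
    client = { execute: (url: string) => `GET ${url}` } // fresh fixture per test
  })
  describe('execute', () => {
    it('should return the response for the given URL', () => {
      if (client.execute('/todos/1') !== 'GET /todos/1') throw new Error('assertion failed')
    })
  })
})
```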

    Visit original content creator repository https://github.com/taktik/typescript-http-client
  • n-asset-macro

    Webrouse/n-asset-macro

    Build Status Scrutinizer Code Quality Code Coverage Latest stable Donate

    Asset macro for Latte and Nette Framework.

    Useful for assets cache busting with gulp, webpack and other similar tools.

    Requirements

    Nette 3 is fully supported and tested.

    Installation

    The best way to install webrouse/n-asset-macro is using Composer:

    $ composer require webrouse/n-asset-macro

    Then register the extension in the config file:

    # app/config/config.neon
    extensions:
        assetMacro: Webrouse\AssetMacro\DI\Extension

    Usage

    The macro can be used in any presenter or control template:

    {* app/presenters/templates/@layout.latte *}
    <script src="{asset resources/vendor.js}"></script>
    <script src="{asset //resources/main.js}"></script>

    It prepends the path with $basePath or $baseUrl (see absolute) and loads the revision from the revision manifest:

    <script src="/base/path/resources/vendor.d78da025b7.js"></script>
    <script src="http://www.example.com/base/path/resources/main.34edebe2a2.js"></script>

    See the examples for usage with gulp, webpack.

    Revision manifest

    The revision manifest is a JSON file that contains the revision (path or version) of each asset.

    It can be generated by various asset processors such as gulp and webpack, see examples.

    The revision manifest is searched for in the asset directory and in the parent directories up to %wwwDir%.

    Expected file names: assets.json, busters.json, versions.json, manifest.json, rev-manifest.json.
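    A sketch of that lookup order, assuming the search checks each directory's candidate file names before moving to the parent (the macro itself is PHP, so the exact order here is an assumption):

```typescript
// Sketch of the documented lookup: starting in the asset's directory, walk up
// toward the web root and return the first manifest file found (file names as
// listed above). Uses a caller-supplied existence check instead of a real filesystem.
const MANIFEST_NAMES = ['assets.json', 'busters.json', 'versions.json', 'manifest.json', 'rev-manifest.json']

function findManifest (wwwDir: string, assetDir: string, exists: (path: string) => boolean): string | null {
  let dir = assetDir
  while (dir.startsWith(wwwDir)) {
    for (const name of MANIFEST_NAMES) {
      const candidate = `${dir}/${name}`
      if (exists(candidate)) return candidate // nearest manifest wins
    }
    if (dir === wwwDir) break // stop at the web root
    dir = dir.slice(0, dir.lastIndexOf('/')) // move to the parent directory
  }
  return null
}
```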

    The path to revision manifest can be set directly (instead of autodetection):

    # app/config/config.neon
    assetMacro:
        manifest: %wwwDir%/assets.json

    Or you can specify asset => revision pairs in config file:

    # app/config/config.neon
    assetMacro:
        manifest:
          'js/vendor.js': 16016edc74d  # or js/vendor.16016edc74d.js
          'js/main.js':  4b82916016    # or js/main.4b82916016.js

    The revision manifest may contain either the asset version or the asset path. Both ways are supported.

    Method 1: asset version in file name (preferable)

    With this method, the files have a different name at each change.

    Example revision manifest:

    {
    	"js/app.js": "js/app.234a81ab33.js",
    	"js/vendor.js": "js/vendor.d67fbce193.js",
    	"js/locales/en.js": "js/locales/en.d78da025b7.js",
    	"js/locales/sk.js": "js/locales/sk.34edebe2a2.js",
    	"css/app.css": "css/app.04b5ff0b97.js"
    }

    With the example manifest, the expr. {asset "js/app.js"} generates: /base/path/js/app.234a81ab33.js.

    Method 2: asset version as a query string

    This approach looks better at first glance. The asset path is still the same, and only the parameter in the query changes.

    However, it can cause problems with some cache servers, which don’t take the URL parameters into account.

    Example revision manifest:

    {
    	"js/app.js": "234a81ab33",
    	"js/vendor.js": "d67fbce193",
    	"js/locales/en.js": "d78da025b7",
    	"js/locales/sk.js": "34edebe2a2",
    	"css/app.css": "04b5ff0b97"
    }

    With the example manifest, the expr. {asset "js/app.js"} generates: /base/path/js/app.js?v=234a81ab33.

    Asset macro automatically detects which of these two formats of revision manifest is used.
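    The two output modes can be sketched as follows. This mirrors the documented behaviour, but the exact heuristic (treating a manifest value containing a slash or dot as a path) is an assumption, since the macro itself is PHP:

```typescript
// Sketch of the two documented output modes: a manifest value that looks like
// a file path (Method 1) is used directly; a bare version string (Method 2)
// is appended as a ?v= query parameter. The detection heuristic is assumed.
function assetUrl (basePath: string, asset: string, manifest: Record<string, string>): string {
  const revision = manifest[asset]
  if (revision === undefined) {
    return `${basePath}/${asset}` // no revision known
  }
  // Method 1: the manifest value is itself a path (revisioned file name)
  if (revision.includes('/') || revision.includes('.')) {
    return `${basePath}/${revision}`
  }
  // Method 2: the manifest value is a bare version, appended as a query string
  return `${basePath}/${asset}?v=${revision}`
}
```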

    Macro arguments

    format

    The format is defined by the second macro parameter or using the format key (default %url%).

    format can be used with needed => false to hide the whole asset expression (e.g. <link ...) in case of an error.

    You can also use it to include asset content instead of a path.

    Placeholder Example output
    %content% <svg>....</svg> (file content)
    %path% js/main.js or js/main.8c48f58df.js
    %raw% 8c48f58df or js/main.8c48f58df.js
    %base% %baseUrl% if absolute => true else %basePath%
    %basePath% /base/path
    %baseUrl% http://www.example.com/base/path
    %url% %base%%path% (default format) eg. /base/path/js/main.8c48f58df.js

    {* app/presenters/templates/@layout.latte *}
    {asset 'js/vendor.js', '<script src="%url%"></script>'}
    <script src="{asset 'js/livereload.js', format => '%path%?host=localhost&v=%raw%'}"></script>

    needed

    Error handling is set in the configuration using the missingAsset, missingManifest and missingRevision keys.

    These settings can be overridden by the third macro parameter or by using the needed key (default true).

    The argument needed => false causes a missing file or a missing revision record to be ignored.

    A missing version is replaced with the string unknown.

    Example of needed parameter

    • absent.js file doesn’t exist.
    • missing_rev.js exists but doesn’t have a revision in the manifest (or the manifest has not been found).
    {asset 'js/absent.js', '<script src="%url%"></script>', FALSE}
    {asset 'js/missing_rev.js', format => '<script src="%url%"></script>', needed => FALSE}

    Generated output:

    <script src="/base/path/js/missing_rev.js?v=unknown"></script>

    absolute

    Output URL type (relative or absolute) is defined by the fourth macro parameter or by using the absolute key (default false).

    If absolute => true or the asset path is prefixed with //, e.g. (//assets/js/main.js), an absolute URL will be generated instead of a relative one.

    {asset 'js/vendor.js'}      {* equal to {asset 'js/vendor.js', absolute => false} *}
    {asset '//js/vendor.js'}    {* equal to {asset 'js/vendor.js', absolute => true}  *}
    

    Generated output:

    <script src="/base/path/js/vendor.d67fbce193.js"></script>
    <script src="http://www.example.com/base/path/js/vendor.d67fbce193.js"></script>

    Caching

    In production mode, the macro output is cached in the application’s default cache storage.

    It can be changed in the configuration using the boolean cache key.

    Configuration

    Default configuration, which usually doesn’t need to be changed:

    # app/config/config.neon
    assetMacro:
        # Cache generated output
        cache: %productionMode%
        # Path to revision manifest or asset => revision pairs,
        # if set, the autodetection is switched off
        manifest: null # %wwwDir%/assets/manifest.json
        # File names for automatic detection of revision manifest
        autodetect:
            - assets.json
            - busters.json
            - versions.json
            - manifest.json
            - rev-manifest.json
        # Absolute path to assets dir
        assetsPath: %wwwDir%/ # %wwwDir%/assets
        # Public path to "assetsPath"
        publicPath: / # /assets
        # Action if missing asset file: exception, notice, or ignore
        missingAsset: notice
        # Action if missing manifest file: exception, notice, or ignore
        missingManifest: notice
        # Action if missing asset revision in manifest: exception, notice, or ignore
        missingRevision: notice
        # Default format, can be changed in macro using "format => ..."
        format: '%%url%%' # character % is escaped by %%

    ManifestService

    It is also possible to access the manifest from your code using Webrouse\AssetMacro\ManifestService (from the DI container).

    /** @var ManifestService $manifestService */
    $cssAssets = $manifestService->getManifest()->getAll('/.*\.css$/');

    Examples

    Examples based on nette/sandbox:

    License

    N-asset-macro is under the MIT license. See the LICENSE file for details.

    Visit original content creator repository https://github.com/michaljurecko/n-asset-macro

  • tf-k3s

    Terraform Modules for K3s

    Provisions K3s nodes and is able to build a cluster from multiple nodes.

    You can use the k3s module to template the necessary cloud-init files for creating a K3s cluster node.
    Modules for OpenStack and Hetzner hcloud that bundle all necessary resources are available.

    Supported Cloud Providers

    • OpenStack
    • Hetzner Cloud (hcloud)

    Modules

    k3s

    This module provides the templating of the user_data for use with cloud-init.

    module "k3s_server" {
      source = "git::https://github.com/nimbolus/tf-k3s.git//k3s"
    
      name          = "k3s-server"
      cluster_token = "abcdef"
      k3s_ip        = "10.11.12.13"
      k3s_args = [
        "server",
        "--disable", "traefik",
        "--node-label", "az=ex1",
      ]
    }
    
    output "server_user_data" {
      value     = module.k3s_server.user_data
      sensitive = true
    }

    k3s-openstack

    This module deploys a single K3s node on OpenStack. It uses the k3s module internally. Depending on the supplied parameters, the node will initialize a new cluster or join an existing cluster as a server or agent.

    module "server" {
      source = "git::https://github.com/nimbolus/tf-k3s.git//k3s-openstack"
    
      name               = "k3s-server"
      image_name         = "ubuntu-20.04"
      flavor_name        = "m1.small"
      availability_zone  = "ex"
      keypair_name       = "keypair"
      network_id         = var.network_id
      subnet_id          = var.subnet_id
      security_group_ids = [module.secgroup.id]
    
      cluster_token = "abcdef"
      k3s_args = [
        "server",
        "--disable", "traefik",
        "--node-label", "az=ex1",
        # if using bootstrap-auth include
        "--kube-apiserver-arg", "enable-bootstrap-token-auth",
      ]
      bootstrap_token_id     = "012345"
      bootstrap_token_secret = "0123456789abcdef"
    }

    k3s-openstack/security-group

    The necessary security-group for the K3s cluster can be deployed with this module.

    module "secgroup" {
      source = "git::https://github.com/nimbolus/tf-k3s.git//k3s-openstack/security-group"
    }

    k3s-hcloud

    This module deploys a single K3s node on Hetzner Cloud (hcloud), using the k3s module internally. Depending on the supplied parameters, the node either initializes a new cluster or joins an existing one as a server or agent.

    module "server" {
      source = "git::https://github.com/nimbolus/tf-k3s.git//k3s-hcloud"
    
      name          = "k3s-server"
      keypair_name  = "keypair"
      network_id    = var.network_id
      network_range = var.ip_range
    
      cluster_token = "abcdef"
      k3s_args = [
        "server",
        "--disable", "traefik",
        "--node-label", "az=ex1",
        # if using bootstrap-auth include
        "--kube-apiserver-arg", "enable-bootstrap-token-auth",
      ]
      bootstrap_token_id     = "012345"
      bootstrap_token_secret = "0123456789abcdef"
    }

    bootstrap-auth

    To access the cluster, an optional bootstrap token can be installed. To install it, specify the bootstrap_token_id and bootstrap_token_secret parameters on the server that initializes the cluster.
    For convenience, the nimbolus/k8sbootstrap provider can retrieve the CA certificate from the cluster; it can also output a kubeconfig that uses the bootstrap token.

    data "k8sbootstrap_auth" "auth" {
      // depends_on = [module.secgroup] // if using OpenStack
      server = module.server1.k3s_external_url
      token  = local.token
    }
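    The data source's results can then be exported for use elsewhere. A minimal sketch, assuming the provider exposes the kubeconfig as a `kubeconfig` attribute (the attribute name is an assumption not confirmed by this README; check the nimbolus/k8sbootstrap provider docs):

```hcl
output "kubeconfig" {
  # assumed attribute name on the k8sbootstrap_auth data source
  value     = data.k8sbootstrap_auth.auth.kubeconfig
  sensitive = true
}
```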

    Examples

    • basic: basic usage of the k3s module with one server and one agent node
    • ha-hcloud: 3 Servers and 1 Agent with bootstrap token on Hetzner Cloud
    • ha-openstack: 3 Servers and 1 Agent with bootstrap token on OpenStack

    Tests

    Basic

    cd tests/basic
    go test -count=1 -v

    OpenStack

    cd tests/ha-openstack
    cp env.sample .env
    $EDITOR .env
    source .env
    go test -count=1 -v

    hcloud

    cd tests/ha-hcloud
    cp env.sample .env
    $EDITOR .env
    source .env
    go test -count=1 -v

    Visit original content creator repository
    https://github.com/nimbolus/tf-k3s

  • podcast-hunt

    Podcast Hunt

    A clone of Product Hunt web built with MongoDB, Express and React.

    Available Scripts

    In the project directory, you can run:

    yarn start

    Runs the app in the development mode.
    Open http://localhost:3000 to view it in the browser.

    yarn test

    Launches the test runner in interactive watch mode.

    yarn build

    Builds the app for production to the build folder.
    It correctly bundles React in production mode and optimizes the build for the best performance.

    The build is minified and the filenames include the hashes.
    Your app is ready to be deployed!

    yarn eject

    Note: this is a one-way operation. Once you eject, you can’t go back!

    If you aren’t satisfied with the build tool and configuration choices, you can eject at any time. This command will remove the single build dependency from your project.

    Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except eject will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own.

    You don’t ever have to use eject. The curated feature set is suitable for small and mid-sized deployments, and you shouldn’t feel obligated to use this feature. However, we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it.

    Visit original content creator repository
    https://github.com/alexdevero/podcast-hunt

  • sapiens

    📚 Data Visualization Project: “Sapiens: A Brief History of Humankind” 🌍

    🔍 Context

    Jonathan was deeply affected by reading Sapiens: A Brief History of Humankind by Yuval Noah Harari. This fascinating book, published in 2011, traces human evolution from the Paleolithic (roughly 2.6 million years ago) to the present day. Enthused by the book, Jonathan took meticulous, detailed notes that serve as a solid foundation for our project.

    📖 The notes are available here:

    🛠️ Description

    Our approach is to build our own dataset in JSON format. An example is available in the sapiens.json file.

    Data type: qualitative ordinal.

    💻 We will extract meaningful data from Jonathan’s summary: simple but essential information for understanding the fascinating history of our species.
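    To make the "qualitative ordinal" nature of the data concrete, here is a hypothetical record shape; all field names are illustrative only, and the actual schema in sapiens.json may differ:

```json
{
  "order": 2,
  "period": "Cognitive Revolution",
  "start_years_ago": 70000,
  "summary": "Emergence of fictive language and large-scale cooperation."
}
```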

    🎯 Goal

    Our visualization aims to explain and make accessible the rich human history presented in Sapiens. Rather than looking for trends or exploring new data, we want to organize and synthesize the book’s key moments and present them clearly and understandably.

    Faced with the wealth of information in the book, we decided to structure it visually for better comprehension.

    🧠 Our challenge will be to maintain a coherent narrative thread while highlighting the crucial stages that shaped humanity. ⏳

    📊 Sources & Reference

    Our data is unique: extracted directly from the book and carefully formatted as JSON by our team. This format will let us build an effective, impactful visualization.

    Inspiration:

    📝 Wireframe

    We built our mockups in Figma, aiming for high-fidelity models that already include the final texts as well as a working prototype (except for a few features, such as the scrolling effect). Here are the links to our mockups:

    Project brief: GitHub COMEM-VisualDon

    Visit original content creator repository
    https://github.com/K-sel/sapiens

  • SSI2017

    SSI2017

    IP addresses of the machines

    • NETFILTER: 192.168.49.253
    • Host-only network: VMNET4
    • Bridged network: VMNET2

    Initializing the virtual machine

    Take a snapshot of the machine’s initial state

    $addgroup debian sudo

    Log out / log back in

    $sudo apt-get install wireshark

    Choose “Yes” so that non-root users can capture packets

        $sudo addgroup debian wireshark
        $sudo apt-get install openconnect
        $sudo apt-get install freerdp-x11
        $sudo openconnect dcloud-lon-anyconnect.cisco.com
        $sudo routel
    
    

    In another console:

        $ping  198.19.10.200
        $xfreerdp /u:administrator /p:C1sco12345 /v:198.19.10.1
    
    

    General overview of the infrastructure

    • Network map
    • The various components

    List of machines

    Virtual machines present on the network:

    • Client: client machine used to run the tests
    • IPTable: IPTable firewall
    • ProxySyslog: “do-everything” machine used as a proxy and for log management (plus, optionally, DHCP and DNS)
    • Pfsense: PFSense firewall
    • Onion: machine running Security Onion

    Base machines used to build the network machines:

    • DebianSSI-IMIE2017: reference machine used as a linked clone by Client; also used for the dcloud tests
    • DebianSSI-IMIE-NoX: reference machine used as a linked clone by IPtable and ProxySyslog

    Network

    • Creation of a custom network: vmnet4 / 192.168.49.0/24
    • The IPTable and Pfsense machines have one interface on vmnet4 and one bridged interface on the external network
    • The other machines have one interface on vmnet4
    • Watch out for the addresses reserved by VMware (gateway, host, DHCP: .1, .2, .254)

    The internal network (VMNET4) is 192.168.49.0/24.

    The DHCP range is 192.168.49.16/28:

    • Address: 192.168.49.16
    • Netmask: 255.255.255.240 = 28
    • Wildcard: 0.0.0.15
    • Network: 192.168.49.16/28
    • Broadcast: 192.168.49.31
    • HostMin: 192.168.49.17
    • HostMax: 192.168.49.30
    • Hosts/Net: 14

    The server machines (static IPs) are in 192.168.49.224/27:

    • Address: 192.168.49.224
    • Netmask: 255.255.255.224 = 27
    • Wildcard: 0.0.0.31
    • Broadcast: 192.168.49.255
    • HostMin: 192.168.49.225
    • HostMax: 192.168.49.254
    • Hosts/Net: 30
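    The host counts above follow directly from the prefix lengths (2^(32 − prefix) − 2, excluding the network and broadcast addresses); a quick check in plain POSIX shell:

```shell
#!/bin/sh
# Usable hosts in a subnet: 2^(32 - prefix), minus network and broadcast addresses.
hosts() { echo $(( (1 << (32 - $1)) - 2 )); }

hosts 28   # DHCP range 192.168.49.16/28    -> 14
hosts 27   # server range 192.168.49.224/27 -> 30
```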

    See: http://jodies.de/ipcalc

    NETFILTER firewall

    It consists of three main modules:

    • IPRoute2: provides the routing functions
    • Netfilter/IPtable: the firewall proper
    • L7Filter: application-level filtering functions

    These three modules interact within a Linux machine to produce a fairly complete firewall.

    Configuring the NETFILTER firewall

    Step 1: put the machine on the network

        $sudo addgroup <useraccount> sudo
        $apt-get install vim
        $vim /etc/network/interfaces
        allow-hotplug eth0
        iface eth0 inet static
            address 192.168.49.253
            netmask 255.255.255.0
        $cat /etc/resolv.conf
        $vim /etc/hostname
        NETFILTER
        $vim /etc/hosts
        127.0.1.1 NETFILTER
        $sudo service networking restart
    

    Then test pinging an outside address, and pinging the firewall from the client on the host-only network.
    Both must succeed.

    Step 2: enable routing and set up NAT

    Enable routing:

      $sudo vim /etc/sysctl.conf
      net.ipv4.ip_forward=1
      $sudo sysctl -p
    

    Enable NAT:

    • eth1: WAN interface
    • eth0: LAN interface

    $sudo iptables -A FORWARD -i eth0 -j ACCEPT
    $sudo iptables -A FORWARD -o eth0 -j ACCEPT
    $sudo iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
    

    Verification

    • Ping from the workstation on the internal network to an external IP
    • List the rules on the firewall

    $sudo iptables -L -v
    $sudo iptables -t nat -L -v
    
    

    Step 3: protect the firewall

    Block all traffic destined to the firewall itself, except SSH and ping:

    $sudo iptables -t filter -A OUTPUT -p icmp -j ACCEPT           
    $sudo iptables -t filter -A INPUT -p icmp -j ACCEPT                       
    $sudo iptables -t filter -A OUTPUT -p TCP --sport 22 -j ACCEPT            
    $sudo iptables -t filter -A INPUT -p TCP --dport 22 -j ACCEPT
    $sudo iptables -A OUTPUT -o lo -j ACCEPT
    $sudo iptables -A INPUT -i lo -j ACCEPT
    $sudo iptables -t filter -P INPUT DROP
    $sudo iptables -t filter -P OUTPUT DROP
    $sudo iptables -t filter -P FORWARD DROP
    

    PFSENSE firewall

    • LAN IP: 192.168.49.252
    • DHCP range: 192.168.49.16/28

    Define the machine

    • FreeBSD64
    • 2 network interfaces: one bridged, the other host-only
    • 2 cores
    • 2048 MB of RAM
    • Boot from CD-ROM on the Pfsense ISO

    Install the system on the machine

    Boot the VM from the installation ISO

    1. Multi-user boot: Enter
    2. I
    3. Change keymap to FR/ISO => accept settings
    4. Quick and easy install: Enter
    5. Enter (standard)
    6. Reboot

    Boot from the machine’s disk and configure it

    1. Enter
    2. Check that the LAN and WAN network interfaces are assigned correctly
    3. If the interfaces are misassigned, pick option 1) and reassign them
    4. Option 2), Enter
    5. Leave the WAN interface on DHCP
    6. Configure the LAN interface:
    • IP: 192.168.49.252
    • Mask: 24
    • Enter (no gateway)
    • Enter (no IPv6)
    • Enable DHCP: y
    • DHCP: 192.168.49.17 / 192.168.49.31
    • Revert to HTTP: y => choose between HTTP and HTTPS; n => HTTPS only. Answer: n

    The minimal network configuration is done. Open a browser on the host-only network and continue at https://192.168.49.252

    • Login: admin
    • Password: pfsense

    Configure the firewall from the web interface

    • Hostname: PFSENSE
    • Domain: local
    • Primary DNS Server: empty (WAN DHCP configuration)
    • Secondary DNS Server: empty

    Next screen

    • NTP server: unchanged (we keep pfsense’s default for the demonstration)
    • Time zone: Paris

    Next screen

    • WAN: unchanged

    Next screen

    • LAN: unchanged

    Next screen

    • click the link to the web configuration => a system dashboard is displayed

    Stop the machine and take a snapshot in preparation for the next steps.

    Flow management

    Checking the basic configuration

    • Disable the DHCP server built into the virtualization tool (VMware)
    • Start PFSENSE
    • Start a client on the host-only network
    • Switch the client to a dynamic IP
      • Remove the eth0 configuration from /etc/network/interfaces
      • Reboot (or restart networking and the network manager)
      • /sbin/ifconfig => OK, DHCP lease 192.168.49.17
      • Ping an external IP => OK
      • traceroute to an external IP => the traffic does go through the pfsense

    Cryptography

    MD5SUM

    To check a file’s integrity, the md5sum command generates a hash of the file. Note that MD5 is no longer considered secure: collisions can be crafted deliberately, so it is only useful for detecting accidental corruption, not for proving authenticity.

    Example:

    $wget http://ftp.us.debian.org/debian/pool/main/o/openssh/openssh-server_7.4p1-7_i386.deb
    $md5sum openssh-server_7.4p1-7_i386.deb
    

    Compare the value with the one published on the Debian website.
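    The comparison can also be automated with `md5sum -c`, which checks a file against a recorded checksum. A local sketch with a stand-in file (the .deb name below is just a placeholder, not a real package):

```shell
#!/bin/sh
# Create a stand-in file, record its checksum, then verify it with md5sum -c.
printf 'example payload\n' > openssh-demo.deb     # placeholder, not a real package
md5sum openssh-demo.deb > openssh-demo.deb.md5    # record the expected hash
md5sum -c openssh-demo.deb.md5                    # prints "openssh-demo.deb: OK"
```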

    SSH authentication

    $ssh-keygen -t rsa -b 4096
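    For non-interactive use, the key pair can be generated with an explicit output path, and its fingerprint displayed before the public key is distributed. The file name and the empty passphrase below are for illustration only; use a passphrase in practice:

```shell
#!/bin/sh
# Generate a 4096-bit RSA key pair without prompting, then display its fingerprint.
ssh-keygen -t rsa -b 4096 -N '' -f ./id_rsa_demo -C 'ssi2017-demo'
ssh-keygen -lf ./id_rsa_demo.pub
```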

    Visit original content creator repository
    https://github.com/mdautrey/SSI2017