Blog

  • moonsideProductions

I am no longer actively maintaining this theme, but I will try to review pull requests when possible.

    hugo-uno

A responsive Hugo theme with awesome fonts, charts and light-box galleries. The theme is based on Uno for Ghost.
An example site is available at hugouno.fredrikloch.me

    A Swedish translation is available in the branch feature/swedish

    Usage

    The following is a short tutorial on the usage of some features in the theme.
    Configuration

To take full advantage of the features in this theme, you can add variables to your site config file. The following is the example config from the example site:

    languageCode = "en-us"
    contentdir = "content"
    publishdir = "public"
    builddrafts = false
    baseurl = "http://fredrikloch.me/"
    canonifyurls = true
    title = "Fredrik Loch"
    author = "Fredrik Loch"
    copyright = "This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License."
    
    
    [indexes]
       category = "categories"
       tag = "tags"
    [Params]
      AuthorName = "Fredrik"
      github = "Senjindarashiva"
      bitbucket = "floch"
      flickr = "senjin"
      twitter = "senjindarshiva"
      email = "mail@fredrikloch.me"
      description = ""
      cv = "/pages/cv"
      legalnotice = "/pages/legal-notice"
      muut = "fredrikloch"
      linkedin = "fredrikloch"
      cover = "/images/background-cover.jpg"
      logo = "/img/logo-1.jpg"
    

If you prefer to use Discourse, replace the “muut” line with the following (remember the trailing slash):

      discourse = "http://discourse.yoursite.com/"
    

If you prefer to use Disqus, replace the “muut” line with the following:

      disqus = "disqusUsername"
    

    Charts

To create charts I use Chart.js, which can be configured through basic JS files. To add a chart to a post, use the following shortcode:

    {{% chart id="basicChart" width=860 height=400 js="../../js/chartData.js" %}}
    

The JavaScript file specified contains the data for the chart; a basic example could look like this:

    
$(function(){
  // Data and styling for the bar chart
  var chartData = {
      labels: ["Jekyll", "Hugo", "Wintersmith"],
      datasets: [
          {
              label: "Mean build time",
              fillColor: "#E1EDD7",
              strokeColor: "#E1EDD7",
              highlightFill: "#C1D8AB",
              highlightStroke: "#C1D8AB",
              data: [784, 100, 5255]
          }
      ]
  };

  // Render the chart into the canvas created by the shortcode
  var ctx = $('#basicChart').get(0).getContext("2d");
  var myBarChart = new Chart(ctx).Bar(chartData, {
      scaleBeginAtZero: true,
      responsive: true,
      maintainAspectRatio: false
  });
});
    

A running example can be found in my comparison between Jekyll, Hugo and Wintersmith
    Gallery

To add a gallery to the site, we use basic HTML together with lightGallery to create a responsive light-box gallery.

<ul style="list-style: none;" id="lightGallery">
    <li data-src="https://github.com/RMBLRX/pathToImg.jpg">
        <img src="pathToThumb.jpg">
    </li>
    <li data-src="https://github.com/RMBLRX/pathToImg.jpg">
        <img src="pathToThumb.jpg">
    </li>
</ul>

<script src="../../js/lightGallery.min.js"></script>
<script>
    $("#lightGallery").lightGallery();
</script>
    

    Features

    Cover page
    The landing page for Hugo-Uno is a full screen ‘cover’ featuring your avatar, blog title, mini-bio and cover image.

    Built with SASS, using BEM
    If you know HTML and CSS making modifications to the theme should be super simple.

    Responsive
    Hugo-Uno looks great on all devices, even those weird phablets that nobody buys.

Muut comments
Muut integration allows readers to comment on your posts.

    Font-awesome icons
    For more information on available icons: font-awesome

    No-JS fallback
While JS is widely used, some themes and websites don’t provide a fallback for when no JS is available (I’m looking at you, Squarespace). If for some weird reason a visitor has JS disabled, your blog will still be usable.

    License

    Creative Commons Attribution 4.0 International

    Development

In order to develop or make changes to the theme, you will need to have both the Sass compiler and Bourbon installed.

To check your installation, run the following commands from a terminal. You should see output similar to the CLI output shown, though your version numbers may vary.

**SASS**

    sass -v
    > Sass 3.3.4 (Maptastic Maple)

If for some reason Sass isn’t installed, either follow the instructions on the Sass install page or run bundle install in the project root.

**Bourbon**

    bourbon help
    > Bourbon 3.1.8

    If Bourbon isn’t installed follow the installation instructions on the Bourbon website or run bundle install in the project root.

Once installation is verified, we need to install the Bourbon mixins into the scss folder.

From the project root, run bourbon install with the correct path:

    bourbon install --path static/scss
    > bourbon files installed to static/scss/bourbon/

With the Bourbon mixins inside the scss source folder, we can use the sass CLI command to watch the scss files for changes and recompile them.

    sass --watch static/scss:static/css
    >>>> Sass is watching for changes. Press Ctrl-C to stop.

To minify the CSS files, use the following command from the project root:

    curl -X POST -s --data-urlencode 'input@static/css/uno.css' http://cssminifier.com/raw > static/css/uno.min.css

    Visit original content creator repository
    https://github.com/RMBLRX/moonsideProductions

  • LCL

Login Control for Oracle

An Oracle SQL and PL/SQL solution to control logins

    Why?

    I have two reasons:

1. to refuse unauthorized logins, and
2. to log the attempts

    How?

    There is a logon trigger which checks the

    • Oracle user
    • OS user
    • IP address of the client
    • Program / Application

If the login is allowed, it proceeds; if not, the data is logged and an error is raised.

For users with the DBA role, login is always allowed, even if the trigger is invalid or raises an error.

    There is a table to controll the logins:

      ORACLE_USER             VARCHAR2 (   400 )
      OS_USER                 VARCHAR2 (   400 )
      IP_ADDRESS              VARCHAR2 (   400 )
      PROGRAM                 VARCHAR2 (   400 )
      ENABLED                 CHAR     (     1 )     Y or N

This table contains the valid user/client/program combinations.
The column values are matched with LIKE, so they can be patterns;
i.e. “%” means “every” user/IP address/program etc.
But ‘%’,’%’,’%’,’%’,’Y’ means anybody from anywhere, and this rule overrides all others!
The refused logon data will be logged into the LCL_LOG table.
There is an ENABLED column in the LCL_TABLE too, so you can disable logins at any time by setting this value to “N”.
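As an illustration of the LIKE semantics the control table relies on (a sketch in JavaScript for readability only; the real matching is done by Oracle's LIKE inside the trigger, and likeMatch is a hypothetical helper):

```javascript
// Hypothetical helper mimicking Oracle LIKE semantics:
// '%' matches any sequence of characters, '_' matches exactly one.
function likeMatch(pattern, value) {
  const regex = new RegExp(
    '^' +
    pattern
      .replace(/[.*+?^${}()|[\]\\]/g, '\\$&') // escape regex metacharacters
      .replace(/%/g, '.*')                    // SQL '%' -> regex '.*'
      .replace(/_/g, '.') +                   // SQL '_' -> regex '.'
    '$'
  );
  return regex.test(value);
}

console.log(likeMatch('%', 'SCOTT'));               // true: '%' allows everything
console.log(likeMatch('192.168.%', '192.168.0.7')); // true: client subnet allowed
console.log(likeMatch('192.168.%', '10.0.0.1'));    // false: this client is refused
```

This also shows why the all-'%' row is dangerous: it matches every combination, so it overrides every more specific rule.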

    The whole solution is not too complicated, so see the install script file for more details!

    Visit original content creator repository
    https://github.com/frankiechapson/LCL

  • QRealTime

    Welcome to QRealTime Plugin

    flowchart

    QRealTime Plugin allows you to:

    • Create new survey form directly from GIS layers in QGIS
• Synchronise data from ODK Aggregate, KoboToolbox, and ODK Central servers
    • Import data from server

    Getting Started

    Installation

    Prerequisites:
    • QGIS installed

    Installation steps:

1. Open the Plugin Manager, search for the QRealTime plugin, and install it.
2. Restart QGIS so that changes in the environment take effect.

    Configuration:

From the main menu choose Plugins –> QRealTime –> QRealTime Setting.
Here you have three tabs, one each for Aggregate, KoboToolBox, and Central. Choose one of the tabs and enter the url (required).
For a Kobo server the url can be:
https://kobo.humanitarianresponse.info/ or https://kf.kobotoolbox.org/ for humanitarian and researcher accounts respectively
    Other fields are optional.
    You can create a free account in KoboToolbox here
    You can set up ODK Central here
    QRealTimePic

    Using the Plugin:


Right click on any existing layer –> QRealTime and choose the desired option:
Make Online (to create a new form), Import (to import data from an existing form), or Sync (to automatically update your layer)

    options


The QRealTime plugin is capable of converting a QGIS layer into a data collection form. To design a data collection form for a humanitarian crisis, we have to create an appropriate vector layer. For demonstration purposes, you can create a shapefile with the following fields:

    tables

    Resources:


If you are not sure how to create a value map in QGIS, visit this link.
For a tutorial on how to use the QRealTime Plugin, check out this video:
    Visit original content creator repository https://github.com/shivareddyiirs/QRealTime
  • auction-events

    auction-events

    Hyperledger Fabric sample Using Event Handling with the next generation IBM Blockchain Platform

This code pattern demonstrates the event handling feature within an application that is based on an IKS cluster with the IBM Blockchain Platform V2.0 service on IBM Cloud, applied to an auction use case. It shows how events can be emitted by using the Hyperledger Fabric SDK and subscribed to by external applications. The application is implemented in Node.js and uses the Hyperledger Fabric SDK for Node.js to connect to the network, set up an event listener, and catch transactional events.

    A client application may use the Fabric Node.js client to register a “listener” to receive blocks as they are added to the channel ledger. This is known as “channel-based events”, and it allows a client to start to receive blocks from a specific block number, allowing event processing to run normally on blocks that may have been missed. The Fabric Node.js client can also assist client applications by processing the incoming blocks and looking for specific transactions or chaincode events. This allows a client application to be notified of transaction completion or arbitrary chaincode events without having to perform multiple queries or search through the blocks as they are received. After the transaction proposal has been successfully endorsed, and before the transaction message has been successfully broadcasted to the orderer, the application should register a listener to be notified of the event when the transaction achieves finality, which is when the block containing the transaction gets added to the peer’s ledger/blockchain.

Fabric committing peers provide an event stream to publish blocks to registered listeners. A block gets published whenever a committing peer adds a validated block to the ledger. There are three ways to register a listener to get notified:

    • register a block listener to get called for every block event. The listener will be passed a fully decoded Block object.

    • register a transaction listener to get called when the specific transaction by id is committed (discovered inside a published block). The listener will be passed the transaction id, transaction status and block number.

• register a chaincode event listener to get called when a specific chaincode event has arrived. The listener will be passed the ChaincodeEvent, block number, transaction id, and transaction status.

In this pattern we register a transaction event, so when a transaction is completed/committed, an event is triggered and the application catches and reports it.
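That flow can be sketched minimally as follows (assuming the fabric-client 1.4 ChannelEventHub API; only the callback is exercised here, since a live event hub requires a running network, and the sample values are hypothetical):

```javascript
// Callback invoked once the block containing the transaction is
// committed to the peer's ledger, i.e. the transaction reaches finality.
function onTxCommitted(txId, code, blockNumber) {
  return 'Transaction ' + txId + ' is ' + code + ' in block ' + blockNumber;
}

// With a live network, registration would look roughly like:
//   const eventHub = channel.getChannelEventHub(peer);
//   eventHub.registerTxEvent(txId, onTxCommitted,
//     (err) => console.error(err), { unregister: true });
//   eventHub.connect();

console.log(onTxCommitted('6be255d6', 'VALID', 124)); // Transaction 6be255d6 is VALID in block 124
```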

    Audience level : Intermediate Developers

    If you have an IBM Cloud Lite account, you can also use the IBM Blockchain Platform Service free for 30 days to do this pattern. Additionally, IKS is free too.

    When you have completed this code pattern, you will understand how to:

    • Package the smart contract using IBM Blockchain Platform Extension for VS Code
    • Setup a Hyperledger Fabric network on IBM Blockchain Platform 2.0
    • Install and instantiate smart contract package onto the IBM Blockchain Platform 2.0
    • Develop a Node.js server with the Hyperledger Fabric SDK to interact with the deployed network and setup your applications to trigger and catch events

    Architecture flow

    UPDATE

    1. The developer develops a smart contract using Node.js
2. Use the IBM Blockchain Platform Extension for VS Code to package the Auction smart contract.
    3. Setup and launch the IBM Blockchain Platform 2.0 service
    4. The IBM Blockchain Platform 2.0 enables the creation of a network onto a IBM Kubernetes Service, enabling installation and instantiation of the Auction smart contract on the network
    5. The Node.js application uses the Fabric SDK to add a listener to specific transactions and subsequently interact with the deployed network on IBM Blockchain Platform 2.0 and issues transactions.
    6. Events are emitted as transactions are triggered and blocks are committed to the ledger. The events are sent back to the Node.js application.

    Included components

    • IBM Blockchain Platform 2.0 gives you total control of your blockchain network with a user interface that can simplify and accelerate your journey to deploy and manage blockchain components on the IBM Cloud Kubernetes Service.
    • IBM Cloud Kubernetes Service creates a cluster of compute hosts and deploys highly available containers. A Kubernetes cluster lets you securely manage the resources that you need to quickly deploy, update, and scale applications.
    • IBM Blockchain Platform Extension for VS Code is designed to assist users in developing, testing, and deploying smart contracts — including connecting to Hyperledger Fabric environments.

    Featured technologies

    • Hyperledger Fabric v1.4 is a platform for distributed ledger solutions, underpinned by a modular architecture that delivers high degrees of confidentiality, resiliency, flexibility, and scalability.
    • Node.js is an open source, cross-platform JavaScript run-time environment that executes server-side JavaScript code.
    • Hyperledger Fabric SDK for node.js provides a powerful API to interact with a Hyperledger Fabric blockchain.

    Watch the Video

    Prerequisites

    Prerequisites (Local)

    If you want to run this pattern locally, without any Cloud services, then all you need is VSCode and the IBM Blockchain Platform extension.

    Steps (Local Deployment)

    To run a local network, you can find steps here.

    Running the application

    Follow these steps to set up and run this code pattern. The steps are described in detail below.

    Steps

    1. Clone the repo
    2. Package the smart contract
    3. Create IBM Cloud services
    4. Build a network
    5. Deploy Auction Event Smart Contract on the network
    6. Connect application to the network
    7. Run the application

    1. Clone the repo

    Clone this repository in a folder of your choice:

    git clone https://github.com/IBM/auction-events.git
    

    2. Package the smart contract

    We will use the IBM Blockchain Platform extension to package the smart contract.

    • Open Visual Studio code and open the contract folder from auction-events that was cloned earlier.

    • Press the F1 key to see the different VS code options. Choose IBM Blockchain Platform: Package a Smart Contract Project.

    • Click the IBM Blockchain Platform extension button on the left. This will show the packaged contracts on top and the blockchain connections on the bottom. Note You will see auction@0.0.1 instead of globalfinancing@1.0.0.

    • Next, right click on the packaged contract (in this case, select auction@0.0.1) to export it and choose Export Package.

• Choose a location on your machine and save the .cds file. We will use this packaged smart contract later to deploy on the IBM Blockchain Platform 2.0 service.

    Now, we will start creating our Hyperledger Fabric network on the IBM Cloud.

    3. Create IBM Cloud services

• Create the IBM Cloud Kubernetes Service. You can find the service in the Catalog. For this code pattern, we can use the Free cluster; give it a name. Note that IBM Cloud allows one instance of a free cluster, which expires after 30 days.




• After your Kubernetes cluster is up and running, you can deploy the IBM Blockchain Platform on the cluster. The service walks you through a few steps and finds your cluster on the IBM Cloud to deploy the service on.


    • Once the Blockchain Platform is deployed on the Kubernetes cluster, you can launch the console to start operating on your blockchain network.


    4. Build a network

    We will build out the network as provided by the IBM Blockchain Platform documentation. This will include creating a channel with a single peer organization with its own MSP and CA (Certificate Authority), and an orderer organization with its own MSP and CA. We will create the respective identities to deploy peers and operate nodes.

    Create your organization and your entry point to your blockchain

    • Create your peer organization CA

      • Click Add Certificate Authority.
      • Click IBM Cloud under Create Certificate Authority and Next.
      • Give it a Display name of Org1 CA.
      • Specify an Admin ID of admin and Admin Secret of adminpw.


    • Use your CA to register identities

      • Select the Org 1 CA Certificate Authority that we created.
      • First, we will register an admin for our organization “org1”. Click on the Register User button. Give an Enroll ID of org1admin, and Enroll Secret of org1adminpw. Click Next. Set the Type for this identity as client and select from any of the affiliated organizations from the drop-down list. We will leave the Maximum enrollments and Add Attributes fields blank.
      • We will repeat the process to create an identity of the peer. Click on the Register User button. Give an Enroll ID of peer1, and Enroll Secret of peer1pw. Click Next. Set the Type for this identity as peer and select from any of the affiliated organizations from the drop-down list. We will leave the Maximum enrollments and Add Attributes fields blank.


    • Create the peer organization MSP definition

      • Navigate to the Organizations tab in the left navigation and click Create MSP definition.
      • Enter the MSP Display name as Org1 MSP and an MSP ID of org1msp.
      • Under Root Certificate Authority details, specify the peer CA that we created Org1 CA as the root CA for the organization.
      • Give the Enroll ID and Enroll secret for your organization admin, org1admin and org1adminpw. Then, give the Identity name, Org1 Admin.
      • Click the Generate button to enroll this identity as the admin of your organization and export the identity to the wallet. Click Export to export the admin certificates to your file system. Finally click Create MSP definition.


    • Create a peer
      • On the Nodes page, click Add peer.
      • Click IBM Cloud under Create a new peer and Next.
      • Give your peer a Display name of Peer Org1.
      • On the next screen, select Org1 CA as your Certificate Authority. Then, give the Enroll ID and Enroll secret for the peer identity that you created for your peer, peer1, and peer1pw. Then, select the Administrator Certificate (from MSP), Org1 MSP, from the drop-down list and click Next.
  • Give the TLS Enroll ID, admin, and TLS Enroll secret, adminpw; the same values as the Enroll ID and Enroll secret that you gave when creating the CA. Leave the TLS CSR hostname blank.
      • The last side panel will ask you to Associate an identity and make it the admin of your peer. Select your peer admin identity Org1 Admin.
      • Review the summary and click Add Peer.


    Create the node that orders transactions

    • Create your orderer organization CA

      • Click Add Certificate Authority.
      • Click IBM Cloud under Create Certificate Authority and Next.
      • Give it a unique Display name of Orderer CA.
      • Specify an Admin ID of admin and Admin Secret of adminpw.


    • Use your CA to register orderer and orderer admin identities

      • In the Nodes tab, select the Orderer CA Certificate Authority that we created.
      • First, we will register an admin for our organization. Click on the Register User button. Give an Enroll ID of ordereradmin, and Enroll Secret of ordereradminpw. Click Next. Set the Type for this identity as client and select from any of the affiliated organizations from the drop-down list. We will leave the Maximum enrollments and Add Attributes fields blank.
      • We will repeat the process to create an identity of the orderer. Click on the Register User button. Give an Enroll ID of orderer1, and Enroll Secret of orderer1pw. Click Next. Set the Type for this identity as peer and select from any of the affiliated organizations from the drop-down list. We will leave the Maximum enrollments and Add Attributes fields blank.


    • Create the orderer organization MSP definition

      • Navigate to the Organizations tab in the left navigation and click Create MSP definition.
      • Enter the MSP Display name as Orderer MSP and an MSP ID of orderermsp.
      • Under Root Certificate Authority details, specify the peer CA that we created Orderer CA as the root CA for the organization.
      • Give the Enroll ID and Enroll secret for your organization admin, ordereradmin and ordereradminpw. Then, give the Identity name, Orderer Admin.
      • Click the Generate button to enroll this identity as the admin of your organization and export the identity to the wallet. Click Export to export the admin certificates to your file system. Finally click Create MSP definition.


    • Create an orderer

      • On the Nodes page, click Add orderer.
      • Click IBM Cloud and proceed with Next.
      • Give your peer a Display name of Orderer.
      • On the next screen, select Orderer CA as your Certificate Authority. Then, give the Enroll ID and Enroll secret for the peer identity that you created for your orderer, orderer1, and orderer1pw. Then, select the Administrator Certificate (from MSP), Orderer MSP, from the drop-down list and click Next.
  • Give the TLS Enroll ID, admin, and TLS Enroll secret, adminpw; the same values as the Enroll ID and Enroll secret that you gave when creating the CA. Leave the TLS CSR hostname blank.
      • The last side panel will ask to Associate an identity and make it the admin of your peer. Select your peer admin identity Orderer Admin.
      • Review the summary and click Add Orderer.


    • Add organization as Consortium Member on the orderer to transact

      • Navigate to the Nodes tab, and click on the Orderer that we created.
      • Under Consortium Members, click Add organization.
      • From the drop-down list, select Org1 MSP, as this is the MSP that represents the peer’s organization org1.
      • Click Submit.


    Create and join channel

    • Create the channel

      • Navigate to the Channels tab in the left navigation.
      • Click Create channel.
      • Give the channel a name, mychannel.
      • Select the orderer you created, Orderer from the orderers drop-down list.
      • Select the MSP identifying the organization of the channel creator from the drop-down list. This should be Org1 MSP (org1msp).
      • Associate available identity as Org1 Admin.
      • Click Add next to your organization. Make your organization an Operator.
      • Click Create.


    • Join your peer to the channel

      • Click Join channel to launch the side panels.
      • Select your Orderer and click Next.
      • Enter the name of the channel you just created. mychannel and click Next.
      • Select which peers you want to join the channel, click Peer Org1 .
      • Click Submit.


    5. Deploy the Auction Event Smart Contract on the network

• Install a smart contract (note: substitute the word auction wherever you see the word fabcar in the graphics)

      • Click the Smart contracts tab to install the smart contract.
      • Click Install smart contract to upload the Auction smart contract package file, which you packaged earlier using the Visual Studio code extension.
      • Click on Add file and find your packaged smart contract.
      • Once the contract is uploaded, click Install.


• Instantiate smart contract (note: substitute the word auction wherever you see the word fabcar in the graphics)

      • On the smart contracts tab, find the smart contract from the list installed on your peers and click Instantiate from the overflow menu on the right side of the row.
      • On the side panel that opens, select the channel, mychannel to instantiate the smart contract on. Click Next.
      • Select the organization members to be included in the policy, org1msp. Click Next.
      • Give Function name of instantiate and leave Arguments blank. Note: instantiate is the method in the my-contract.js file that initiates the smart contracts on the peer. Some may name this initLedger.
      • Click Instantiate.


    6. Connect application to the network

• Connect with sdk through connection profile (note: substitute the word auction wherever you see the word fabcar in the graphics)

      • Under the Instantiated Smart Contract, click on Connect with SDK from the overflow menu on the right side of the row.
      • Choose from the dropdown for MSP for connection, org1msp.
      • Choose from Certificate Authority dropdown, Org1 CA.
      • Download the connection profile by scrolling down and clicking Download Connection Profile. This will download the connection json which we will use soon to establish connection.
      • You can click Close once the download completes.


    • Create an application admin

      • Go to the Nodes tab on the left bar, and under Certificate Authorities, choose your organization CA, Org1 CA.
      • Click on Register user.
      • Give an Enroll ID and Enroll Secret to administer your application users, app-admin and app-adminpw.
      • Choose client as Type and any organization for affiliation. We can pick org1 to be consistent.
      • You can leave the Maximum enrollments blank.
  • Under Attributes, click on Add attribute. Give the attribute as hf.Registrar.Roles = *. This will allow this identity to act as a registrar and issue identities for our app. Click Add attribute.
      • Click Register.


    • Update application connection

      • Copy the connection profile you downloaded into application folder
      • Update the config.json file with:
      • The connection json file name you downloaded.
      • The enroll id and enroll secret for your app admin, which we earlier provided as app-admin and app-adminpw.
      • The orgMSP ID, which we provided as org1msp.
  • The caName, which can be found in your connection json file under “organizations” -> “org1msp” -> “certificateAuthorities”. This will look like an IP address and a port.
  • The peer, which can be found in your connection json file under “organizations” -> “org1msp” -> “peers”. This will look like an IP address and a port.
      • The username you would like to register.
      • Update gateway discovery to { enabled: true, asLocalhost: false } to connect to IBP.
     {
       "channel_name": "mychannel",
        "smart_contract_name": "auction",
        "connection_file": "mychannel_auction_profile.json",
        "appAdmin": "app-admin",
        "appAdminSecret": "app-adminpw",
        "orgMSPID": "org1msp",
        "caName": "173.193.79.114:32615",
        "peer": "grpcs://173.193.79.114:30324",
        "orderer": "grpcs://173.193.79.114:32018",
        "userName": "user1",
        "gatewayDiscovery": { "enabled": true, "asLocalhost": false }
     }

    7. Run the application

    • Enroll admin

      • First, navigate to the application directory, and install the node dependencies.

        cd application
        npm install
      • Run the enrollAdmin.js script

        node enrollAdmin.js

    This will create a directory called wallet and insert the user Admin along with its certificate authority.

    • You should see the following in the terminal:

      msg: Successfully enrolled admin user app-admin and imported it into the wallet
• In the newest version of the Hyperledger Fabric Node SDK (1.4 release) there are three main event types that can be subscribed to:

      1. Contract events – these have to be emitted from the chaincode by calling the stub.setEvent(name,payload) method. An example can be seen in the auction chaincode on line 141 of contract/lib/auction.js. These types of events are great, since you can customize exactly what data you want to send to the client application. Note that these events will only be triggered once a certain function within your chaincode is called.
      2. Transaction (Commit) events – these are automatically emitted after a transaction is committed to the ledger.
  3. Block events – these are emitted automatically when a block is committed. Note that there can be multiple transactions in a block, so you may get multiple transaction events for one block event.
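The chaincode side of the first type can be sketched as follows; FakeStub is a hypothetical in-memory stand-in for the Fabric chaincode stub, assumed here only to expose setEvent(name, payload) with the same shape as the real one:

```javascript
// Hypothetical in-memory stand-in for the Fabric chaincode stub.
class FakeStub {
  setEvent(name, payload) {
    // The real stub attaches the event to the transaction's results;
    // here we just record it so the emission can be inspected.
    this.lastEvent = { name, payload };
  }
}

// A chaincode function emits a contract event by name with a Buffer payload.
function emitTradeEvent(stub, tradeEvent) {
  stub.setEvent('TradeEvent', Buffer.from(JSON.stringify(tradeEvent)));
}

const stub = new FakeStub();
emitTradeEvent(stub, { type: 'Start Auction', id: 'l1', amount: 50 });
console.log(stub.lastEvent.name);                              // TradeEvent
console.log(JSON.parse(stub.lastEvent.payload.toString()).id); // l1
```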

    8. Emit Contract Events

• To illustrate each of these three main event types, we will have a separate script for each that shows the events in action. First, let’s check out the contractEvents.js file. This file uses the addContractListener function to look for any TradeEvent events that may be published from our chaincode. You can see in our contract directory that our StartBidding, Offer, and CloseBidding functions all emit an event by calling await ctx.stub.setEvent('TradeEvent', Buffer.from(JSON.stringify(tradeEvent))); Then, the callback function in our contractEvents.js file will fire once it has detected that the TradeEvent was sent. Go ahead and run contractEvents.js by typing the following command in the terminal:
      application$ node contractEvents.js 
      Wallet path: /Users/Horea.Porutiu@ibm.com/Workdir/testDir/auction-events/application/wallet
      gateway connect
      ************************ Start Trade Event *******************************************************
      type: Start Auction
      ownerId: auction@acme.org
      id: l1
      description: Sample Product
      status: {"code":1,"text":"FOR_SALE"}
      amount: 50
      buyerId: auction@acme.org
      Block Number: 124 Transaction ID: 
      6be255d6c2ab968ab9f0bd4bbc3477f51f1e02512d11e86fc509f2f6f0e51a7e Status: VALID
      ************************ End Trade Event ************************************
      closebiddingResponse: 
      {"listingId":"l1","offers":[{"bidPrice":100,"memberId":"memberB@acme.org"},{"bidPrice":50,"memberId":"memberA@acme.org"}],"productId":"p1","reservePrice":50,"state":"{\"code\":3,\"text\":\"SOLD\"}"}
      Transaction to close the bidding has been submitted

The above output parses the trade event: it shows the type of the event, the owner, the id, the description of the product, the status, and so on. These are all things we have built and emitted within our chaincode. Now that we understand how contract events work, let’s move on to the block event listener.
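As an aside, the decoding that the contractEvents.js callback performs on each payload can be sketched in isolation (the sample fields below mirror the output above; the exact structure is whatever the chaincode serialized):

```javascript
// The event payload arrives as bytes; the chaincode serialized it as JSON.
const payload = Buffer.from(JSON.stringify({
  type: 'Start Auction',
  ownerId: 'auction@acme.org',
  id: 'l1',
  status: { code: 1, text: 'FOR_SALE' },
  amount: 50
}));

// Decode and pick out the fields, as the listener callback would.
const tradeEvent = JSON.parse(payload.toString('utf8'));
console.log('type: ' + tradeEvent.type);                     // type: Start Auction
console.log('id: ' + tradeEvent.id);                         // id: l1
console.log('status: ' + JSON.stringify(tradeEvent.status)); // status: {"code":1,"text":"FOR_SALE"}
```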

    9. Emit Block Events

• Block events are different from contract events, since you have less control over what exactly is output. Go ahead and check out blockEvents.js. Note that there may be multiple transactions within one block, and you can edit how many transactions are in your block by editing the block batch size for a channel. You can read more details about this here. The main components of the block are the block header, the block data, and the block metadata.

      • The block header contains the block number (starting at 0 for the genesis block and increasing by 1 for every new block appended to the blockchain), the current block hash (the hash of all transactions in the current block), and the previous block hash.
      • The block data contains a list of the transactions in order.
      • The block metadata contains the time when the block was written, the certificate, public key and signature of the block writer.
    • Go ahead and run the blockEvents.js script by typing the following commands in the terminal. For each contract.submitTransaction we submit, a new block is added to the ledger. Notice that the output is divided into header, data, and metadata. You can then parse those respective parts of the output to learn more about each specific part of the block.

    application$ node blockEvents.js 
    Wallet path: /Users/Horea.Porutiu@ibm.com/Workdir/testDir/auction-events/application/wallet
    gateway connect
    *************** start block header **********************
    { number: '396',
      previous_hash: 'af979a1632e1ba69a75256dce4bafad40e93ebec6ee17de5b2923bbeb5abfec8',
      data_hash: '4db396d91151c432e1f17f32254565bc2445975d6d8c9000ff74a5c2a845dd26' }
    *************** end block header **********************
    *************** start block data **********************
    { signature: <Buffer 30 44 02 20 14 31 37 8d be 63 99 69 cc b7 35 30 b7 71 d8 0f 38 98 70 c6 7a cb fa a6 ed c3 a8 eb 28 c1 90 9f 02 20 05 4a 5d 66 61 4a 4f e9 42 37 11 b1 ... >,
      payload: 
      { header: 
          { channel_header: 
            { type: 3,
              version: 1,
              timestamp: '2019-08-30T00:10:45.075Z',
              channel_id: 'mychannel',
              tx_id: 'cb12f4a9209c0c35d20213c4d2c517c2b199761cb29902a11bc955eba291acc6',
              epoch: '0',
              extension: <Buffer 12 09 12 07 61 75 63 74 69 6f 6e>,
              typeString: 'ENDORSER_TRANSACTION' },
            signature_header: 
            { creator: 
                { Mspid: 'org1msp',
                  IdBytes: '-----BEGIN CERTIFICATE-----\nMIICaTCCAhCgAwIBAgIUC2iFJ+dVTbE8QSVoqjjno3mT9sowCgYIKoZIzj0EAwIw\naDELMAkGA1UEBhMCVVMxFzAVBgNVBAgTDk5vcnRoIENhcm9saW5hMRQwEgYDVQQK\nEwtIeXBlcmxlZGdlcjEPMA0GA1UECxMGRmFicmljMRkwFwYDVQQDExBmYWJyaWMt\nY2Etc2VydmVyMB4XDTE5MDgyODE4MjQwMFoXDTIwMDgyNzE4MjkwMFowJjEPMA0G\nA1UECxMGY2xpZW50MRMwEQYDVQQDEwphcHAtYWRtaW4yMFkwEwYHKoZIzj0CAQYI\nKoZIzj0DAQcDQgAEMFDxKrg+VEO3mK5tfJKf7oULfagOMcAmX4T4NUmLI/ojsnTe\naTJUeJQQ3Vyp1L7pV3hZGvY9HlZUt6uVoLjju6OB2TCB1jAOBgNVHQ8BAf8EBAMC\nB4AwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQU/GYKt2Q4Bk7aKon90K66rlgwWlAw\nHwYDVR0jBBgwFoAUUii9qu9Xs+cjOsm1MaM+xK1UCL4wdgYIKgMEBQYHCAEEansi\nYXR0cnMiOnsiaGYuQWZmaWxpYXRpb24iOiIiLCJoZi5FbnJvbGxtZW50SUQiOiJh\ncHAtYWRtaW4yIiwiaGYuUmVnaXN0cmFyLlJvbGVzIjoiKiIsImhmLlR5cGUiOiJj\nbGllbnQifX0wCgYIKoZIzj0EAwIDRwAwRAIgQEaWB8YVEfO67OAWLypnQX//0nrg\nOGtLqVv/HRkg2TsCIC4cn04cmTjLWg/GVGuXSLlaZV1SZFlGvd9lNDN2ytea\n-----END CERTIFICATE-----\n' },
              nonce: <Buffer fa 18 68 a7 a9 8b 74 10 8e 41 75 22 0d 74 8a 60 86 15 1f e3 13 59 d3 06> } },
        data: 
          { actions: 
            [ { header: 
                  { creator: [Object],
                    nonce: <Buffer fa 18 68 a7 a9 8b 74 10 8e 41 75 22 0d 74 8a 60 86 15 1f e3 13 59 d3 06> },
                payload: { chaincode_proposal_payload: [Object], action: [Object] } } ] } } }
    *************** end block data **********************
    *************** start block metadata ****************
    { metadata: 
      [ { value: '\n\u0000\u0012\n\n\b\n\u0001\u0001\u0010\u0002\u0018�\u0003',
          signatures: 
            [ { signature_header: 
                { creator: [Object],
                  nonce: <Buffer 0b f0 46 f5 17 1e 74 f3 c4 34 75 aa b0 85 80 f6 4e 4c d0 f5 ba a0 26 b4> },
                signature: <Buffer 30 45 02 21 00 b7 12 42 a3 43 21 8b a3 ea e1 56 af e4 63 2c 7e cc c6 c3 bf b7 e5 01 73 69 27 ce 4b 13 63 d3 5b 02 20 1d a1 e8 38 f6 fb 48 bb 3c 9a 01 ... > } ] },
        { value: { index: '0' }, signatures: [] },
        [ 0 ] ] }
    *************** end block metadata ****************

    To learn more about the specifics of what is included inside of a block, read this page.
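
    As a minimal sketch (using a hypothetical, pared-down block object shaped like the output above, not the SDK's full block type), pulling the three parts of a block apart might look like this:

```javascript
// Hypothetical mock of a block as delivered to a block listener,
// trimmed to the fields discussed above.
const block = {
  header: {
    number: '396',
    previous_hash: 'af979a16...',
    data_hash: '4db396d9...',
  },
  data: { data: [{ /* one transaction envelope */ }] },
  metadata: { metadata: [] },
};

// Split a block into the three parts described above.
function summarizeBlock(block) {
  return {
    number: Number(block.header.number), // block header: position in the chain
    txCount: block.data.data.length,     // block data: ordered transaction list
    hasMetadata: Array.isArray(block.metadata.metadata), // block metadata
  };
}

console.log(summarizeBlock(block)); // { number: 396, txCount: 1, hasMetadata: true }
```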

    10. Emit Transaction Events

    • Lastly, let’s listen for transaction events. This is even more granular than block events, since multiple transactions can make up a block. We will use transaction.addCommitListener to listen to transactions. Go ahead and look at the transactionEvents.js file. We are adding a commit listener, so that when we submit a transaction and it is committed, we get back the transactionId, status, and block height. Go ahead and run the transactionEvents.js file, and you should see the following output in your terminal:
    application$ node transactionEvents.js 
    Wallet path: /Users/Horea.Porutiu@ibm.com/Workdir/testDir/auction-events/application/wallet
    gateway connect
    transaction committed
    'ef7f833d6039e41c5054d0bba0d327cfc14bfd7be836a6c5e65547320880d1af'
    'VALID'
    405
    transaction committed end
    Transaction to add seller has been submitted
    application$ 
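
    A commit listener's callback essentially receives the three values shown above. The sketch below uses a hypothetical callback shape (the exact fabric-network listener signature varies between SDK versions); it only illustrates what the listener gets back.

```javascript
// Hypothetical commit-listener callback, mirroring the output above.
function onCommit(err, transactionId, status, blockHeight) {
  if (err) throw err;
  return {
    transactionId,
    status,
    blockHeight,
    committed: status === 'VALID', // only VALID transactions changed the ledger
  };
}

// Simulated invocation with the values from the sample run above:
const result = onCommit(
  null,
  'ef7f833d6039e41c5054d0bba0d327cfc14bfd7be836a6c5e65547320880d1af',
  'VALID',
  405
);
console.log(result.committed); // true
```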
    • Nice job! You’ve now learned how to emit and listen for contract, block, and transaction events.

    Troubleshooting

    • If you receive the following error on submitting transaction: error: [Client.js]: Channel not found for name mychannel

      It is safe to ignore this error because the IBP 2.0 beta has service discovery enabled. (In order to use service discovery to find other peers, please define anchor peers for your channel in the UI.) If you really want the message to go away, you can add the channels section to the connection profile, but it is a warning rather than a true error: it tells the user that the channel is found but is not in the connection profile.

      As an example, you can manually add the following JSON, updating the IP addresses and ports as needed:

      "channels": {
              "mychannel": {
                  "orderers": [
                      "169.46.208.151:32078"
                  ],
                  "peers": {
                      "169.46.208.151:31017": {}
                  }
              }
          },
      
    • In the invoke-emit.js application, you will see the following code. It is important to note that in order for the getClient method to actually get the connection-profile content, line 4 must occur before line 6. If it doesn’t, the client constant will be null. The order must be correct for the code to run successfully.

    1. // A gateway defines the peers used to access Fabric networks
        
    2.    await gateway.connect(ccp, { wallet, identity: appAdmin , discovery: {enabled: true, asLocalhost:false }});
    3.    console.log('Connected to Fabric gateway.');
    
    4.   const network = await gateway.getNetwork(channelName);
    5.    // Get addressability to network
    
    6.    const client = gateway.getClient();
        
    7.    const channel = client.getChannel('mychannel');
    8.    console.log('Got addressability to channel');
        
    9.    const channel_event_hub = channel.getChannelEventHub('173.193.79.114:30324');

    Extending the code pattern

    This application can be expanded in a couple of ways:

    • Create a wallet for every member and use the member’s wallet to interact with the application.
    • Add a UI application in place of the invoke.js node application to execute the transactions.

    Links

    License

    This code pattern is licensed under the Apache Software License, Version 2. Separate third-party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses. Contributions are subject to the Developer Certificate of Origin, Version 1.1 (DCO) and the Apache Software License, Version 2.

    Apache Software License (ASL) FAQ

    Visit original content creator repository https://github.com/IBM/auction-events
  • textio

    textio CircleCI Go Report Card GoDoc

    Note
    Segment has paused maintenance on this project, but may return it to an active status in the future. Issues and pull requests from external contributors are not being considered, although internal contributions may appear from time to time. The project remains available under its open source license for anyone to use.

    Go package providing tools for advanced text manipulations

    Motivation

    This package aims to provide a suite of tools to deal with text parsing and formatting. It is intended to extend what the standard library already offers, and make it easy to integrate with it.

    Examples

    This section presents a couple of examples of how to use this package.

    Indenting

    Indentation is often a complex problem to solve when dealing with streams of text that may be composed of multiple lines. To address this problem, this package provides the textio.PrefixWriter type, which implements the io.Writer interface and automatically prepends every line of output with a predefined prefix.

    Here is an example:

    func copyIndent(w io.Writer, r io.Reader) error {
        p := textio.NewPrefixWriter(w, "\t")
    
        // Copy data from an input stream into the PrefixWriter, all lines will
        // be prefixed with a '\t' character.
        if _, err := io.Copy(p, r); err != nil {
            return err
        }
    
        // Flushes any data buffered in the PrefixWriter, this is important in
        // case the last line was not terminated by a '\n' character.
        return p.Flush()
    }

    Tree Formatting

    A common way to represent tree-like structures is the formatting used by the tree(1) unix command. The textio.TreeWriter type is an implementation of an io.Writer which supports this kind of output. It works in a recursive fashion where nodes created from a parent tree writer are formatted as part of that tree structure.

    Here is an example:

    func ls(w io.Writer, path string) {
    	tree := NewTreeWriter(w)
    	tree.WriteString(filepath.Base(path))
    	defer tree.Close()
    
    	files, _ := ioutil.ReadDir(path)
    
    	for _, f := range files {
    		if f.Mode().IsDir() {
    			ls(tree, filepath.Join(path, f.Name()))
    		}
    	}
    
    	for _, f := range files {
    		if !f.Mode().IsDir() {
    			io.WriteString(NewTreeWriter(tree), f.Name())
    		}
    	}
    }
    
    ...
    
    ls(os.Stdout, "examples")

    Which gives this output:

    examples
    ├── A
    │   ├── 1
    │   └── 2
    └── message
    
    Visit original content creator repository https://github.com/segmentio/textio
  • rsshub-never-die

    rsshub-never-die

    Version Docker Pulls GitHub Workflow Status Documentation Maintenance License: MIT

    An RSSHub proxy service based on hono. It supports automatic load balancing, automatic failover, and reverse proxying of RSSHub instances, and can be deployed via Node.js, Docker, Vercel, Cloudflare Workers, and more.

    The project name comes from “Legends Never Die”.

    🏠 Homepage

    https://github.com/CaoMeiYouRen/rsshub-never-die#readme

    📦 Requirements

    • node >=18

    🚀 Deployment

    You can find more public instances here

    Cloudflare Workers deployment

    One-click deployment

    Click the button below to deploy to Cloudflare Workers with one click

    Deploy to Cloudflare Workers

    Manual deployment

    1. Edit the wrangler.toml configuration file.
    name = "rsshub-never-die"
    main = "dist/app.mjs"
    minify = true
    compatibility_date = "2024-10-20"
    compatibility_flags = ["nodejs_compat"]
    
    [vars]
    # Timeout (ms)
    TIMEOUT = 60000
    # Maximum request body size (bytes), default 100 MB
    MAX_BODY_SIZE = 104857600
    # Cache time (seconds)
    CACHE_MAX_AGE = 300
    # URLs of RSSHub instances, separated by commas.
    # The official instance https://rsshub.app does not need to be listed; it is added by default.
    RSSHUB_NODE_URLS = 'https://rsshub.rssforever.com, https://hub.slarker.me, https://rsshub.pseudoyu.com, https://rsshub.ktachibana.party, https://rsshub.woodland.cafe, https://rss.owo.nz, https://yangzhi.app, https://rsshub.henry.wang, https://rss.peachyjoy.top, https://rsshub.speednet.icu'
    # Maximum number of instance nodes, default 6
    MAX_NODE_NUM=6
    # Access code; note this is not the same as RSSHub's ACCESS_KEY.
    # Leave empty for no restriction.
    # When enabled, add the authKey parameter to the URL, e.g. authKey=yyyy
    AUTH_KEY=''
    # Run mode; there are three modes: load balancing, automatic failover, and quick response
    # Default is load-balancing mode
    # Allowed values: loadbalance, failover, quickresponse
    MODE = 'loadbalance'
    
    2. Build and deploy to Cloudflare Workers
    npm run build && npm run deploy:wrangler

    Vercel deployment

    Click the button below to deploy to Vercel with one click.

    Deploy with Vercel

    Docker images

    Two registries are supported:

    The following architectures are supported:

    • linux/amd64
    • linux/arm64

    The following tags are available:

    Tag Description Example
    latest Latest latest
    {YYYY-MM-DD} Specific date 2024-06-07
    {sha-hash} Specific commit sha-0891338
    {version} Specific version 1.2.3

    Docker Compose deployment

    Download docker-compose.yml

    wget https://raw.githubusercontent.com/CaoMeiYouRen/rsshub-never-die/refs/heads/master/docker-compose.yml

    Check whether any configuration needs changes

    vim docker-compose.yml  # or your favorite editor

    Edit the RSSHUB_NODE_URLS field in docker-compose.yml to change the RSSHub instance addresses.

    Start it

    docker-compose up -d

    Open http://{Server IP}:3000 in your browser to see the result

    Node.js deployment

    Make sure Node.js and pnpm are installed locally.

    # Download the source code
    git clone https://github.com/CaoMeiYouRen/rsshub-never-die.git  --depth=1
    cd rsshub-never-die
    # Install dependencies
    pnpm i --frozen-lockfile
    # Build the project
    pnpm build
    # Start the project
    pnpm start

    Open http://{Server IP}:3000 in your browser to see the result.

    Edit the RSSHUB_NODE_URLS field in the .env file to change the RSSHub instance addresses.

    👨‍💻 Usage

    Simply replace the original rsshub.app domain with your deployed domain.

    For example:

    If the base path is https://example.vercel.app, then the route

    https://rsshub.app/github/activity/CaoMeiYouRen

    becomes

    https://example.vercel.app/github/activity/CaoMeiYouRen

    Configuration options

    # Port to run on
    PORT=3000
    
    # Timeout (ms)
    # When running on Vercel, also change the maxDuration field in vercel.json (unit: seconds)
    TIMEOUT=60000
    
    NODEJS_HELPERS=0
    # Whether to write logs to files
    LOGFILES=false
    
    # Log level
    # LOG_LEVEL=http
    
    # Maximum request body size (bytes), default 100 MB
    # MAX_BODY_SIZE=104857600
    
    # URLs of RSSHub instances, separated by commas.
    # The official instance https://rsshub.app does not need to be listed; it is added by default.
    RSSHUB_NODE_URLS='https://rsshub.rssforever.com, https://hub.slarker.me, https://rsshub.pseudoyu.com, https://rsshub.ktachibana.party, https://rsshub.woodland.cafe, https://rss.owo.nz, https://yangzhi.app, https://rsshub.henry.wang, https://rss.peachyjoy.top, https://rsshub.speednet.icu'
    
    # Maximum number of instance nodes, default 6
    # Cloudflare Workers limits fetch to 6 concurrent requests and 50 subrequests in total, so quick-response mode supports at most 6 nodes and the other modes at most 50.
    # Other platforms have no such limit; adjust to your situation.
    MAX_NODE_NUM=6
    
    # Cache time (seconds)
    CACHE_MAX_AGE=300
    
    # Access code; note this is not the same as RSSHub's ACCESS_KEY.
    # Leave empty for no restriction.
    # When enabled, add the authKey parameter to the URL, e.g. authKey=yyyy
    AUTH_KEY=''
    
    # Run mode; there are three modes: load balancing, automatic failover, and quick response
    # Load balancing: randomly picks one RSSHub instance per request. The response is returned to the client whether the request succeeds or fails.
    # Automatic failover: randomly picks one RSSHub instance per request. On success the response is returned to the client; on failure the next instance is tried. If all instances fail, an error is returned to the client.
    # In failover mode, retries take time, which increases the overall request time.
    # Quick response: requests several RSSHub instances concurrently and returns the fastest successful response. If all fail, an error is returned to the client.
    # Quick-response mode increases the load on the backing instances.
    # Default is load-balancing mode
    # Allowed values: loadbalance, failover, quickresponse
    MODE = 'loadbalance'
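
    The three modes can be sketched in a few lines of plain Node.js. This is a simplified illustration, not the project's actual code; fetchFn is a stand-in for the HTTP request to one instance.

```javascript
// Load balancing: pick one node at random; its response is returned as-is.
function loadBalance(nodes) {
  return nodes[Math.floor(Math.random() * nodes.length)];
}

// Automatic failover: try nodes in order until one succeeds.
async function failover(nodes, fetchFn) {
  let lastErr;
  for (const node of nodes) {
    try {
      return await fetchFn(node);
    } catch (err) {
      lastErr = err; // this node failed, try the next one
    }
  }
  throw lastErr; // all nodes failed
}

// Quick response: request all nodes concurrently, return the first success.
// Promise.any ignores rejections unless every promise rejects.
async function quickResponse(nodes, fetchFn) {
  return Promise.any(nodes.map((node) => fetchFn(node)));
}
```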

    📚 FAQ

    1. When should I use this project?

    Suitable cases:

    • Load balancing: you have multiple RSSHub instance nodes and want requests randomly forwarded to one of them. Use the loadbalance mode.
    • Automatic failover: you have multiple RSSHub instance nodes and want requests automatically forwarded to the next instance when one fails. Use the failover mode.
    • Quick response: you have multiple RSSHub instance nodes and want to request them concurrently and return the fastest success. Use the quickresponse mode. In this mode, the project randomly picks 5 of the provided RSSHub instance nodes, plus the official instance, for a total of 6 nodes, requests them concurrently, and returns the fastest successful response. Even if some instances are down, the fastest result can still come from the healthy ones.
    • Routes without configuration: all routes that need no configuration options work normally.
    • Reverse proxy: for various reasons you may be unable to reach some RSSHub instances; using this project as a proxy, you can reach working RSSHub instances (a valid domain is required).

    2. When is this project not suitable?

    Unsuitable cases:

    • Routes requiring configuration: routes that need configuration options to work will not function, because the public instances do not provide the relevant configuration.

    🛠️ Development

    pnpm run dev

    🔧 Build

    pnpm run build

    🔍 Lint

    pnpm run lint

    💾 Commit

    pnpm run commit

    👤 Author

    CaoMeiYouRen

    🤝 Contributing

    Contributions, issues, and feature requests are welcome!
    Feel free to check the issues page.
    See the contributing guide for how to contribute or propose new features.

    💰 Support

    Give a ⭐️ if this project helped you. Thank you very much!

    🌟 Star History

    Star History Chart

    📝 License

    Copyright © 2024 CaoMeiYouRen.
    This project is MIT licensed.


    This README was generated with ❤️ by cmyr-template-cli

    Visit original content creator repository https://github.com/CaoMeiYouRen/rsshub-never-die
  • GGJ-2021

    I May Have Lost All Control But At Least I Found Your Bag

    Welcome to your new summer job working at the Lost-And-Found kiosk at the Hotel Leaf, Moon, and Precipitation!
    Guests will occasionally turn up and ask you to find their things.
    All you need to do is retrieve their things from the cupboard behind you at the kiosk.

    If something really wacky and/or uncharacteristic happens, leading to you getting lost, never fear.
    Just use your spacebar, and follow the noise made by the kiosk.
    And if you notice your controls getting lost, well, just keep an ear out for them.

    About this game

    This game was made for the 2021 Global Game Jam.
    https://globalgamejam.org/2021/games/i-may-have-lost-all-control-least-i-found-your-bag-4

    Source code is available at https://github.com/11BelowStudio/GGJ-2021

    Attributions

    Design, programming, textures, and kazoo noises by Rachel Lowe

    Textures made with Paint.NET
    Kazoo noises recorded and exported with Audacity
    Programmed using Jetbrains Rider
    Implemented using Unity 2019.4.18f1

    “Gymnopedie No. 1” Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0 License
    http://creativecommons.org/licenses/by/4.0/

    “Gymnopedie No. 2” Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0 License
    http://creativecommons.org/licenses/by/4.0/

    “Gymnopedie No. 3” Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0 License
    http://creativecommons.org/licenses/by/4.0/

    “Snowdrop” Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 4.0 License
    http://creativecommons.org/licenses/by/4.0/

    Airport Lounge Kevin MacLeod (incompetech.com)
    Licensed under Creative Commons: By Attribution 3.0 License
    http://creativecommons.org/licenses/by/3.0/

    High Five font by Nick Curtis (https://www.1001fonts.com/high-five-font.html)
    Licensed under 1001Fonts Free For Commercial Use License
    https://www.1001fonts.com/licenses/ffc.html

    Syouwa Retro Pop Font by gomarice (https://www.1001fonts.com/syouwa-retro-pop-font.html)
    Licensed under 1001Fonts Free For Commercial Use License
    https://www.1001fonts.com/licenses/ffc.html

    Visit original content creator repository
    https://github.com/11BelowStudio/GGJ-2021

  • Smallbrain

    Smallbrain

    History

    During the pandemic I rediscovered chess and played a lot of games with my friends.
    Then I started to program my first engine python-smallbrain in python, with the help of python-chess.

    I quickly realized how slow python is for chess engine programming, so I started to learn C++.
    My first try was cppsmallbrain, though after some time I found the code very buggy and ugly.
    So I started Smallbrain from scratch; during that time, I also joined Stockfish development.

    After some time I began implementing a NNUE into Smallbrain, with the help of Luecx from Koivisto.
    As of now, Smallbrain has a NNUE trained on 1 billion depth 9 + depth 7 fens and 150 million depth 9 DFRC fens, generated with the built-in data generator, with the CudAD trainer used to ultimately train the network.

    News

    The latest development versions support FRC/DFRC.

    Compile

    Compile it using the Makefile

    make -j
    .\smallbrain.exe bench
    

    Compare the bench with the bench in the commit messages; they should be the same.

    Alternatively, download the latest executable directly from GitHub.
    At the bottom you should be able to find multiple different compiles; choose one that doesn’t crash.

    Ordered by performance, you should try x86-64-avx2 first, then x86-64-modern, and finally x86-64.
    If you want maximum performance, you should compile Smallbrain yourself.

    Elo

    Name Elo +
    Smallbrain 7.0 3537 +13 −13
    Smallbrain dev-221204 3435 +15 −15

    Name Elo +
    Smallbrain 7.0 64-bit 4CPU 3374 +20 −20
    Smallbrain 7.0 64-bit 3309 +15 −15
    Smallbrain 6.0 4CPU 3307 +23 −23
    Smallbrain 6.0 3227 +23 −23
    Smallbrain 5.0 4CPU 3211 +23 −23
    Smallbrain 5.0 3137 +20 −20
    Smallbrain 4.0 2978 +25 −25
    Smallbrain 2.0 2277 +28 −29
    Smallbrain 1.1 2224 +29 −30

    Name Elo +
    Smallbrain 7.0 64-bit 8CPU 3581 +30 −29
    Smallbrain 7.0 64-bit 3433 +14 −14
    Smallbrain 6.0 3336 +17 −17
    Smallbrain 5.0 3199 +18 −18
    Smallbrain 4.0 3005 +18 −18
    Smallbrain 3.0 2921 +20 −20
    Smallbrain 1.1 2174 +20 −20

    no Program Elo + Games Score Av.Op. Draws
    32 Smallbrain 7.0 avx2 3445 6 6 10000 46.7% 3469 63.0%
    34 Smallbrain 6.0 avx2 3345 7 7 9000 52.1% 3331 49.9%

    no Program Elo + Games Score Av.Op. Draws
    217 Smallbrain 7.0 x64 1CPU 3296 14 14 1596 50.7% 3291 63.4%
    271 Smallbrain 6.0NN x64 1CPU 3203 16 16 1300 42.8% 3258 51.2%

    UCI settings

    • Hash
      The size of the hash table in MB.
    • Threads
      The number of threads used for search.
    • EvalFile
      The neural net used for the evaluation,
      currently only default.nnue exists.
    • SyzygyPath
      Path to the syzygy files.
    • UCI_ShowWDL
      Shows the WDL score in the UCI info.
    • UCI_Chess960
      Enables Chess960 support.

    Engine specific uci commands

    • go perft <depth>
      calculates perft from a set position up to depth.
    • print
      prints the current board
    • eval
      prints the evaluation of the board.

    CLI commands

    • bench
      Starts the bench.
    • perft fen=<fen> depth=<depth>
      fen and depth are optional.
    • -eval fen=<fen>
    • -version/--version/--v/-v
      Prints the version.
    • -see
      Calculates the static exchange evaluation of the current position.
    • -generate
      Starts the data generation.
    • -tests
      Starts the tests.

    Features

    • Evaluation
      • As of v6.0 the NNUE training dataset was regenerated using depth 9 selfplay games + random 8 piece combinations.

    Datageneration

    • Starts the data generation.

      -generate
      
    • Specify the number of threads to use.
      default: 1

      threads=<int>
      
    • If you want to start from a book instead of using random playout.
      default: “”

      book=<path/to/book>
      
    • Path to TB, only used for adjudication.
      default: “”

      tb=<path/to/tb>
      
    • Analysis depth, values between 7-9 are good.
      default: 7

      depth=<int>
      
    • Analysis nodes, values between 2500-10000 are good.
      default: 0

      nodes=<int>
      
    • The amount of hash in MB. This gets multiplied by the number of threads.
      default: 16

      hash=<int>
      
    • Example:

    .\smallbrain.exe -generate threads=30 book=E:\Github\Smallbrain\src\data\DFRC_openings.epd tb=E:/Chess/345
    

    .\smallbrain.exe -generate threads=30 depth=7 hash=256 tb=F:\syzygy_5\3-4-5
    .\smallbrain.exe -generate threads=30 depth=9 tb=H:/Chess/345
    .\smallbrain.exe -generate threads=30 nodes=5000 tb=H:/Chess/345
    

    Acknowledgements

    I’d like to thank the following people for their help and support:

    • A big thanks to Luecx for his amazing CudAd trainer and his help with the NNUE implementation.
    • Andrew Grant for the OpenBench platform https://github.com/AndyGrant/OpenBench
    • Morgan Houppin, author of Stash https://github.com/mhouppin/stash-bot for his debug sessions.
    • Various other people from Stockfish discord for their help.
    • Chess.com for including Smallbrain in the Computer Chess Championship (CCC)
    • TCEC for inviting Smallbrain.

    Engines

    The following engines have taught me a lot about chess programming and I’d like to thank their authors for their work:

    Tools

    Included:
    The following parts of the code are from other projects, I’d like to thank their authors for their work and their respective licenses remain the same:

    External:

    Visit original content creator repository
    https://github.com/Disservin/Smallbrain

  • perltidy-more

    perltidy-more

    Perltidy extension for Visual Studio Code.

    More perltidy than the perltidy extension by sfodje.

    This perltidy has some extended features:

    • It has github repository (now sfodje perltidy has repository too).
    • It can format large perl files (in my case sfodje extension had 10 or 20 KB file limit. I don’t know why it happened).
    • It can format selected text.
    • Partial support for virtual filesystems like SSH FS (without support of .perltidyrc from virtual fs).
    • Option to enable perltidy only with existing .perltidyrc in project.
    • FormatOnType support (you can enable it in settings).
    • Support for relative path to perltidy binary. Set perltidy-more.executable to relative path and it will be search it in workspace folder.

    Alternatives

    1. sfodje perltidy.
    2. henriiik’s intelligence extension (it can format, but I couldn’t get it to work).

    Attention

    VS Code can have multiple formatting extensions installed for the same language, but only one of them (selected by some magical “score”) will be used for formatting via the format shortcut.

    If this extension does not work:

    1. Try to use it with command (F1 or Ctrl+Shift+P: perltidy).
    2. Try to disable other perl formatting extensions.
    3. Try to install perltidy binary from your OS repository.

    FAQ

    1. Q: I’d like to use .perltidyrc specific to different projects.

    A: Use the “perltidy-more.profile” option and set it to “…/.perltidyrc”. Three dots is a perltidy-specific option indicating that the file should be searched for starting in the current directory and working upwards. This makes it easier to have multiple projects, each with its own .perltidyrc in its root directory.

    2. Q: I’d like to run perltidy in docker container.

    A: Use shell script like this and set it as perltidy-more.executable in options

    #!/usr/bin/env sh
    exec docker run --rm -i -v "$PWD":/app -w /app avastsoftware/perltidy "$@"
    

    Visit original content creator repository
    https://github.com/kak-tus/perltidy-more