Author: 35h0p9no84bg

  • ipyvolume

    ipyvolume

Join the chat at https://gitter.im/maartenbreddels/ipyvolume

Try it out on mybinder.

    3d plotting for Python in the Jupyter notebook based on IPython widgets using WebGL.

    Ipyvolume currently can

    • Do (multi) volume rendering.
    • Create scatter plots (up to ~1 million glyphs).
    • Create quiver plots (like scatter, but with an arrow pointing in a particular direction).
    • Render isosurfaces.
    • Do lasso mouse selections.
    • Render in the Jupyter notebook, or create a standalone html page (or snippet to embed in your page).
    • Render in stereo, for virtual reality with Google Cardboard.
• Animate in d3 style, for instance if the x coordinates or color of a scatter plot change.
• Animations/sequences: all scatter/quiver plot properties can be lists of arrays, which can represent time snapshots.
    • Stylable (although still basic)
    • Integrates with

Ipyvolume will probably, but does not yet:

    • Render labels in latex.
    • Show a custom popup on hovering over a glyph.

    Documentation

Documentation is generated at Read the Docs.

    Screencast demos

    Animation

    screencast

    (see more at the documentation)

    Volume rendering

    screencast

    Glyphs (quiver plots)

    screencast quiver

    Installation

    If you want to use Jupyter Lab, please use version 3.0.

    Using pip

Advice: make sure you use conda or virtualenv. If you are not a root user and want to use pip's --user argument, be aware that this exposes the installation to all Python environments, which is bad practice; make sure you know what you are doing.

    $ pip install ipyvolume
    

    Conda/Anaconda

    $ conda install -c conda-forge ipyvolume
    

    Pre-notebook 5.3

If you are still using an old notebook version, ipyvolume and its dependent extension (widgetsnbextension) need to be enabled manually. If unsure, check which extensions are enabled:

    $ jupyter nbextension list
    

    If not enabled, enable them:

    $ jupyter nbextension enable --py --sys-prefix ipyvolume
    $ jupyter nbextension enable --py --sys-prefix widgetsnbextension
    

    Pip as user: (but really, do not do this)

You have been warned: do this only if you know what you are doing. It might haunt you in the future, and now is a good time to consider learning virtualenv or conda.

    $ pip install ipyvolume --user
    $ jupyter nbextension enable --py --user ipyvolume
    $ jupyter nbextension enable --py --user widgetsnbextension
    

    Developer installation

    $ git clone https://github.com/maartenbreddels/ipyvolume.git
    $ cd ipyvolume
    $ pip install -e . notebook jupyterlab
    $ (cd js; npm run build)
    $ jupyter nbextension install --py --overwrite --symlink --sys-prefix ipyvolume
    $ jupyter nbextension enable --py --sys-prefix ipyvolume
    # for jupyterlab (>=3.0), symlink share/jupyter/labextensions/bqplot-image-gl
    $ jupyter labextension develop . --overwrite
    

    Developer workflow

    Jupyter notebook (classical)

    Note: There is never a need to restart the notebook server, nbextensions are picked up after a page reload.

    Start this command:

    $ (cd js; npm run watch)
    

    It will

• Watch for changes in the source code and run the TypeScript compiler to transpile the src dir to the lib dir.
    • Watch the lib dir, and webpack will build (among other things), ROOT/ipyvolume/static/index.js.

    Refresh the page.

    Visit original content creator repository https://github.com/widgetti/ipyvolume
  • unemployment

    Unemployment: Course Portal

    This repository is the portal for the course “Unemployment” taught by Pascal Michaillat at UC Santa Cruz. The course ID is ECON 182. The course portal contains the syllabus, provides a discussion forum, and hosts other course resources.

    Course webpage

    The course materials are available at https://pascalmichaillat.org/v/.

    Portal content

    • Syllabus for Winter 2025
    • Presentation schedule for Winter 2025
    • Lecture handouts – The folder contains handouts distributed in lecture. The handouts are designed to help you develop your research ideas and collect questions about the lecture videos.
• Discussion forum – This collaborative discussion forum is designed to get you help quickly and efficiently. You can ask and answer questions, share updates, have open-ended conversations, and follow course announcements.
    • Reading material – The folder contains book chapters and articles that may be hard to find online.
    • Lecture material – The folder contains discussions from lecture.
    • Section material – The folder contains material from section.
    • Presentations – The folder contains all the student presentations given during the quarter, and some presentation templates and examples.
    • LaTeX code for presentation slides – Complete code to produce a presentation with LaTeX. Just upload the files to Overleaf and start writing your slides!
    • LaTeX code for research paper – Complete code to produce a research paper with LaTeX. Just upload the files to Overleaf and start writing your paper!

    License

    This repository is licensed under the Creative Commons Attribution 4.0 International License.

    Visit original content creator repository
    https://github.com/pmichaillat/unemployment

  • TextProcessing

    Icon

    Project Overview

    TLDR; Text extraction, transcription, punctuation restoration, translation, summarization and text to speech

The goal of this project is to extend the functionalities of Fabric. I’m particularly interested in building pipelines using utilities like yt as a source and chaining them with the | operator on the command line.

    However, a major limitation exists: all operations are constrained by the LLM context. For extracting information from books, lengthy documents, or long video transcripts, content may get truncated.

To address this, I started working on adding a summarization step before applying a fabric template, based on the document length. Additionally, I explored capabilities like transcribing, translating, and listening to the pipeline result or saving it as an audio file for later consumption.

    Examples

    Listen to the condensed summary of a long Youtube video

    yt --transcript url | tp --cb | tts

    Read a web page summary

    tp --ebullets https://en.wikipedia.org/wiki/Text_processing

    Listen to the condensed French summary of a long English Youtube video

    yt --transcript --lang en url | tp --cb --tr fr | tts

    Save a book’s wisdom as an audio file

    tp my_book.txt --eb | fabric --p extract_wisdom | tts --o my_book_wisdom.mp3

    Say “hello world!” in Chinese

    echo "Hello world!" | tp --tr zh | tts

    Translate a document to Spanish

    tp doc_fr.txt --tr es > doc_es.txt

Generate a transcript in any language from an mp4 file, e.g. from English to French

    tp en.mp4 --tr fr

Listen to a French audio file in Spanish

    tp fr.mp3 --tr es | tts

Convert a Spanish audio book to a French audio book… and make an English transcript

    tp es.mp3 --tr fr | tts --o fr.mp3 | tp fr.mp3 --tr en --o tr_en.txt

    Extract ideas from an audio file, save them in a French text file

    tp en.mp3 | fabric --p extract_ideas | tp --tr fr --o idées.txt

    Perform OCR

    tp image.png

Extract text from a Word file

    tp document.docx

    Text Processing (tp)

    Input (text or audio file)

tp receives input from stdin or as its first command-line argument. It accepts:

    • Text.
    • File path. Supported formats are: .aiff, .bmp, .cs, .csv, .doc, .docx, .eml, .epub, .flac, .gif, .htm, .html, .jpeg, .jpg, .json, .log, .md, .mkv, .mobi, .mp3, .mp4, .msg, .odt, .ogg, .pdf, .png, .pptx, .ps, .psv, .py, .rtf, .sql, .tff, .tif, .tiff, .tsv, .txt, .wav, .xls, .xlsx

    tp accepts unformatted content, such as automatically generated YouTube transcripts. If the text lacks punctuation, it restores it before further processing, which is necessary for chunking and text-to-speech operations.

    Transcription

    Converts audio and video files to text using Whisper.

    Summarization

    The primary aim is to summarize books, large documents, or long video transcripts using an LLM with an 8K context size. Various summarization levels are available:

Extended Bullet Summary (--ebullets, --eb)

    • Splits text into chunks.
    • Summarizes all chunks as bullet points.
    • Concatenates all bullet summaries.

    The goal is to retain as much information as possible.

    Condensed Bullet Summary (--cbullets, --cb)

    Executes as many extended bullet summary phases as needed to end up with a bullet summary smaller than an LLM context size.
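The condense loop described above can be sketched as a chunk-and-summarize skeleton. This is an illustrative sketch with hypothetical helper names, not tp's actual code; in the real tool an LLM call plays the role of the `summarize_chunk` stub:

```python
def chunk(text, max_chars):
    """Split text into pieces of at most max_chars characters (naive split)."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def condensed_summary(text, summarize_chunk, max_chars=8000):
    """Repeat extended-bullet passes until the result fits one context window."""
    while len(text) > max_chars:
        bullets = [summarize_chunk(piece) for piece in chunk(text, max_chars)]
        condensed = "\n".join(bullets)
        if len(condensed) >= len(text):  # summarizer failed to shrink; avoid looping forever
            break
        text = condensed
    return text
```

Each pass summarizes every chunk into bullets and concatenates them; passes repeat until the concatenation fits within the context size.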

    Textual Summary (--text, --t)

    A simple summarization that does not rely on bullet points.

    Translation (--translate, --tr)

Translates the output text to the desired language. Use a two-letter code such as en or fr.

    Usage

    usage: tp [-h] [--ebullets] [--cbullets] [--text] [--lang LANG] [--translate TRANSLATE] [--output_text_file_path OUTPUT_TEXT_FILE_PATH] [text_or_path]
    
    tp (text processing) provides transcription, punctuation restoration, translation and summarization from stdin, text, url, or file path. Supported file formats are: .aiff, .bmp, .cs, .csv, .doc, .docx, .eml, .epub, .flac, .gif, .htm, .html, .jpeg, .jpg, .json, .log, .md, .mkv, .mobi, .mp3, .mp4, .msg, .odt, .ogg, .pdf, .png, .pptx, .ps, .psv, .py, .rtf, .sql, .tff, .tif, .tiff, .tsv, .txt, .wav, .xls, .xlsx
    
    positional arguments:
      text_or_path          plain text; file path; file url
    
    options:
      -h, --help            show this help message and exit
      --ebullets, --eb      Output an extended bullet summary
      --cbullets, --cb      Output a condensed bullet summary
      --text, --t           Output a textual summary
      --lang LANG, --l LANG
                            Forced processing language. Disables the automatic detection.
      --translate TRANSLATE, --tr TRANSLATE
                            Language to translate to
      --output_text_file_path OUTPUT_TEXT_FILE_PATH, --o OUTPUT_TEXT_FILE_PATH
                            output text file path
    

    Text To Speech (tts)

    Listen to the pipeline result or save it as an audio file to listen later.

    tts can also read text files, automatically detecting their language.

    usage: tts.py [-h] [--output_file_path OUTPUT_FILE_PATH] [--lang LANG] [input_text_or_path]
    
    tts (text to speech) reads text aloud or to mp3 file
    
    positional arguments:
      input_text_or_path    Text to read or path of the text file to read.
    
    options:
      -h, --help            show this help message and exit
      --output_file_path OUTPUT_FILE_PATH, --o OUTPUT_FILE_PATH
                            Output file path. If none, read aloud.
      --lang LANG, --l LANG
                            Forced language. Uses language detection if not provided.
    

    Environment setup

    .env file

    GROQ_API_KEY=gsk_
    LITE_LLM_URI='http://localhost:4000/'
    SMALL_CONTEXT_MODEL_NAME="groq/llama3-8b-8192"
    SMALL_CONTEXT_MAX_TOKENS=8192
    

    script short hand

    • Make script executable chmod +x tts.py

• Create a symlink: link the script into a directory that’s in your PATH, using the absolute path so the link resolves correctly: sudo ln -s "$(pwd)/tts.py" /usr/local/bin/tts

    Visit original content creator repository https://github.com/Gauff/TextProcessing
  • tiny-timer

    tiny-timer


    Small countdown timer and stopwatch module.

    Installation

    npm:

    $ npm install tiny-timer

    Yarn:

    $ yarn add tiny-timer

    Example

    const Timer = require('tiny-timer')
    
    const timer = new Timer()
    
    timer.on('tick', (ms) => console.log('tick', ms))
    timer.on('done', () => console.log('done!'))
    timer.on('statusChanged', (status) => console.log('status:', status))
    
    timer.start(5000) // run for 5 seconds

    Usage

    timer = new Timer({ interval: 1000, stopwatch: false })

    Optionally set the refresh interval in ms, or stopwatch mode instead of countdown.

timer.start(duration [, interval])

    Starts timer running for a duration specified in ms. Optionally override the default refresh interval in ms.

    timer.stop()

    Stops timer.

    timer.pause()

    Pauses timer.

    timer.resume()

    Resumes timer.

    Events

    timer.on('tick', (ms) => {})

    Event emitted every interval with the current time in ms.

    timer.on('done', () => {})

    Event emitted when the timer reaches the duration set by calling timer.start().

    timer.on('statusChanged', (status) => {})

    Event emitted when the timer status changes.

    Properties

    timer.time

    Gets the current time in ms.

    timer.duration

    Gets the total duration the timer is running for in ms.

    timer.status

    Gets the current status of the timer as a string: running, paused or stopped.

    Visit original content creator repository https://github.com/mathiasvr/tiny-timer
• Install-DeepSeek-R1-with-Ollama-on-Ubuntu-Server---Automated-Bash-Script

Install-DeepSeek-R1-with-Ollama-on-Ubuntu-Server---Automated-Bash-Script

    1. Open Terminal:
      Access the terminal on your Ubuntu server. You can use SSH if you are connecting to a remote server.

    2. Install Git (If Not Already Installed):
      Make sure Git is installed on your system. If it is not, you can install it with the following commands:
      sudo apt update
      sudo apt install git -y

    3. Clone the GitHub Repository:
      Use the git clone command to download the repository containing the script. Replace REPOSITORY_URL with the appropriate GitHub repository URL. For example:
git clone https://github.com/jamaludin1991/Install-DeepSeek-R1-with-Ollama-on-Ubuntu-Server---Automated-Bash-Script.git
      After running this command, a new folder with the repository name will be created in the current directory.

    4. Navigate to the Repository Directory:
      Change into the directory of the cloned repository:
      cd repo-name

    5. Make the Script Executable:
      Find the script file you want to run (e.g., install_deepseek_r1.sh) and make it executable:
      chmod +x install_deepseek_r1.sh

    6. Run the Script with Sudo:
      Execute the script using sudo to provide administrative privileges:
      sudo ./install_deepseek_r1.sh

    7. Wait for the Process to Complete:
      The script will start running. Wait until all steps are completed. You will see output in the terminal indicating the progress of the installation.

    8. Verify the Installation:
      After the script finishes, you can verify that the DeepSeek-R1 model has been successfully downloaded by running:
      ollama list

    9. Running the Model:
      To run the DeepSeek-R1 model, use the following command:
      ollama run deepseek-r1

    Additional Notes:
    Ensure you have a stable internet connection during the installation process, as the script will download Ollama and the DeepSeek-R1 model.
    If you encounter errors during installation, check the error messages in the terminal for more information about what might have gone wrong.
    If the DeepSeek-R1 model is not available, make sure to check the correct model name using the command ollama list.

    Visit original content creator repository
https://github.com/jamaludin1991/Install-DeepSeek-R1-with-Ollama-on-Ubuntu-Server---Automated-Bash-Script

  • hidden-markov-model

    Hidden Markov Model (HMM) Trading Project

https://colab.research.google.com/drive/1r7XeSxH5v--EfhCDpjZmJ_mIWz9IfoyM#scrollTo=1p-u99S5ZxEf

    Overview

    This repository contains implementations of several Hidden Markov Models (HMM) designed to analyze trading data with various levels of indicator integration and correction methods. The models achieve different performance accuracies, with some versions reaching up to 97% accuracy based on backtesting metrics.

    Installation

    1. Clone the repository:

      git clone https://github.com/rainerigius/hidden-markov-model.git
    2. Install dependencies: To ensure proper functionality, install the required packages:

      pip install -r requirements.txt

    Essential Packages

    • hmmlearn: For training and evaluating Hidden Markov Models.
    • numpy, pandas: For data manipulation and numerical operations.
    • joblib: For saving and loading model files.
    • scikit-learn: For data preprocessing, scaling, and other utility functions.
    • matplotlib, seaborn: For data visualization.

    Files and Scripts

    Main HMM Scripts

    The repository includes several Python scripts, each implementing a different HMM with varying configurations of indicators and metrics:

    • hmm_87%30_ind+_correction.py: Implements an HMM model with 87% accuracy, utilizing 30 indicators and a correction method.
    • hmm_87%_with_30_indicators.py: HMM model achieving 87% accuracy with 30 indicators.
    • hmm_88%.py: An HMM model with a slightly higher accuracy of 88%.
    • hmm_97%_updated_metrics.py: A refined version of the 97% accuracy model, with updated metrics and performance improvements.
    • hmm_97%.py: A previous version of the 97% accuracy model.
    • hmm_d_97%.py: Another version of the 97% accuracy model, potentially using different datasets or indicators.

    Old Template

    • oldtemplate.py: The original template for implementing HMM models, which provides a foundational structure for building more advanced models.

    Additional Tools and Data Files

    • liquidity.py: Script that calculates liquidity metrics for trading data.
    • oos_test.py: Out-of-sample testing script for evaluating model performance on unseen data.
    • state_transition_diagram: Visualization of the state transitions for the HMM models.

    Datasets

    The repository includes a few CSV files that contain sample data:

    • btc.csv, BTC_1H.csv, csv/BTC_2H.csv, etc.: Bitcoin price data at various timeframes.
    • data/bitcoin_state_changes.csv: Data capturing state transitions for Bitcoin, likely used in the HMM training process.

    Model Files

    Pre-trained models are saved in the models directory with joblib:

    • model_hmm_85%_30ind_updated.joblib: A pre-trained HMM model with 85% accuracy using 30 indicators.
    • model_hmm_88%.joblib: A pre-trained HMM model with 88% accuracy.
    • model_hmm_98%.joblib: A highly accurate pre-trained HMM model with 98% accuracy.

    How to Use the HMM Models

    Each model script can be executed directly or used as part of a larger analysis pipeline. For example, to run the hmm_97%_updated_metrics.py model, execute:

    python hmm_97%_updated_metrics.py

    Results are printed to the console or saved in designated output files for review.

    Notes on HMMs and Project Structure

    Hidden Markov Models (HMMs) are statistical models that assume the system being modeled is a Markov process with hidden states. In this project, the HMMs are trained on historical trading data, aiming to predict price movements based on various indicators. Each HMM script uses different sets of indicators and configurations to optimize performance. Accuracy percentages indicate the effectiveness of each model based on backtesting metrics.
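To make the "Markov process with hidden states" concrete, here is a minimal forward-algorithm sketch in NumPy. The parameters are made up for illustration and are not the project's trained models; libraries like hmmlearn implement this recursion internally:

```python
import numpy as np

# Toy 2-state HMM; in a trading context the hidden states could represent
# market regimes. All parameters below are illustrative.
A  = np.array([[0.9, 0.1],   # state transition matrix
               [0.2, 0.8]])
B  = np.array([[0.7, 0.3],   # emission matrix: P(observation | state)
               [0.1, 0.9]])
pi = np.array([0.6, 0.4])    # initial state distribution

def forward_likelihood(obs):
    """Likelihood of an observation sequence via the forward recursion."""
    alpha = pi * B[:, obs[0]]           # alpha_1(i) = pi_i * B[i, o_1]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # alpha_t(j) = sum_i alpha_{t-1}(i) * A[i,j] * B[j, o_t]
    return float(alpha.sum())
```

The forward recursion computes the sequence likelihood in O(T * N^2) time instead of summing over all N^T hidden-state paths.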

    Visit original content creator repository
    https://github.com/rainerigius/hidden-markov-model

  • Principles-of-Big-Data-Management

    Principles of Big Data Management : Disease Analysis

    1. About the Project

We chose ‘Diseases’ as our topic for big data analysis. Based on Twitter tweets, we produced some interesting analyses of diseases using thousands of tweets posted by different people. First we collected tweets from the Twitter API based on keywords related to disease. After that, we analyzed the collected data and, using the analysis, wrote some interesting SQL queries to produce proper results for the analysis.

    2. System Architecture

First we generated credentials for accessing Twitter. Using these credentials, we wrote a Python program to collect tweets based on keywords related to disease. Tweets were stored in a text file in JSON format. This JSON file is then fed to the SQL queries for analysis with Spark, using a Scala program with the queries in IntelliJ.

    3. Analyzing Twitter Data

    Query 1: Popular Tweets on Different Diseases

In this query, we fetch the diseases and their tweet counts from the file. The query is written using RDDs: tweets are filtered by disease hashtags, and the resulting counts are printed.
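The project implements this query in Scala with Spark RDDs; the same filter-and-count logic can be sketched in plain Python over the JSON tweets file. This is illustrative only; the field names follow the Twitter v1.1 tweet format, and the keyword set is a made-up example:

```python
import json
from collections import Counter

# Example disease keyword set; the project used its own list of disease terms.
DISEASES = {"flu", "malaria", "dengue", "ebola", "zika"}

def disease_hashtag_counts(tweet_lines):
    """Count how often each disease keyword appears as a hashtag."""
    counts = Counter()
    for line in tweet_lines:
        tweet = json.loads(line)
        hashtags = tweet.get("entities", {}).get("hashtags", [])
        for tag in hashtags:
            text = tag["text"].lower()
            if text in DISEASES:
                counts[text] += 1
    return counts
```

Spark parallelizes the same idea across partitions with filter and reduceByKey; this sequential version only shows the per-tweet logic.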

    Query 2: Countries that tweeted more on Diseases (Google Maps)

In this query, the top countries that tweeted most on diseases are fetched. First the locations are extracted from the tweets file and the counts are displayed as shown below. The data is stored in .csv format, and the file is read for visualization on Google Maps.

    Query 3: Popular Hashtags

In this query, we took the popular-hashtags text file from Blackboard and performed a JOIN operation with the hashtags from the diseases tweets file. The fetched data is stored in .csv format for visualization.

    Query 4: Most Popular Tweeted Words

In this query, the most frequently occurring words in tweets on diseases are fetched. Visualization of the fetched data is done dynamically.

    Query 5: On which day of week, more tweets are done on diseases

In this query, data is fetched on which day of the week the most tweets on diseases are posted. Initially created_at is extracted from the tweets file, and tweets are counted for each day of the week.

    Query 6: Top 10 Users Tweeted on Diseases

In this query we fetch the top 10 users who tweeted most on diseases. The query is written using RDDs: for each disease the top tweeting users are fetched, and a UNION of the RDDs combines all the diseases. The results are stored in a .csv file for visualization.

    Query 7: Follower Id’s count using Twitter API

The Twitter GET followers/ids API is used. A query to display screen names from the tweets file is written; when the query is executed, a table of screen names is displayed.

val request = new HttpGet("https://api.twitter.com/1.1/followers/ids.json?cursor=-1&screen_name=" + name)

First the user is given a choice to enter a screen name. Once the screen name has been entered, the follower ids are fetched.

Once the screen name RevistaCOFEPRIS is entered, the follower id count is displayed as shown below.

    4. Related Links

    Phase-1 Document: https://github.com/cmoulika009/Principles-of-Big-Data-Management/blob/master/PB%20Phase-1-%20Team%2011/PRINCIPLES%20OF%20BIG%20DATA%20MANAGEMENT%20PHASE%201.pdf

    Phase-2 Document: https://github.com/cmoulika009/Principles-of-Big-Data-Management/blob/master/PB%20Phase-2-%20Team%2011/PB%20Phase-2%20Team-11.pdf

    Final Project Document: https://github.com/cmoulika009/Principles-of-Big-Data-Management/blob/master/PB%20Phase-3-%20Team-11/PB%20Phase-3%20Team-11.pdf

    Tweet Location: https://www.dropbox.com/s/04zebrisw6jm6n0/Disease_Tweets.json?dl=0

    Youtube Video: https://youtu.be/dRO-2chnycM

    Visit original content creator repository https://github.com/cmoulika009/Principles-of-Big-Data-Management
  • moonsideProductions

I am no longer actively maintaining this theme; I will try to check out pull requests if possible

    hugo-uno

A responsive Hugo theme with awesome fonts, charts, and light-box galleries; the theme is based on Uno for Ghost.
An example site is available at hugouno.fredrikloch.me

    A Swedish translation is available in the branch feature/swedish

    Usage

    The following is a short tutorial on the usage of some features in the theme.
    Configuration

    To take full advantage of the features in this theme you can add variables to your site config file, the following is the example config from the example site:

    languageCode = "en-us"
    contentdir = "content"
    publishdir = "public"
    builddrafts = false
    baseurl = "http://fredrikloch.me/"
    canonifyurls = true
    title = "Fredrik Loch"
    author = "Fredrik Loch"
    copyright = "This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License."
    
    
    [indexes]
       category = "categories"
       tag = "tags"
    [Params]
      AuthorName = "Fredrik"
      github = "Senjindarashiva"
      bitbucket = "floch"
      flickr = "senjin"
      twitter = "senjindarshiva"
      email = "mail@fredrikloch.me"
      description = ""
      cv = "/pages/cv"
      legalnotice = "/pages/legal-notice"
      muut = "fredrikloch"
      linkedin = "fredrikloch"
      cover = "/images/background-cover.jpg"
      logo = "/img/logo-1.jpg"
    

    If you prefer to use discourse replace the “muut” line with the following(remember the trailing slash)

      discourse = "http://discourse.yoursite.com/"
    

    If you prefer to use disqus replace the “muut” line with the following

      disqus = "disqusUsername"
    

    Charts

    To create charts I use Chart.js which can be configured through basic js files. To add a chart to a post use the following short-code:

    {{% chart id="basicChart" width=860 height=400 js="../../js/chartData.js" %}}
    

    Where the javascript file specified contains the data for the chart, a basic example could look like this:

    
    $(function(){
      var chartData = {
          labels: ["Jekyll", "Hugo", "Wintersmith"],
          datasets: [
              {
                  label: "Mean build time",
                  fillColor: "#E1EDD7",
                  strokeColor: "#E1EDD7",
                  highlightFill: "#C1D8AB",
                  highlightStroke: "#C1D8AB",
                  data: [784, 100, 5255]
              }
          ]
      };
    
      var ctx = $('#basicChart').get(0).getContext("2d");
      var myBarChart = new Chart(ctx).Bar(chartData, {
          scaleBeginAtZero : true,
          responsive: true,
          maintainAspectRatio: false,
        }
      );
      })
    

A running example can be found in my comparison between Jekyll, Hugo and Wintersmith
    Gallery

    To add a gallery to the site we use basic html together with lightGallery to create a responsive light-box gallery.

<ul style="list-style: none;" id="lightGallery">
    <li data-src="pathToImg.jpg">
        <img src="pathToThumb.jpg">
    </li>
    <li data-src="pathToImg.jpg">
        <img src="pathToThumb.jpg">
    </li>
</ul>
    
    <script src=../../js/lightGallery.min.js></script>
    <script>
        $("#lightGallery").lightGallery();
    </script>
    

    Features

    Cover page
    The landing page for Hugo-Uno is a full screen ‘cover’ featuring your avatar, blog title, mini-bio and cover image.

    Built with SASS, using BEM
    If you know HTML and CSS making modifications to the theme should be super simple.

    Responsive
    Hugo-Uno looks great on all devices, even those weird phablets that nobody buys.

Muut comments
Muut integration allows users to comment on your posts.

    Font-awesome icons
    For more information on available icons: font-awesome

    No-JS fallback
    While JS is widely used, some themes and websites don’t provide fallback for when no JS is available (I’m looking at you Squarespace). If for some weird reason a visitor has JS disabled your blog will still be usable.

    License

    Creative Commons Attribution 4.0 International

    Development

    In order to develop or make changes to the theme you will need to have the sass compiler and bourbon both installed.

To check the installation, run the following commands from a terminal; you should see CLI output like the following, though your version numbers may vary.

    ** SASS **

    sass -v
    > Sass 3.3.4 (Maptastic Maple)

    If for some reason SASS isn’t installed then either follow the instructions from the Sass install page or run bundle install in the project root.

    ** Bourbon **

    bourbon help
    > Bourbon 3.1.8

    If Bourbon isn’t installed follow the installation instructions on the Bourbon website or run bundle install in the project root.

Once installation is verified, we need to install the Bourbon mixins into the scss folder.

    From the project root run bourbon install with the correct path

    bourbon install --path static/scss
    > bourbon files installed to static/scss/bourbon/

    Now that we have the bourbon mixins inside of the scss src folder we can now use the sass cli command to watch the scss files for changes and recompile them.

    sass --watch static/scss:static/css
    >>>> Sass is watching for changes. Press Ctrl-C to stop.

    To minify the css files use the following command in the static folder

    curl -X POST -s --data-urlencode 'input@static/css/uno.css' http://cssminifier.com/raw > static/css/uno.min.css

    Visit original content creator repository
    https://github.com/RMBLRX/moonsideProductions

  • LCL

Login Control for Oracle

Oracle SQL and PL/SQL solution to control logins

    Why?

    I have two reasons:

1. to refuse unauthorized logins, and
    2. log the attempts

    How?

    There is a logon trigger which checks the

    • Oracle user
    • OS user
    • IP address of the client
    • Program / Application

If the login is allowed, it proceeds; if not, the attempt’s data is logged and an error is raised.

For users with the DBA role, login is allowed at all times, even if the trigger is invalid or raises an error.

There is a table to control the logins:

      ORACLE_USER             VARCHAR2 (   400 )
      OS_USER                 VARCHAR2 (   400 )
      IP_ADDRESS              VARCHAR2 (   400 )
      PROGRAM                 VARCHAR2 (   400 )
      ENABLED                 CHAR     (     1 )     Y or N

This table contains the valid user/client/program combinations.
The column values are matched with LIKE, so they can be patterns;
i.e. '%' means "every" user/IP address/program etc.
But '%','%','%','%','Y' means anybody from anywhere, and this overrides any other rules!
The refused logon data will be logged into the LCL_LOG table.
There is also an ENABLED column in LCL_TABLE, so you can disable logins at any time by setting this value to 'N'.
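A minimal sketch of such a logon trigger follows. This is illustrative only: the LCL_LOG column names and the USERENV attribute used for the program name are assumptions, and the real definitions live in the install script.

```sql
CREATE OR REPLACE TRIGGER trg_lcl_logon
AFTER LOGON ON DATABASE
DECLARE
    v_cnt  PLS_INTEGER;
BEGIN
    -- count enabled rules matching this session (column values are LIKE patterns)
    SELECT COUNT(*)
      INTO v_cnt
      FROM lcl_table
     WHERE enabled = 'Y'
       AND SYS_CONTEXT('USERENV', 'SESSION_USER') LIKE oracle_user
       AND SYS_CONTEXT('USERENV', 'OS_USER')      LIKE os_user
       AND SYS_CONTEXT('USERENV', 'IP_ADDRESS')   LIKE ip_address
       AND SYS_CONTEXT('USERENV', 'MODULE')       LIKE program;  -- assumed attribute

    IF v_cnt = 0 THEN
        -- log the refused attempt (assumed LCL_LOG columns), then block the session
        INSERT INTO lcl_log (oracle_user, os_user, ip_address, program, attempt_date)
        VALUES (SYS_CONTEXT('USERENV', 'SESSION_USER'),
                SYS_CONTEXT('USERENV', 'OS_USER'),
                SYS_CONTEXT('USERENV', 'IP_ADDRESS'),
                SYS_CONTEXT('USERENV', 'MODULE'),
                SYSDATE);
        COMMIT;
        RAISE_APPLICATION_ERROR(-20001, 'Login refused by LCL');
    END IF;
END;
/
```

Note that Oracle does not block sessions holding the ADMINISTER DATABASE TRIGGER privilege (typically DBA-role users) when a logon trigger raises an error or is invalid, which is what makes the DBA exception described above possible.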

    The whole solution is not too complicated, so see the install script file for more details!

    Visit original content creator repository
    https://github.com/frankiechapson/LCL

  • QRealTime

    Welcome to QRealTime Plugin

    flowchart

    QRealTime Plugin allows you to:

    • Create new survey form directly from GIS layers in QGIS
    • Synchronise data from ODK Aggregate, KoboToobox, and ODK Central servers
    • Import data from server

    Getting Started

    Installation

    Prerequisites:
    • QGIS installed

    Installation steps:

    1. Open Plugin Manager and search for QRealTime plugin and install it.
2. Restart QGIS so that the changes in the environment take effect.

    Configuration:

From the main menu choose Plugins -> QRealTime -> QRealTime Setting.
Here you have three tabs, one each for Aggregate, KoboToolbox, and Central. Choose one of the tabs and enter the url (required).
For the Kobo server, the url can be:
https://kobo.humanitarianresponse.info/ or https://kf.kobotoolbox.org/ for humanitarian and researcher accounts respectively.
    Other fields are optional.
    You can create a free account in KoboToolbox here
    You can set up ODK Central here
    QRealTimePic

    Using the Plugin:


Right-click any existing layer -> QRealTime and choose the desired option:
Make Online (to create a new form), import (to import data of an existing form), sync (to automatically update your layer)

    options


The QRealTime plugin is capable of converting a QGIS layer into a data collection form. To design a data collection form for a humanitarian crisis, we have to create an appropriate vector layer. For demonstration purposes, you can create a shapefile with the following fields:

    tables

    Resources:


If you are not sure how to create a value map in QGIS, visit this link.
For a tutorial on how to use the QRealTime Plugin, check out this video:
    Visit original content creator repository https://github.com/shivareddyiirs/QRealTime