Current external codebases, sorted alphabetically

  • External codebases have been gathered manually with RADAR from online sources. RADAR does not spider yet, and we have not yet automatically processed all systems for descriptions, so only some descriptions are displayed.

2013-advent-staging-20200204


The Catalyst Advent Calendar uses the [POD](http://perldoc.perl.org/perlpod.html) format. For each day of the month there is a corresponding pod file in the `root` directory. If you don't feel comfortable writing the article in POD, don't worry: the `examples/` directory of this repository contains a few examples from previous years.

aaaviewer-20190304


What's this? ActAffAct is the product of the master's thesis of Stefan Rank. It is a small proof-of-concept program that extends a BDI architecture with an appraisal component. It tries to demonstrate the applicability of such an architecture to the area of emergent narrative.

aafid2-latest-20180707


AAFID(tm) is a distributed monitoring and intrusion detection system that employs small stand-alone programs (Agents) to perform monitoring functions in the hosts of a network. AAFID uses a hierarchical structure to collect the information produced by each agent, by each host, and by each set of hosts, to be able to detect suspicious activity.

abcl-bin-1.5.0


Armed Bear Common Lisp is a conforming implementation of ANSI Common Lisp that runs in a Java virtual machine. It compiles Lisp code directly to Java byte code.

abcl-src-1.5.0


Armed Bear Common Lisp is a conforming implementation of ANSI Common Lisp that runs in a Java virtual machine. It compiles Lisp code directly to Java byte code.

abstractive-summarization-with-transfer-learning-20200419


This creates two tfrecord files under the data folder.

accounts-assessor-20210628


This repository hosts a program that derives, validates, and corrects the financial information that it is given. The program uses redundancy to carry out its validations and corrections. By this it is meant that knowledge of parts of a company's financial data imposes certain constraints on the company's other financial data. If the program is given a company's ledger, then it knows what the balance sheet should look like. If the program is given a company's balance sheet, then it has a rough idea of what the ledger should look like.
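The redundancy idea described above can be loosely illustrated in a few lines: knowing the ledger constrains what the balance sheet may say. The function names and data shapes below are hypothetical, not taken from the actual program.

```python
from collections import defaultdict

def derive_balances(ledger):
    """Sum ledger postings per account to get the balances the ledger implies."""
    balances = defaultdict(float)
    for account, amount in ledger:
        balances[account] += amount
    return dict(balances)

def validate(ledger, reported_balance_sheet):
    """Return accounts whose reported balance disagrees with the ledger."""
    expected = derive_balances(ledger)
    return {acc: (reported, expected.get(acc, 0.0))
            for acc, reported in reported_balance_sheet.items()
            if abs(reported - expected.get(acc, 0.0)) > 1e-9}

ledger = [("cash", 100.0), ("cash", -40.0), ("inventory", 40.0)]
reported = {"cash": 60.0, "inventory": 50.0}
print(validate(ledger, reported))  # inventory disagrees: reported 50.0, derived 40.0
```

The actual program works in the opposite direction too, sketching a plausible ledger from a given balance sheet.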

acl-express-10.1


This information also appears in http://franz.com/products/express/modern-mode.lhtml

activetcl-8.6.6.8606


ActiveState is committed to making Tcl easy to install and use on all major platforms. This release of ActiveTcl includes the most stable versions of major extensions in binary form.

activity-prediction-20200412


There is a copy of the paper in this repository in the file called `Wilson_ACL_2019.pdf`.

adversarial-planning-20180208


This will present a list of all PDDL files in AP:domains;

aetheria-20200216


Aetheria Game Engine is a system for playing text adventure (interactive fiction) games, written in Java. Game worlds are represented in XML, with Beanshell code to account for complex object behaviour. PUCK (Playable Universe Construction Kit) is a graphical IDE that can be used to build such XML files.

ai-economist-20200807


This repo contains an implementation of Foundation, a framework for flexible, modular, and composable environments that **model socio-economic behaviors and dynamics in a society with both agents and governments**.

airis-public-20210204


AIRIS is an Artificial General Intelligence (AGI) project that combines aspects of Reinforcement Learning (RL) with more traditional symbolic techniques (GOFAI).

akira-0.9.1


GENERAL DESCRIPTION: AKIRA is a run-time C++ multithreading and clusterable environment able to execute software agents, together with a web/system development platform to model their behaviour. The system core is made up of a server daemon that answers AXP (KQML-compliant language) requests and executes agent instances. A programming interface based on a C++ macro language, plus some automated scripts for creating new agents, complete the bundle. The whole system is written in C++ with extensive use of templates and design patterns, and integrates various C++ open source software implementing different aspects of the framework. Several soft computing technologies are provided: fuzzy logic, fuzzy cognitive maps, neural networks, anticipatory classifiers... Also present is a high-level, psychologically valid goal-oriented programming language based on the BDI (Belief Desire Intention) model.

ale-atari-width-20190511


This is the 0.4 release of the Arcade Learning Environment (ALE), a platform designed for AI research. ALE is based on Stella, an Atari 2600 VCS emulator. More information and ALE-related publications can be found at

alolli-20170504


ALolli is a port of Lolli to Alice extended with interprocess communication commands.

alpha-zero-general-20200411


A simplified, highly flexible, commented and (hopefully) easy to understand implementation of self-play based reinforcement learning based on the AlphaGo Zero paper (Silver et al). It is designed to be easy to adopt for any two-player turn-based adversarial game and any deep learning framework of your choice. A sample implementation has been provided for the game of Othello in PyTorch, Keras, TensorFlow and Chainer. An accompanying tutorial can be found [here](http://web.stanford.edu/~surag/posts/alphazero.html). We also have implementations for GoBang and TicTacToe.
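The core of such self-play training is a loop that plays a game against itself and labels every visited position with the final outcome. A minimal, framework-free sketch of that loop follows; the game interface here is hypothetical and far simpler than the repository's actual `Game` class, and a trivial toy game stands in for Othello.

```python
import random

def self_play_episode(initial_state, legal_moves, apply_move, winner, policy):
    """Play one game against itself; return (state, player, outcome) training examples."""
    history, state, player = [], initial_state, 1
    while winner(state) is None:
        move = policy(state, legal_moves(state))
        history.append((state, player))
        state = apply_move(state, move)
        player = -player
    w = winner(state)
    # Label every visited position with the final result from that player's view.
    return [(s, p, w * p) for s, p in history]

# Toy game: players alternately add 1; whoever moves to 4 wins.
examples = self_play_episode(
    initial_state=0,
    legal_moves=lambda s: [1],
    apply_move=lambda s, m: s + m,
    winner=lambda s: None if s < 4 else (1 if s % 2 == 1 else -1),
    policy=lambda s, moves: random.choice(moves),
)
print(examples)
```

In AlphaGo Zero-style training the `policy` would be an MCTS guided by a neural network, and the labeled examples would be used to retrain that network.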

alpprolog-20150212


ALPprolog is a Prolog implementation of an action programming language.

alpprolog-20160110


ALPprolog is a Prolog implementation of an action programming language.

alpprolog-20160401


ALPprolog is a Prolog implementation of an action programming language.

ambiverse-nlu-20190720


A list of existing pipelines can be found in `de.mpg.mpi_inf.ambiversenlu.nlu.entitylinking.uima.pipelines.PipelineType`, where you can also define new pipelines.

amr2eng-20180617


Introduction: This package generates English sentences from input Abstract Meaning Representation (AMR) graphs. To do so, the code first linearizes AMR graphs into AMR strings and then uses a phrase-based machine translation (PBMT) system (Moses) for "translating" AMR strings into English. The package contains a trained phrase table and tuned weights for PBMT, and uses Moses only for decoding.

amrparser-20180617


This software is an implementation of the AMR parsing strategy described in "Using Syntax-Based Machine Translation to Parse English into Abstract Meaning Representation", Pust, Hermjakob, Knight, Marcu, and May, appearing in Proc. EMNLP, 2015

apache-opennlp-1.5.3


This release contains a couple of new features, improvements and bug fixes. The CLI has been improved for better consistency. The tools now support extensions that can be configured from the model, including customized context generators and validators.

apes-0.2.0


7. Add a rule:
   - using facts to be asked or given (with "has"): "if animal has scales and animal has cold-blood then animal is a reptile."
   - using existing rules (with "is a"): "if animal is a reptile and animal has enormous-size and animal has hollywood-fame then animal is a godzilla."
8. Add a fact: "hamster is a pet." or "add hamster to pet."
9. Create a new group: "create group feline."
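Rules of this kind are typically applied by forward chaining: fire any rule whose conditions are all satisfied, add its conclusion, and repeat until nothing changes. A minimal sketch (not APES's actual engine) using the reptile/godzilla example:

```python
def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions are all satisfied by known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"has scales", "has cold-blood"}, "is a reptile"),
    ({"is a reptile", "has enormous-size", "has hollywood-fame"}, "is a godzilla"),
]
facts = forward_chain(
    {"has scales", "has cold-blood", "has enormous-size", "has hollywood-fame"},
    rules,
)
print("is a godzilla" in facts)  # True: reptile is derived first, then godzilla
```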

app-shadertoy-20201215


This Perl distribution contains an offline runner for shader toys.

arc-20170804


This program is a command-line based tool that can be used to analyze systems modelled using the AltaRica language.

argdown-20201211


[Argdown](https://christianvoigt.github.io/argdown) is a simple syntax for analyzing complex argumentation.

argdown-20210705


[Argdown](https://christianvoigt.github.io/argdown) is a simple syntax for analyzing complex argumentation.

arggen-candela-20210305


This repository contains code for our ACL19's paper [Argument Generation with Retrieval, Planning, and Realization](http://xinyuhua.github.io/resources/acl2019/acl2019.pdf).

argumentation-logic-visualizer-20200407


This program was created in order to explore Argumentation Logic, a concept created by Prof. Antonis Kakas, Dr. Francesca Toni and Prof. Paolo Mancarella.

arisu-20191129


arisu is a bot for Discord written for [Let's all love Lain](https://discord.gg/JZwtnzJ) in Python using discord.py!

atomic-data-20190204


This tarball contains the ATOMIC knowledge graph. Files present:
- `v4_atomic_all_agg.csv`: contains one event per line, with all annotations aggregated into one list (but not de-duplicated, so there might be repeats).
- `v4_atomic_all.csv`: keeps track of which worker did which annotations. Each line is the answers from one worker only, so there are multiple lines for the same event.
- `v4_atomic_trn.csv`, `v4_atomic_dev.csv`, `v4_atomic_tst.csv`: same as above, but split based on train/dev/test split.
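Reading the aggregated file and de-duplicating an annotation list might look like the sketch below. The sample row is synthetic and the exact column layout of the real CSVs may differ; only `event` and the list-valued annotation columns are assumed here.

```python
import csv
import io
import json

# A tiny synthetic stand-in for one row of v4_atomic_all_agg.csv.
sample = io.StringIO(
    'event,xIntent\n'
    '"PersonX goes to the store","[""to buy food"", ""to buy food""]"\n'
)

for row in csv.DictReader(sample):
    intents = json.loads(row["xIntent"])   # annotation lists are JSON-encoded
    deduped = sorted(set(intents))         # aggregated lists are not de-duplicated
    print(row["event"], deduped)
```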

attempto-controlled-english-6.0.080506


  Attempto Parsing Engine for ACE 6.0
  Copyright 2008 Attempto Group, University of Zurich
  This program comes with ABSOLUTELY NO WARRANTY.
  This is free software, and you are welcome to redistribute it under certain conditions.
  Please visit http://attempto.ifi.uzh.ch for details.
  
  Command-line arguments:
  -text "TEXT"     The input ACE text. If neither -text nor -file is present then the ACE text is read from stdin.
  -file FILENAME   The input file containing the ACE text.
  -ulex FILENAME   The user lexicon file to be loaded.
  -solo OUTPUT     Output just one output component. OUTPUT has to be one of {paraphrase,paraphrase1,paraphrase2,owlfss,owlrdf,owlxml,
                   fol,pnf,tokens,syntax,drs,drsxml,drspp,drshtml,syntaxpp}.
  -cdrs            Output the DRS as a Prolog term.
  -cdrsxml         Output the DRS in XML.
  -cdrspp          Output the DRS in pretty-printed form.
  -cdrshtml        Output the DRS in pretty-printed form in HTML.
  -cparaphrase     Output a paraphrase which is a "best-effort" combination of paraphrase1 and paraphrase2.
  -cparaphrase1    Output a paraphrase which uses full sentences instead of relative clauses.
  -cparaphrase2    Output a paraphrase which uses relative clauses instead of full sentences.
  -ctokens         Output tokens as a Prolog list of lists.
  -csentences      Output sentences as a Prolog list.
  -csyntax         Output syntax trees as a Prolog list.
  -csyntaxpp       Output syntax trees in pretty-printed form.
  -cowlfss         Output OWL 2/SWRL in the Functional-Style Syntax representation.
  -cowlrdf         Output OWL 2/SWRL in the RDF/XML representation.
  -cowlxml         Output OWL 2 in the XML representation (but in case of SWRL use RDF/XML).
  -cfol            Output standard first-order logic representations (default form and prenex normal form) of the DRS as a Prolog term.
  -uri URI         URI for the OWL outputs.
  -guess           Guess the word-class of unknown words.
  -help            Shows this help page.

automates-20200626


This repository holds the source code for the AutoMATES documentation and several component pipelines.

awesome-emacs-20210514


- [[https://www.emacswiki.org/emacs/UndoTree][undo-tree]] - Visualize the whole undo history in buffer as a tree, and you can access anywhere in it.
- [[https://github.com/nschum/highlight-symbol.el][highlight-symbol]] - Auto/manually highlight the same symbols in code, navigate in them, or replace string.
- [[https://github.com/Fanael/rainbow-delimiters][rainbow-delimiters]] - Highlights parentheses, brackets, and braces according to their depth.
- [[https://github.com/emacsmirror/rainbow-mode][rainbow-mode]] - Colorize color names in buffers.
- [[https://github.com/benma/visual-regexp.el][visual-regexp]] - Replace via RegExp, with real-time visual feedback directly in the buffer.
- [[https://github.com/benma/visual-regexp-steroids.el/][visual-regexp-steroids]] - The same as visual-regexp, but use modern regular expressions instead of Emacs-style.
- [[https://www.emacswiki.org/emacs/WhiteSpace][whitespace]] - =[built-in]= Visualize blanks (tab/space/newline).
- [[https://github.com/coldnew/linum-relative][linum-relative]] - display relative line number in the left margin in emacs.
- [[https://emacsredux.com/blog/2014/08/25/a-peek-at-emacs-24-dot-4-prettify-symbols-mode/][prettify-symbol-mode]] - =[built-in]= displaying characters as fancy symbols (e.g. =lambda= -> =λ=).
- [[https://github.com/jorgenschaefer/typoel][typo.el]] - Emacs extension for typographical editing.
- [[https://github.com/fgeller/highlight-thing.el][highlight-thing]] - Light-weight minor mode to highlight thing under point using built-ins.
- [[https://github.com/larstvei/Focus][focus]] - Dim the font color of text in surrounding paragraphs.
- [[https://github.com/hlissner/emacs-solaire-mode][Solaire mode]] - Visually distinguish file-visiting windows from other types of windows (like popups or sidebars) by giving them a slightly different background.
- [[https://github.com/Malabarba/beacon][beacon]] - Never lose your cursor again.
- [[https://github.com/gonewest818/dimmer.el][dimmer.el]] - Interactively highlight which buffer is active by dimming the others.
- [[https://github.com/k-talo/volatile-highlights.el][volatile-highlights.el]] - Minor mode for visual feedback on some operations in Emacs.
- [[https://github.com/ankurdave/color-identifiers-mode][color-identifiers-mode]] - Color Identifiers is a minor mode for Emacs that highlights each source code identifier uniquely based on its name.
- [[https://github.com/emacsorphanage/yascroll][yascroll-el]] - Yet Another Scroll Bar Mode.
- [[https://github.com/jcs-elpa/goto-line-preview][goto-line-preview]] - Preview line when executing `goto-line` command.
- [[https://github.com/tsdh/highlight-parentheses.el][highlight-parentheses.el]] - highlight surrounding parentheses.
- [[https://github.com/sulami/literate-calc-mode.el][literate-calc-mode]] - display live =calc= results inline
- [[https://gitlab.com/matsievskiysv/math-preview][math-preview]] - Preview TeX equations inline

awesome-knowledge-graph-20200808


* [AllegroGraph](https://franz.com/agraph/allegrograph/) - high-performance, persistent graph database that scales to billions of quads
* [Apache Jena](https://jena.apache.org/) - open source Java framework for building Semantic Web and Linked Data applications
* [Eclipse RDF4J](http://rdf4j.org/) - (formerly known as Sesame) is an open source Java framework for processing RDF data. This includes parsing, storing, inferencing and querying of/over such data. It offers an easy-to-use API that can be connected to all leading RDF storage solutions. It allows you to connect with SPARQL endpoints and create applications that leverage the power of linked data and Semantic Web.
* [GraphDB](http://graphdb.ontotext.com/graphdb/) - enterprise ready Semantic Graph Database, compliant with W3C Standards
* [Virtuoso](https://virtuoso.openlinksw.com/) - a "Data Junction Box" that drives enterprise and individual agility by deriving a Semantic Web of Linked Data from existing data silos
* [Hoply](https://github.com/amirouche/hoply/) - explore bigger than RAM relational data in the comfort of Python.

awesomemrc-20200619


This repo is our research summary and playground for MRC. More features are coming.

baby-2.3


BABYLON is a modular, configurable, hybrid environment for developing expert systems. It provides the following knowledge representation formalisms: frames, rules, logic (Prolog) and constraints. BABYLON is implemented and embedded in Common Lisp.

badger-source-20111217


If you do not have Wordnet 1.6, you should comment out the definition of USE_WORDNET in the toplevel Makefile. There is a loop index bug in the Wordnet 1.6 distribution. Our patch is in the wordnet subdirectory. We recommend that you use it if you build Wordnet 1.6, but, no warranty is expressed or implied on our patch to Wordnet. This bug is documented in ftp://ftp.cogsci.princeton.edu/pub/wordnet/README.bugs.

baleen-20190714


Baleen is an extensible text processing capability that allows entity-related information to be extracted from unstructured and semi-structured data sources. It makes available in a structured format things of interest otherwise stored in formats such as text documents - references to people, organisations, unique identifiers, location information.

baseline4vtkel-20210523


The visual and textual mentions of a *man* shown in the red text and in the red box refer to the same entity, and they should be linked together. The other visual mentions, i.e. *racket*, *ball* and *logo*, should be linked to different entities. These three entities are not known (i.e., they are not part of the initial knowledgebase **K**), and therefore three new entities of type *racket, ball* and *logo* should be added to the knowledge base, i.e., the **A-box** of **K** should be extended with the assertions *Racket(enew1)*, *Ball(enew2)* and *Logo(enew3)*. The visual and textual mentions of *R.Federer* are also referring to the same entity. However, this time the entity is known (i.e., **YAGO** contains an entity for *R.Federer*) and therefore the two mentions should be linked to the same entity. For the other textual mentions, i.e., *Lukas Lacko*, *Wimbledon*, *London*, *2018*, we already have instances in the **knowledgebase**, so we have to link them to these entities. (For details read our papers: coming soon!)

bash-master-20190930


This is GNU Bash, version 5.0. Bash is the GNU Project's Bourne Again SHell, a complete implementation of the POSIX shell spec, but also with interactive command line editing, job control on architectures that support it, csh-like features such as history substitution and brace expansion, and a slew of other features. For more information on the features of Bash that are new to this type of shell, see the file `doc/bashref.texi'. There is also a large Unix-style man page. The man page is the definitive description of the shell's features.

bashlex-20170303


bashlex is a Python port of the parser used internally by GNU bash.

bayou-20180721


Bayou is a data-driven program synthesis system for Java API idioms that uses the novel technique of Neural Sketch Learning.

bddem-20210120


bddem is a library for manipulating Binary Decision Diagrams in SWI-Prolog (http://www.swi-prolog.org/).

bedsit-20200409


BedSit is a **Bed**rock upon which to build your **Sit**uation driven application. It provides objects and categories that work with either [SitCalc](https://github.com/PaulBrownMagic/SitCalc) or [STRIPState](https://github.com/PaulBrownMagic/STRIPState) allowing you to get on with making your application without having to worry about such details.

behaviac-20200202


- behaviac is a framework for game AI development, and it can also be used as a rapid game prototype design tool
- behaviac supports behavior trees, finite state machines and hierarchical task networks
- Behaviors can be designed and debugged in the designer, then exported and executed by the game
- The designer runs only on Windows; the runtime library is implemented in C++ and C#, and supports all major platforms (Windows, Linux, Android, iOS, Unity, etc.)
- The C++ version is suitable for both the client and server side
- [Website](http://www.behaviac.com/) for documents, tutorials, API, FAQ, source code, downloads, etc.
- BehaviacSetup*.exe is the setup package with the binary editor and demo executable. You can download/clone the source code from [github behaviac](https://github.com/Tencent/behaviac)

behavior3js-20200202


This library includes the following core structures...

behaviortree.cpp-20200531


This __C++ 14__ library provides a framework to create BehaviorTrees. It was designed to be flexible, easy to use, reactive and fast.

benchmark-generators-20170801


This folder contains the scripts for generating PR tasks from planning domains. For some of them, you'll find programs that generate instances - block-words, bui-campus, confusion-grid, easy-grid-navigation, kitchen. For others, such as logistics or intrusion-detection, we made the instances by hand.

bfg-repo-cleaner-20210315


The BFG is a simpler, faster ([10 - 720x](https://docs.google.com/spreadsheet/ccc?key=0AsR1d5Zpes8HdER3VGU1a3dOcmVHMmtzT2dsS2xNenc) faster) alternative to `git-filter-branch` for cleansing bad data out of your Git repository:

bfws-public-20190511


This project is joint work by Nir Lipovetzky and Hector Geffner.

bios-1.1.0


Bios is a suite of syntactico-semantic analyzers that includes the most common tools needed for the shallow analysis of English text. Currently the following tools are included:
(*) Smart tokenizer that recognizes abbreviations, SGML tags etc.
(*) Part-of-speech (POS) tagger. The POS tagger is implemented as a wrapper around the TNT tagger by Thorsten Brants.
(*) Syntactic chunking using the labels promoted by the CoNLL chunking evaluations (http://www.cnts.ua.ac.be/conll2000/chunking).
(*) Named-Entity Recognition and Classification (NERC) for the CoNLL entity types plus an additional 11 numerical entity types.

bison-pp-1.21.8


This directory contains the Bison parser generator.

bitlbee-discord-20200907


This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version.

bobtailbot-20210319


This is a simple little chatbot written in Clojure, mostly to have fun and learn about Clojure and also chatbots, AI, you name it. It can either talk through the command-line or connect to an irc server. For the moment, with its default brain, it only accepts simple facts described in SVO sentences with proper names, and simple general rules and queries, as depicted in the example interaction below.

bobtailbot-20210429


This is a simple little chatbot written in Clojure, mostly to have fun and learn about Clojure and also chatbots, AI, you name it. It can either talk through the command-line or connect to an irc server. For the moment, with its default brain, it only accepts simple facts described in SVO sentences with proper names, and simple general rules and queries, as depicted in the example interaction below.

bolinas-20210116


A toolkit for Synchronous Hyperedge Replacement Grammar.

bootcat-0.1.2


Despite certain obvious drawbacks (e.g. lack of control, sampling, documentation etc.), there is no doubt that the WWW is a mine of language data of unprecedented richness and ease of access.

bothack-20201013


A ttyrec of one Medusa run is in the repo: https://github.com/krajj7/BotHack/blob/master/ttyrec/wizmode-exploration-dlvl1-28medusa.ttyrec?raw=true

bow-20020213


Rainbow is a standalone program that does document classification. Here are some examples:

bow-20200725


Rainbow is a C program that performs document classification using one of several different methods, including naive Bayes, TFIDF/Rocchio, K-nearest neighbor, Maximum Entropy, Support Vector Machines, Fuhr's Probabilistic Indexing, and a simple-minded form of shrinkage with naive Bayes.
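The simplest of these methods, multinomial naive Bayes with Laplace smoothing, can be sketched in a few lines. This is a generic illustration, not Rainbow's code; the toy documents and labels are made up.

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (label, tokens). Returns per-class word counts, totals, priors."""
    counts, totals, priors = defaultdict(Counter), Counter(), Counter()
    for label, tokens in docs:
        counts[label].update(tokens)
        totals[label] += len(tokens)
        priors[label] += 1
    return counts, totals, priors

def classify(tokens, counts, totals, priors, vocab_size):
    def score(label):
        s = math.log(priors[label] / sum(priors.values()))
        for t in tokens:
            # Laplace (add-one) smoothing over the vocabulary.
            s += math.log((counts[label][t] + 1) / (totals[label] + vocab_size))
        return s
    return max(counts, key=score)

docs = [("sports", "ball goal team".split()),
        ("tech", "code compiler bug".split())]
counts, totals, priors = train(docs)
vocab = {t for _, toks in docs for t in toks}
print(classify("ball team".split(), counts, totals, priors, len(vocab)))  # sports
```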

bt-builder-20200527


This is prototype code for building a behaviour tree from examples of expert behaviour. This code is explained in the accompanying paper [Building Behavior Trees from Observations in Real-Time Strategy Games](https://www.cs.auckland.ac.nz/research/gameai/publications.php).

building-search-applications-20110808


This package contains the source code for the examples shown in the book "Building Search Applications: Lucene, Lingpipe, and Gate". REQUIREMENTS:

bymc-0.9.5


ByMC is a tool for model checking fault-tolerant distributed algorithms. More details to be found at: http://forsyte.at/software/bymc/

caevo-20180827


A TempEval-style system for extracting temporal entities (events and time expressions), and labeling the temporal relations between the temporal entities. More details can be found here:

caevo-20200418


This software is released under the Apache License, Version 2.0. See LICENSE in the project root directory for all details. Portions of this software were originally developed at the United States Naval Academy as NavyTime, and then expanded into CAEVO at the 2013 SCALE Workshop at Johns Hopkins University. Software from Steven Bethard's ClearTK system is also included as separate sieves.

canonicalization-data-v-1.0


This data contains lists of conference and journal names culled from the Web by Rexa. Given a set of strings referring to the same conference or journal, the task is to determine which string should be the canonical one. The canonical string should be free of spelling, segmentation, and OCR errors, and should in some sense be prototypical of the entity.
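One crude baseline for this task (a heuristic of ours, not Rexa's method) is to pick the variant most similar on average to all the others, a medoid, on the intuition that OCR errors and truncations are each rare while the prototypical form recurs:

```python
from difflib import SequenceMatcher

def canonical(variants):
    """Pick the variant most similar on average to all others (a medoid)."""
    def avg_sim(v):
        return sum(SequenceMatcher(None, v, w).ratio() for w in variants) / len(variants)
    return max(variants, key=avg_sim)

names = ["Intl. Conf. on Machine Learning",
         "International Conference on Machine Learning",
         "International Conference on Machine Learning",
         "lnternational Conferenee on Machine Learning"]  # OCR errors
print(canonical(names))  # International Conference on Machine Learning
```

A real system would additionally model the error processes (spelling, segmentation, OCR) rather than rely on frequency alone.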

cape-0.7


This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

cape-20120222


This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

caqe-2


CAQE is a certifying solver for quantified Boolean formulas (QBF) in prenex conjunctive normal form (PCNF). It is based on a recursive counterexample guided abstraction refinement (CEGAR) algorithm.
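CAQE's CEGAR loop is considerably more sophisticated, but the problem it decides, the truth of a prenex QBF, can be sketched with a naive expansion-based evaluator (a toy unrelated to the actual solver's code, and exponential in the number of variables):

```python
def eval_qbf(prefix, matrix, assignment=None):
    """Naive quantifier expansion. prefix: [('forall'|'exists', var), ...];
    matrix: a function from an assignment dict to bool."""
    assignment = assignment or {}
    if not prefix:
        return matrix(assignment)
    quant, var = prefix[0]
    branches = (eval_qbf(prefix[1:], matrix, {**assignment, var: val})
                for val in (False, True))
    return any(branches) if quant == 'exists' else all(branches)

# forall x exists y. (x XOR y) is true: choose y = not x.
print(eval_qbf([('forall', 'x'), ('exists', 'y')],
               lambda a: a['x'] != a['y']))  # True
```

CEGAR-based solvers like CAQE avoid this full expansion by refining a propositional abstraction with counterexamples.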

caqe-qbfeval-2017


This is a binary release of CAQE. It contains statically built binaries for both Linux and Mac. This release is not able to certify; certification will be available again later. Check https://www.react.uni-saarland.de/tools/caqe/ for more information.

car-parking-planner-20210526


This assignment considers the Situation Calculus and Planning. It focuses on: - Formalizing a planning problem, using Situation Calculus to represent the world. - Implementing the model and verifying its correctness using a planner based on the Golog syntax. - Extending the model as well as its implementation in order to deal with additional aspects of the environment.

carneades-4-master-20160504


This source code is subject to the terms of the Mozilla Public License, version 2.0 (MPL-2.0). If a copy of the MPL was not distributed with this software, it is also available online at . For further information about the MPL see .

cat-20210313


This is the repository for the ACL 2020 paper [Embarrassingly Simple Unsupervised Aspect Extraction](https://www.aclweb.org/anthology/2020.acl-main.290/). In this work, we extract aspects from restaurant reviews with attention that uses RBF kernels.

catmud-20180216


CatMUD is a MUD server (and MUD game) written in Prolog. It is not designed to be robust, nor widely used, so it's probably not going to stand up to a regular MUD environment.

ccalc-2.0


A single installation of CCalc may be shared by multiple operating systems on a network by including for each OS a subdirectory of the 'solvers' directory containing solvers compiled for that OS. CCalc will call 'uname' to determine the OS in use and use the appropriate set of solvers.

cel-20201029


CEL is a lightweight Description Logic reasoner for large-scale biomedical ontologies. The CEL Plug-in uses the [OWL API](https://owlcs.github.io/owlapi/) and lets CEL be used as a plug-in for [Protege](https://protege.stanford.edu/).

chalk-20200616


A [Prolog-ish][Prolog] interpreter written in Rust, intended perhaps for use in the compiler, but also for experimentation.

char-rnn-master-20160416


This code implements a **multi-layer Recurrent Neural Network** (RNN, LSTM, and GRU) for training/sampling from character-level language models. In other words, the model takes one text file as input and trains a Recurrent Neural Network that learns to predict the next character in a sequence. The RNN can then be used to generate text character by character that will look like the original training data. The context of this code base is described in detail in my [blog post](http://karpathy.github.io/2015/05/21/rnn-effectiveness/).
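The character-level language modeling task itself can be illustrated without any neural network: a bigram count model predicts the next character from the current one. This is a deliberately crude stand-in for the RNN, just to show the sample-one-character-at-a-time loop.

```python
import random
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which character follows which (a crude stand-in for an RNN)."""
    model = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        model[a][b] += 1
    return model

def generate(model, start, length, rng):
    """Sample characters one at a time, weighted by the observed counts."""
    out = start
    for _ in range(length):
        nxt = model.get(out[-1])
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        out += rng.choices(chars, weights=weights)[0]
    return out

model = train_bigrams("hello hello hello")
print(generate(model, "h", 4, random.Random(0)))
```

An RNN replaces the single-character context with a learned hidden state, which is what lets it capture longer-range structure.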

chatscript-20200627


became available, and is a better name anyway.

cicero-20200419


Cicero is an Open Source implementation of the [Accord Project Template Specification][apspec]. It defines the structure of natural language templates, bound to a data model, that can be executed using request/response JSON messages.

citeseerx-20200730


This is the source code for the [CiteSeerX academic digital library.](http://citeseerx.ist.psu.edu)

cl-97


This package requires Hdrug 3.099 / SICStus 3 #3. It's ready to be plugged in, just change the paths in the scripts sh and shg. Hdrug is available at http://www.let.rug.nl/~vannoord/Hdrug/. Please contact me in case of problems: vannoord@let.rug.nl.

cl-prolog2


This is a realization of Marc Kuo's ["modelling approach to OR (operations research)"](https://kuomarc.wordpress.com/2012/03/05/the-uncommon-lisp-approach-to-operations-research/) for the Prolog language.

clg-plus-20181025


Where hidden.pddl is a file with the hidden observations. Check the example that should be attached with this distribution.

clg-run-20170927


NB: -q is an optional parameter for on-line mode. When performing a set of tests it is good to change it from different values between 0 and 100. It indicates the probability to take the first option in a (boolean) sensing action.

clingo-4.5.4


Gringo is a grounder that, given an input program with first-order variables, computes an equivalent ground (variable-free) program. Its output can be processed further with answer set solvers like clasp, cmodels, or smodels.

clocc-02-05-07


It contains Lisp code for various applications which is
* Common Lisp, i.e. runs in ANSI CL implementations,
* Free Software, according to the Debian Free Software Guidelines (e.g. licensed under GPL, LGPL, MIT or BSD licenses, or public domain),
* Portable, i.e. should be portable among CL implementations with low effort, and does not require modifications to the CL implementation itself,
* Self-contained, i.e. does not require packages not in this repository,
* Ready to use, i.e. runs out of the box in the Free CL implementations.

clprover-1


CLProver is a resolution-based theorem prover based on the method described in the paper "A Resolution-Based Calculus for Coalition Logic" (Nalon, C., Zhang, L., Dixon, C., and Hustadt, U., Journal of Logic and Computation, 2014). It was implemented in SWI-Prolog, and the binary, compiled for Linux x86_64, is available at http://www.cic.unb.br/docentes/nalon/software/clprover-v1.tar.gz.

clproverpp-1.0.3


a modality [list] where list is a (possibly empty) list of agents (positive

clyc-20200101


This native Common Lisp version will be refactored, documented, and modernized yielding a much smaller and easier to modify system. It should also run inferences faster than the layered and semi-interpreted Java version, which emulates a Lisp-like environment (SubL/CycL).

cms-20020304


http://www-ksl-svc.stanford.edu:5915/doc/release/index.html

coauthor-20200801


**Coauthor** is a tool for group collaboration, discussion, keeping track of notes/results of meetings, etc., in particular to enable **[supercollaboration](http://erikdemaine.org/supercollaboration/)**. Coauthor's primary goal is to ease multiauthor collaboration on unsolved problems in theoretical computer science, so e.g. you'll find LaTeX math support, but it has proved useful in other fields too.

codesh-0.9.0


CODESH - COllaborative DEvelopment SHell is an intelligent shell, which automatically logs a user's command line (shell) session: commands, scripts executed, output produced, changes to environment variables, alias creation and other information needed to recreate the session later. This session is uniquely tagged and stored in local or distributed backend repositories and can be extracted and reproduced at any time by the user who created the session or by collaborators located anywhere in the world.

colin2-20200119


This package contains COLIN, a planner for domains with continuous numeric and/or duration dependent effects. For more details, see the papers:

colin2-trh-20200119


This package contains COLIN-TRH, a planner for domains with time windows. For more details, see the papers:

colis-language-20191117


The oracle file is a Yaml-serialised file of the following format:

collins-parser-20080216


This code is the statistical natural language parser described in

collins-parser-20080503


This code is the statistical natural language parser described in

colore-20210105


Many tasks require correct and meaningful communication and integration among intelligent agents and information resources. A major barrier to such interoperability is semantic heterogeneity: different applications, databases, and agents may ascribe disparate meanings to the same terms or use distinct terms to convey the same meaning. Even when software applications use the same terminology, they often associate different semantics with the terms. This clash over the meaning of the terms prevents the seamless exchange of information among the applications. The development and application of ontologies play a central role in achieving semantic integration. An ontology is a computer-interpretable specification that is used by an agent, application, or other information resource to declare what terms it uses, and what the terms mean. Ontologies support the semantic integration of software systems through a shared understanding of the terminology in their respective ontologies.

comsem-20200729


The repository contains scripts and data used in the [Computational Semantics](https://www.rug.nl/ocasys/rug/vak/show?code=LIX021M05) course at the University of Groningen.

conceptgraph-20200907


Answer graph criteria to check for:
1. w is a well-formed CG.
2. w is true if the database is correct.
3. The entire query graph q is covered by a join from w.
4. For every concept in q that has a value, the corresponding concept in w has the same value.
5. For every concept in q that had a question mark, the corresponding concept in w has a value.
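Criteria 4 and 5 can be sketched as a small check (illustrative Python, not the package's code): each graph is reduced to a mapping from concept to value, with '?' marking a questioned concept in the query graph q.

```python
# Illustrative sketch of criteria 4 and 5 above (not the package's code):
# q and w are mappings from concept name to value; '?' marks a question.
def check_values(q, w):
    for concept, value in q.items():
        answer = w.get(concept)
        if value == '?':
            if answer in (None, '?'):      # criterion 5: '?' must get a value
                return False
        elif answer != value:              # criterion 4: values must agree
            return False
    return True

assert check_values({'City': '?', 'Person': 'John'},
                    {'City': 'Boston', 'Person': 'John'})
```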

concerto-20200419


Concerto is a lightweight 100% JavaScript schema language and runtime. It works in both a Node.js process and in your browser. The browserified version of Concerto is ±280KB. We are working on making it even smaller.

conformant-aij-20160811


CONTENT: This package contains the executable version of DNF and the benchmarks used for the paper submitted to AIJ in December 2012.

contingent-plan-executor-20210130


This repository contains the logic of the dialog planner. It is deployed as a Bluemix Python application with a NoSQL database that stores the solutions generated by the planner.

copernic-20200229


copernic is a web application implemented (mostly) in the Python programming language. It is backed by a versioned triple store: time-traveling queries are possible at any point in history, while the latest version remains efficient to query and modify. The versioned triple store is implemented using a novel approach dubbed the generic tuple store. copernic's goal is to demonstrate that versioned databases allow workflows that ease cooperation.

coq-20210417


Coq is a formal proof management system. It provides a formal language to write mathematical definitions, executable algorithms and theorems together with an environment for semi-interactive development of machine-checked proofs.

cortex-0.1


This contains the CoRTex (Co-Reference at Texas) code, which was created by Pascal Denis as part of his PhD dissertation and implements coreference resolution using ranking and global ILP (integer linear programming) based constraints.

cotd-20190617


City of the Damned is a simple fast-paced coffee-break roguelike inspired by a 7DRL entry "City of the Condemned" by Tapio (http://www.roguebasin.com/index.php?title=City_of_the_Condemned).

cotd-linux-x64-v-1.3.4


This is a simple fast-paced coffee-break roguelike inspired by a 7DRL entry "City of the Condemned" by Tapio (http://www.roguebasin.com/index.php?title=City_of_the_Condemned).

cougaar-10.2.1


Cougaar is a Java-based architecture for the construction of large-scale distributed agent-based applications. It is a product of two consecutive research programs. The second program is developing information technologies to enhance the survivability of these distributed agent-based systems operating in extremely chaotic environments.

cpm-20180519


Description: This program is an ncurses-based console tool to manage passwords and store them public-key encrypted in a file - even for more than one person. The encryption is handled via GnuPG, so the program's data can be accessed via gpg as well, in case you want to have a look inside. The data is stored as zlib-compressed XML, so it's even possible to reuse the data for some other purpose.

cpm-20201123


Description: This program is an ncurses-based console tool to manage passwords and store them public-key encrypted in a file - even for more than one person. The encryption is handled via GnuPG, so the program's data can be accessed via gpg as well, in case you want to have a look inside. The data is stored as zlib-compressed XML, so it's even possible to reuse the data for some other purpose.

cpp-2.0


1. Basic desires: basic_desire(+Name,+Formula). where Name is the name of the basic desire and Formula is the temporal formula for the basic desire. Formula can be one of the following forms:
   + L
   + occ(A)
   + goal(F)
   + and(Fs)
   + or(Fs)
   + neg(F)
   + eventually(F)
   + next(F)
   + always(F)
   + until(F1,F2)

   where L is a literal, A is an action, F (possibly with indices) is a basic desire formula, and Fs is a list of basic desire formulae.
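For illustration only, the grammar of basic desire formulae can be mirrored as nested Python tuples with a well-formedness check (this is a sketch, not part of the planner):

```python
# Illustrative sketch only (not part of the planner): basic desire formulae
# as nested Python tuples. Literals and actions are plain strings; and/or
# take a list of formulae, until takes two formulae.
UNARY = ('occ', 'goal', 'neg', 'eventually', 'next', 'always')

def well_formed(f):
    if isinstance(f, str):                 # a literal L (or an action inside occ)
        return True
    op, *args = f
    if op in UNARY:
        return len(args) == 1 and well_formed(args[0])
    if op in ('and', 'or'):
        return len(args) == 1 and all(well_formed(g) for g in args[0])
    if op == 'until':
        return len(args) == 2 and all(well_formed(g) for g in args)
    return False

# until(p, always(neg(q)))
assert well_formed(('until', 'p', ('always', ('neg', 'q'))))
```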

crfae-dep-parser-20200725


This repository contains the code to reproduce the experiment result of the paper [CRF autoencoder for unsupervised dependency parsing](http://sist.shanghaitech.edu.cn/faculty/tukw/emnlp17CJT.pdf) on WSJ data set and PASCAL dataset.

cryptogram-20201128


This is a small program to help you solve cryptograms.

csk-20210506


QUASIMODO is a system to extract commonsense knowledge from query logs and QA forums.

ctcdecoder-20181121


The RNN output matrix of the **Mini example** testcase contains 2 time-steps (t0 and t1) and 3 labels (a, b and - representing the CTC-blank). Best path decoding (see left figure) takes the most probable label per time-step which gives the path "--" and therefore the recognized text "" with probability 0.6\*0.6=0.36. Beam search, prefix search and token passing calculate the probability of labelings. For the labeling "a" these algorithms sum over the paths "-a", "a-" and "aa" (see right figure) with probability 0.6\*0.4+0.4\*0.6+0.4*0.4=0.64. The only path which gives "" still has probability 0.36, therefore "a" is the result returned by beam search, prefix search and token passing.
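The arithmetic above can be checked with a short script; the per-step probabilities (0.6 for the blank, 0.4 for "a", 0.0 for "b") are the ones from the Mini example.

```python
import itertools
from collections import defaultdict

# Per-time-step label probabilities from the Mini example: labels 'a', 'b',
# and '-' (the CTC blank), over 2 time-steps.
probs = [{'a': 0.4, 'b': 0.0, '-': 0.6},
         {'a': 0.4, 'b': 0.0, '-': 0.6}]

def collapse(path):
    """CTC collapsing: merge repeated labels, then drop blanks."""
    out = []
    for label in path:
        if not (out and out[-1] == label):
            out.append(label)
    return ''.join(l for l in out if l != '-')

# Best path decoding: most probable label per time-step -> path '--' -> text ''.
best_path = ''.join(max(step, key=step.get) for step in probs)

# Sum path probabilities per labeling (what beam search etc. approximate).
labelings = defaultdict(float)
for path in itertools.product('ab-', repeat=2):
    p = probs[0][path[0]] * probs[1][path[1]]
    labelings[collapse(path)] += p

print(repr(collapse(best_path)))  # '' with probability 0.36
print(round(labelings['a'], 2))   # 0.64, summed over '-a', 'a-' and 'aa'
```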

cudd-2.5.0


This directory contains a set of packages that allow you to build a toy application based on the CUDD package.

cvc4-20210314


CVC4 is a tool for determining the satisfiability of a first order formula modulo a first order theory (or a combination of such theories). It is the fourth in the Cooperating Validity Checker family of tools (CVC, CVC Lite, CVC3) but does not directly incorporate code from any previous version.

cxboard-0.14


Release issues: 1. Opening Book: the Tsito book must be disabled and blanked out in order to use the CXBoard book, otherwise pieces disappear or illegal moves occur. (CXBoard will create ccbook.dat in your home dir when you add a move to the book. We plan to compile ccbook and release it at a future date. Look for it on the CXBoard home page.)

cyc-api-bundle-1.0.0


This package contains a suite of Java APIs for updating and querying the Cyc Knowledge Base. In the 1.0.0-Preview release, we offer the following APIs:

cyc-jrtl-with-commonlisp-20190106


Most worked-on feature set:
* Full compatibility with the LarKC Platform (http://larkc.eu)
* This library is a drop-in replacement for the subl.jar of OpenCyc

cyc-jrtl-with-commonlisp-20190112


Most worked-on feature set:
* Full compatibility with the LarKC Platform (http://larkc.eu)
* This library is a drop-in replacement for the subl.jar of OpenCyc

cyc-jrtl-with-commonlisp-20190124


Most worked-on feature set:
* Full compatibility with the LarKC Platform (http://larkc.eu)
* This library is a drop-in replacement for the subl.jar of OpenCyc

cyc-jrtl-with-commonlisp-20190425


Most worked-on feature set:
* Full compatibility with the LarKC Platform (http://larkc.eu)
* This library is a drop-in replacement for the subl.jar of OpenCyc

cyc-jrtl-with-commonlisp-20190506


Most worked-on feature set:
* Full compatibility with the LarKC Platform (http://larkc.eu)
* This library is a drop-in replacement for the subl.jar of OpenCyc

cyc-jrtl-with-commonlisp-20190614


OVERVIEW LarKC is a platform for massive distributed incomplete reasoning that will remove the scalability barriers of currently existing reasoning systems for the Semantic Web.

cyc-jrtl-with-commonlisp-20200118


Most worked-on feature set:
* Full compatibility with the LarKC Platform (http://larkc.eu)
* This library is a drop-in replacement for the subl.jar of OpenCyc

cycic-transformers-20200603


This repository demonstrates how to train and test on the CycIC dataset using the popular transformers library from huggingface. The original example scripts can be found at [transformers/examples/multiple-choice/](https://github.com/huggingface/transformers/tree/master/examples/multiple-choice). Here, they have been extended with an additional data processing class for the CycIC task.

dali-14.08a


DALI is a meta interpreter built on top of Sicstus Prolog (R) (at the moment).

dali-20190517


DALI is a meta interpreter built on top of Sicstus Prolog (R) (at the moment).

dali-4


Directory bin\ contains support files including the SICStus development system (spwin.exe and sicstus.exe) and various tools.

darknet-20180508


Darknet is an open source neural network framework written in C and CUDA. It is fast, easy to install, and supports CPU and GPU computation.

datalog-2.3


This package contains a lightweight deductive database system. Queries and database updates are expressed using Datalog--a declarative logic language in which each formula is a function-free Horn clause, and every variable in the goal of a clause must appear in the body of the clause. The use of Datalog syntax and an implementation based on tabling intermediate results ensure that all queries terminate.
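To illustrate why tabling guarantees termination, here is a hedged Python sketch (not this package's API) of evaluating the Datalog rules `path(X, Y) :- edge(X, Y).` and `path(X, Z) :- path(X, Y), edge(Y, Z).`: derived tuples are memoized in a set, so the fixpoint loop stops once nothing new is derived.

```python
# Hedged sketch (not this package's API): tabled evaluation of transitive
# closure. The edge facts below are invented for the example.
edges = {('a', 'b'), ('b', 'c'), ('c', 'd')}

def transitive_closure(facts):
    derived = set(facts)            # table of derived path/2 tuples
    while True:
        new = {(x, z)
               for (x, y) in derived
               for (y2, z) in facts
               if y == y2}
        if new <= derived:          # fixpoint: no new tuples, so we terminate
            return derived
        derived |= new

paths = transitive_closure(edges)
assert ('a', 'd') in paths
```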

datalog-2.5


This package contains a lightweight deductive database system. Queries and database updates are expressed using Datalog--a declarative logic language in which each formula is a function-free Horn clause, and every variable in the goal of a clause must appear in the body of the clause. The use of Datalog syntax and an implementation based on tabling intermediate results ensure that all queries terminate.

daydreamer-20171020


DAYDREAMER is a trademark of Erik T. Mueller.

defminer-20200622


This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

defteval-20201015


This work was developed as the final project for the AI Course Fall 2019/2020 offering at AlexU Faculty of Engineering. It is our official contribution for [Deft Eval Competition Subtask 1](https://competitions.codalab.org/competitions/22759), running on its official [dataset](https://github.com/adobe-research/deft_corpus). It was an amazing experience and a great opportunity to learn and explore the NLP world! We would like to thank the organizers of the competition for their great work and for their willingness to help through the forum.

deid-1.1


This software de-identifies protected health information (PHI) from

deidentify-20170611


> *deidentify* is a tool to remove personal identifiers from free-text medical record data. Detected identifiers are replaced by randomly generated substitutes. Consistency of the data is preserved as the same name, phone number or location will always be mapped to the same replacement.
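The consistency property described above can be sketched in a few lines (assumed names, not the deidentify tool's own code): each detected identifier is always mapped to the same randomly generated substitute.

```python
import random

# Minimal sketch (invented names, not deidentify's implementation) of
# consistent replacement: generate a substitute only the first time an
# identifier is seen, then reuse it on every later occurrence.
class ConsistentReplacer:
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.mapping = {}

    def replace(self, identifier):
        if identifier not in self.mapping:
            self.mapping[identifier] = 'PERSON_%04d' % self.rng.randrange(10000)
        return self.mapping[identifier]

r = ConsistentReplacer()
assert r.replace('John Smith') == r.replace('John Smith')  # same name, same substitute
```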

demiurge-1.1.0


This file contains important information about this distribution of the tool Demiurge.

democratix-0.2


2) Save the encoding in folder "enc/" with the extension ".lp". Here, specifies the sub-folder that contains the encodings that are compatible with the ASP solver your encoding was designed for.

dendrite-20200225


This was inspired by the OpenCyc bot that @aindalis has set up in #logicmoo on freenode. There is an interesting synergy with the Zulip group chat UX that I think could play well with a knowledge-base-REPL type gizmo.

depdep-20200514


Depdep is a merciless sentinel which will seek sensitive files containing critical info leaking through your network. Basically, it is a fast and practical sensitive data search tool maintaining personal & commercial data privacy for companies and institutions. It can very well be used by auditors making sure that their network doesn't leak any unauthorized non-compliant data through Windows & Unix/Linux shares. The usage is easy and configurable; however, certain technical knowledge is necessary, such as using a Linux console and the ability to write and understand basic regular expressions, though the configuration file comes with several sensitive information patterns.

derplanner-20200209


### Fact Database

The fact database is a collection of typed tuples, representing domain knowledge about the world.
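The idea can be sketched as follows (illustrative Python, not derplanner's actual API; the predicate and tuple names are invented):

```python
from collections import defaultdict

# Minimal sketch of a fact database (illustrative, not derplanner's API):
# tuples grouped by predicate name, standing in for typed tuples.
class FactDatabase:
    def __init__(self):
        self.tables = defaultdict(set)

    def add(self, predicate, *args):
        self.tables[predicate].add(args)

    def query(self, predicate):
        # Return all tuples for a predicate, in a stable order.
        return sorted(self.tables[predicate])

db = FactDatabase()
db.add('at', 'robot', 'room1')          # invented example facts
db.add('connected', 'room1', 'room2')
print(db.query('at'))  # [('robot', 'room1')]
```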

des-swi-4.2


This allows the system to consult the needed files at startup.

detoxify-20210114


A complete list of all the identity labels available can be found [here](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data).

df-43.05


This software is still in development, and this means that there are going to be problems, including serious problems that, however unlikely, might damage your system or the information stored on it. Please be aware of this before playing.

df-44.12


This software is still in development, and this means that there are going to be problems, including serious problems that, however unlikely, might damage your system or the information stored on it. Please be aware of this before playing.

df-44.12-linux32


This software is still in development, and this means that there are going to be problems, including serious problems that, however unlikely, might damage your system or the information stored on it. Please be aware of this before playing.

dfhack-20200119


DFHack is a Dwarf Fortress memory access library, distributed with scripts and plugins implementing a wide variety of useful functions and tools.

dflat-debugger-0.15


This package contains the D-FLAT Debugger in version 0.15,

dialog-games-20210329


This repository contains implementations of dialog games for abstract argumentation frameworks and for two extensions that I developed during my PhD, namely *abductive* argumentation frameworks and *property-based* argumentation frameworks.

dialogs2-20160508


schemas.pl: contains the currently available schemas of the system

dig-1.1


This distribution contains:

dig-etl-engine-20200731


myDIG is a tool to build pipelines that crawl the web, extract information, build a knowledge graph (KG) from the extractions, and provide an easy-to-use interface to query the KG. The project web page is [DIG](http://usc-isi-i2.github.io/dig/).

disambiguate-20200625


This repository contains a set of easy-to-use tools for training, evaluating and using neural WSD models.

discourse-parser-dist-20150805


The Discourse Parser is an Open Source Software, and is released under the Common Public License. You are welcome to use the code under the terms of the licence for research purposes ONLY, however please acknowledge its use with a citation:

dl4ir-webnav-20210117


WebNav is a benchmark task for evaluating an agent with abilities to understand natural language and plan on partially observed environments. In this challenging task, an agent navigates through a web site consisting of web pages and hyperlinks to find a web page in which a query appears.

dmoz-urlclassifier-20180805


DMOZ is the largest, most comprehensive human-edited directory of the Web. It was historically known as the Open Directory Project (ODP). It contains a categorized list of Web URLs. Their listings are updated on a monthly basis and published in [RDF files](http://rdf.dmoz.org/rdf/).

dnf-contingent-20160811


DNFct_run: this directory contains the input theory translator in prolog and DNF exec file. PDDL benchmarks are run in this directory using the following command:

dnrdalmas-20110816


OVERVIEW: This directory contains three subdirectories:
* dnrDALMAS: Prolog files for the general-level implementation of the DALMAS architecture.
* colourandform: Prolog files for the implementation of the COLOUR&FORM system.
* wastecollectors: Prolog files for the implementation of the WASTE-COLLECTORS system.

dnrdalmas-20130206


OVERVIEW: This directory contains three subdirectories:
* dnrDALMAS: Prolog files for the general-level implementation of the DALMAS architecture.
* colourandform: Prolog files for the implementation of the COLOUR&FORM system.
* wastecollectors: Prolog files for the implementation of the WASTE-COLLECTORS system.

docker-grocy-20200321


# project information
project_name: grocy
project_url: "https://github.com/grocy/grocy"
project_logo: "https://grocy.info/img/grocy_logo.svg"
project_blurb: |
  [{{ project_name|capitalize }}]({{ project_url }}) is an ERP system for your kitchen! Cut down on food waste, and manage your chores with this brilliant utility.

doctest-tools-1.0a3


This is the README file for the doctest-tools package.

dpb-20210307


The book is available in German now. It is written in NoWeb and contains

dplp-20201113


1. Run the Stanford CoreNLP with the given bash script **corenlp.sh** with the command "*./corenlp.sh path_to_dplp/data*" - This is a little awkward, as I am not sure how to call the Stanford parser from any other directory.

dprolog-20180803


An extension of Prolog that allows rules to be annotated with a belief (a real number between 0 and 1 inclusive) and given a label, so that proofs can be generated with a belief attached to them and rules can be argued about.

dprolog-master-20160429


An extension of Prolog that allows rules to be annotated with a belief (a real number between 0 and 1 inclusive) and given a label, so that proofs can be generated with a belief attached to them and rules can be argued about.

drakon-editor-1.31


drakon_gen.tcl

drakon_gen.tcl is a command-line utility that generates code from a .drn file.

Usage example:

./drakon_gen.tcl -in examples/Python/python_demo.drn

This one will generate a file called python_demo.py in examples/Python.

./drakon_gen.tcl -in examples/Python/python_demo.drn -out .

This one will generate a file called python_demo.py and put it in the current folder.

In order for code generation to work, each .drn file must have a programming language selected in its properties.

To choose the language for the .drn file, open it in DRAKON Editor, go to File / File properties...

dunyazad-20190304


A story generation system (with choices(!)).

dunyazad-20190703


A story generation system (with choices(!)).

duprkit-20190601


**Everything on the master branch is broken due to the ongoing redesign. And unluckily the latest release is outdated. Please look forward to the next major release.**

dwarf-fortress-20200203


ABOUT: Dwarf Fortress is a single-player fantasy game. You can control a dwarven outpost or an adventurer in a randomly generated, persistent world.

dwarf-fortress-34.11


This software is still in development, and this means that there are going to be problems, including serious problems that, however unlikely, might damage your system or the information stored on it. Please be aware of this before playing.

easdrl-20210124


### POS data

1. ``{domain}_dependency.pkl`` contains the part-of-speech data for the action name extractor.
2. ``{domain}_arg_pos.pkl`` contains the part-of-speech data for the action argument extractor.

easyccg-0.2


EasyCCG is a CCG parser created by Mike Lewis.

easysrl-20200729


A pretrained model is available [here](https://drive.google.com/file/d/0B7AY6PGZ8lc-R1E3aTA5WG54bWM/view?usp=sharing).

eat-20080312


This 1600 bpi UNIX tar format tape contains the following files in addition to this one:

ec-20210502


DreamCoder is a wake-sleep algorithm that finds programs to solve a given set of tasks in a particular domain.

eclipse-basic-20160110


Support:
* tcltk.tgz - A matching Tcl/Tk release (8.5) (you may have that already). Needed for the tkeclipse development GUI.
* editors_eclipse_support.tgz - Support for various editors for editing ECLiPSe code.

edits-1.0


training_set is a file/directory of an already annotated RTE corpus.
model_path is the file in which the model is saved.

eisbot-20200202


EISBot is a [StarCraft: Brood War](http://us.blizzard.com/en-us/games/sc/) bot developed by Ben Weber at [UC Santa Cruz](http://games.soe.ucsc.edu/) as part of his dissertation research. The main objective for the project is to identify the capabilities necessary for expert Starcraft gameplay and to realize these capabilities in a game-playing agent.

elsa-20180902


Elsa is a tool that analyses your code without loading or running it. It can track types and provide helpful hints when things don't match up before you even try to run the code.

emacs-24.4


This directory tree holds version 24.4 of GNU Emacs, the extensible, customizable, self-documenting real-time display editor.

emacs-25.1


This directory tree holds version 25.1 of GNU Emacs, the extensible, customizable, self-documenting real-time display editor.

emacs-cl-20210524


Emacs Common Lisp is an implementation of Common Lisp, written in Emacs Lisp. It does not yet purport to conform to the ANSI standard since, among other things, CLOS and pretty printing are missing. However, most other Common Lisp features like lexical closures, packages, readtables, multiple values, bignums, adjustable arrays, etc., are present. At this stage many bugs remain and error checking is sparse.

emacs-mark-tools-20190728


A simple library for navigating the global and local mark rings in Emacs. Simply execute M-x list-marks for a navigable list of the global-mark-list. The prefix argument can be used to limit the list to the buffer's local mark list.

emacs-refactor-20190511


Emacs Refactor (EMR) is a framework for providing language-specific refactoring in Emacs. It includes refactoring commands for a variety of languages, including elisp itself!

emacs-shroud-20200129


Shroud is a password manager written in Guile which uses GnuPG in the backend. See Shroud's website at [[https://dthompson.us/projects/shroud.html][this link]]. This package is an Emacs interface to Shroud using the Buffers User Interface library.

emacs-wiki-2.72


This is the README file for emacs-wiki.

emma-src-20110821


1. Unpack the Quip distribution into the ptime release folder

emofilt-095


This is an emofilt distribution. You can get further information at http://emofilt.sourceforge.net/. The newest version of emofilt should always be available there via the CVS repository.

emovoice-bin-20141126


EmoVoice is an emotional speech recognizer implemented in the SSI framework. It comes with a pipeline (emovoice.pipeline) and an example model (emovoice.trainer). The user is encouraged to train a personalized model using the training GUI (modelui.exe).

encodings-dbai-tr-2017.107


In this directory, example D-FLAT encodings and related tools for various problems can be found. Some naming conventions:
- dynamic.lp is a D-FLAT encoding.
- monolithic.lp is a monolithic ASP program that can serve as comparison and is not used by D-FLAT.

english-resource-grammar-20190313


This directory provides a pre-release snapshot of the forthcoming 1214 version of the ERG, which is a ‘patch’ release addressing minor deficiencies in 1212. The core of the 1214 release has practically been frozen since late 2014, and this pre-release version has been in use already. Since then, we have slowly and lovingly improved interface aspects, notably the Semantic Interface (SEM-I), generation, and final sets of gold-standard treebanks. As of May 3 2016, all treebanks are in near-perfect condition, the SEM-I is stable, and there are at most minor pending revisions to generator trigger rules. Finally, the release ‘collateral’ (this file and ‘etc/redwoods.xls’) remains to be updated. The official release of this version of the ERG is planned for mid-May 2016.

enhsp-public-20210130


This repository contains ENHSP, which stands for Expressive Numeric Heuristic Planner. It is a forward heuristic search planner, but it is expressive in that it can handle:

enju-2.4.2


Enju is a syntactic analyzer for English. Its grammar is based on Head-driven Phrase Structure Grammar (HPSG), a linguistic theory for syntax. Since this system computes a more detailed structure of sentences than CFG parsers, you can obtain various information such as predicate-argument structures.

entailment-with-tensorflow-20190314


This repo hosts the code associated with my O'Reilly article, "Textual entailment with TensorFlow: Using neural networks to explore natural language," published on July 17, 2017.

ephyraquestionanalysis-20170320


A collection of [OpenEphyra](http://sourceforge.net/projects/openephyra/) components necessary for question analysis. **Dependencies**: Java, Maven, WordNet. **You may need to set the right locale**, see [build.sh](build.sh). Unlike initial versions relying on LTI repositories, this is a self-sufficient one.

epilog-20050622


This distribution contains the following directories:

epk-20170808


Single-Agent Planner is a complete, logic-based epistemic planner for a single agent that does not make the epistemic closed-world assumption.

erg-20140204


Stable tagged release with full (manual) updates of all gold profiles including LOGON, WeScience, and (after a long hiatus) the Verbmobil and ecommerce treebanks, along with the newly added SemCor (semantically tagged portion of the Brown corpus - the first 3100 items so far). Details on current ERG coverage of these profiles can be found on the Redwoods web page: http://www.delph-in.net/redwoods.

erg-2018


Stable tagged release with full (manual) updates of all gold profiles including LOGON, WeScience, and (after a long hiatus) the Verbmobil and ecommerce treebanks, along with the newly added SemCor (semantically tagged portion of the Brown corpus - the first 3100 items so far). Details on current ERG coverage of these profiles can be found on the Redwoods web page: http://www.delph-in.net/redwoods.

ergo-20200419


This is the source code for the Ergo compiler. Ergo is the [Accord Project][apmain] language for Smart Legal Contracts.

etalis-1.1


This is the public release of the complex event processing system ETALIS ( http://code.google.com/p/etalis ).

etymwn-20130208


The Etymological Wordnet project provides information about how words in different languages are etymologically related. The information is mostly mined from the English version of Wiktionary, but also contains a number of manual additions.

event-process-typing-20201010


# Semantic Typing of Event Processes

This is the repository for the resources in the CoNLL 2020 paper "What Are You Trying to Do? Semantic Typing of Event Processes". This repository contains the source code and links to some datasets used in our paper.

excitement-open-platform-20160618


This repository contains both the code and the documentation (i.e. wiki pages) of the next Excitement Open Platform (EOP) release, which is an open source software platform containing state-of-the-art algorithms for recognizing textual entailment relations: _given two text fragments, one named text and the other named hypothesis, the task consists in recognizing whether the hypothesis can be inferred from the text_

explainshell-20170303


explainshell is a tool (with a web interface) capable of parsing man pages, extracting options, and explaining a given command line by matching each argument to the relevant help text in the man page.
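The matching idea can be caricatured in a few lines (a toy sketch with invented help texts; nothing like explainshell's real man-page parser):

```python
# Toy sketch of argument-to-help-text matching (invented help texts, not
# explainshell's implementation): map each argument of a command line to
# the help text that documents it.
help_text = {
    '-l': 'use a long listing format',
    '-a': 'do not ignore entries starting with .',
}

def explain(cmdline):
    prog, *args = cmdline.split()
    return {arg: help_text.get(arg, '(no help found)') for arg in args}

print(explain('ls -l -a'))
```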

factual-statement-extractor-20100615


This is a software package for extracting simplified factual statements from complex sentences. It was designed for automatic factual question generation but may be useful for other natural language processing and generation problems (e.g., summarization).

factualstatementextractor-20100626


This is a software package for extracting simplified factual statements from complex sentences. It was designed for automatic factual question generation but may be useful for other natural language processing and generation problems (e.g., summarization).

fastmoe-20210603


An easy-to-use and efficient system to support the Mixture of Experts (MoE) model for PyTorch.

fibo-20210706


FIBO is a trademark of EDM Council, Inc. It is also standardized by the [Object Management Group](https://www.omg.org/index.htm).

flex-pp-2.3.8


This is release 2.3 of flex - a full release.

flip-0.7


This directory contains the source code of the FLIP system, an implementation of an IFLP (Inductive Functional Logic Programming) framework, plus examples and documentation.

flux-3.1


FLUX is a high-level programming system for cognitive agents of all kinds, including autonomous robots. Cognitive agents control themselves using an internal model of their environment. The FLUX kernel system endows agents with the general cognitive ability to reason about their actions and the sensor data they acquire. FLUX agents are also able to plan ahead their actions in order to achieve specific goals. FLUX allows one to implement complex strategies with concise and modular agent programs. As an efficient constraint logic program, the FLUX system scales up well to domains which require large states and long action sequences.

fluxgui-20190524


The f.lux indicator applet `fluxgui` is an indicator applet to control `xflux`, an application that makes the color of your computer's display adapt to the time of day: warm at night, and like sunlight during the day. Reducing blue light exposure in the evening can help you fall asleep at night. See https://justgetflux.com/research.html for more details.

fluxplayer-prolog-engine-20180611


This is going to take some work to write an executable that the gdl-perf framework can invoke.

food-recipe-cnn-20210511


Maturaarbeit 2018: This work makes usage of deep convolutional neural networks with Keras to classify images into 230 food categories and to output a matching recipe. The dataset contains >400'000 food images and >300'000 recipes from chefkoch.de.

foodkg.github.io-20210416


This dataset includes mappings to some of the concepts found in: - DBpedia - schema.org - FoodOn - Units Ontology - ChEBI

fortuna-0.2


The directory Release contains an executable that runs the case studies from the paper/technical report. The executable is compiled on a standard PC with Ubuntu Linux using The GCC C++ Compiler 4.3.3. To run the executable see/run the script Release/fortuna.sh

fossology-0.6.0


About: FOSSology is a framework for software analysis, both source and binary. It uses a repository for unpacking and storing the uploads, "agents" to analyze the uploaded files, and a Postgres database to store and display the results. Also included is a license agent for scanning source code for potential license texts.

fossology-0.8.0


About: FOSSology is a framework for software analysis, both source and binary. It uses a repository for unpacking and storing the uploads, "agents" to analyze the uploaded files, and a Postgres database to store and display the results. Also included is a license agent for scanning source code for potential license texts.

fossology-0.9.0


About ===== FOSSology is a framework for software analysis, both source and binary. It uses a repository for unpacking and storing the uploads, "agents" to analyze the uploaded files, and a Postgres database to store and display the results. Also included is a license agent for scanning source code for potential license texts.

fowl-0.41


This is the README file for F-OWL v0.41. CVS version: $Revision: 1.1 $, $Date: 2003/09/25 03:49:23 $

fpm-20200511


* If fpm is not helping you make packages easily, then there is a bug in fpm. * If you are having a bad time with fpm, then there is a bug in fpm. * If the documentation is confusing, then this is a bug in fpm.

fpos-20180902


A CSV transaction export from any of the following banks can be processed by `fpos`
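The column layout differs from bank to bank; the sketch below normalises one hypothetical `date,amount,description` export into typed rows. The layout, field order, and date format here are illustrative assumptions, not fpos's actual supported formats.

```python
import csv
import io
from datetime import datetime

def parse_transactions(text):
    """Parse a bank CSV export into (date, amount, description) tuples.

    Assumes a hypothetical three-column date,amount,description layout;
    real bank exports differ, and fpos itself handles several formats.
    """
    rows = []
    for date_s, amount_s, desc in csv.reader(io.StringIO(text)):
        rows.append((datetime.strptime(date_s, "%d/%m/%Y").date(),
                     float(amount_s), desc.strip()))
    return rows

sample = "01/02/2018,-12.50,COFFEE SHOP\n02/02/2018,1500.00,SALARY\n"
for row in parse_transactions(sample):
    print(row)
```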

frdcsa-panoply-git-20200329


The FRDCSA (https://frdcsa.org) has been under development for 20 years as of writing ([2020-03-29,02:53:26]). It is a comprehensive free/libre artificial intelligence system. Mainly it collects other A.I. systems and gets them all to talk to each other. However, it has quite a lot of original code as well, maybe over 2 million lines of code. The most important individual project is the Free Life Planner (https://github.com/aindilis/free-life-planner).

free-cite-20110815


Rails is a web-application and persistence framework that includes everything needed to create database-backed web applications according to the Model-View-Control pattern of separation. This pattern splits the view (also called the presentation) into "dumb" templates that are primarily responsible for inserting pre-built data in between HTML tags. The model contains the "smart" domain objects (such as Account, Product, Person, Post) that hold all the business logic and know how to persist themselves to a database. The controller handles the incoming requests (such as Save New Account, Update Product, Show Post) by manipulating the model and directing data to the view.

free-kmgen-20190303


An account for a PostgreSQL database server is needed.

freebase-tools-1.0.0


FreebaseTools is a small toolkit to pre-process, filter, index and store Google's Freebase knowledge base in a fast and relatively "small" Lucene index. KB Variants such as BaseKB Gold which is used as the reference KB for TAC-KBP can also be handled.

freebasic-1.07.1


FreeBASIC gives you the FreeBASIC compiler program (fbc or fbc.exe), plus the tools and libraries used by it. fbc is a command line program that takes FreeBASIC source code files (*.bas) and compiles them into executables.

frozen-bubble-20190309


This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License version 2, as published by the Free Software Foundation.

fsaplanner-20180605


This is an implementation in SWI Prolog of a planner that generates loops. See "Planning with Loops" in IJCAI-05 by Hector Levesque for background, and the HISTORY file for a rough idea of the new finite-state-automaton representation of plans.

fuel-20210314


FUEL is a succinct Scala framework for implementing metaheuristic algorithms, in particular evolutionary algorithms. It originated in my work on the book "Behavioral Program Synthesis with Genetic Programming" (Springer 2016).

funbot-koha-20080812


Funbot is a Perl IRC bot designed to sit on a server and perform various tasks suited to IRC bots. This can include joining and parting channels, keeping track of channel ops/voices/bans, and various miscellaneous tasks. Anyone who frequents IRC has probably seen Eggdrops, mIRC ``bots'', fserves, or whatever services the IRC network provides.

fuse-taglayer-20210526


A read-only tag-filesystem overlay for hierarchical filesystems

galvanise-v2


There is a small interpreter in the statemachine to do the propagation, with inlined code depending on the number of outputs to be triggered. The ordering of basic blocks generated by the compiler is forced in a way that follows the common code path (about 90% of the time, i.e. when there are no triggers). Ultimately, the implementation overlaps substantially with Sancho's propnet statemachine, which is documented in detail and seems to be the fastest way to propagate (at this point in time), making it very hard to do anything else. Nevertheless, I experimented a bit with some hybrid propnet/state machines and still think that, given more meta-time, games such as speed chess could get an order of magnitude faster by splitting the network up some more and generating code to replace parts of the propnet.

gambit-16.0.1


This is the README file for Gambit, software tools for game theory.

game-20210120


A hack-and-slash style multi-player dungeon crawl blending the heuristics of NetHack with a combat engine inspired by Minnesota Dungeon (Minneapolis Dungeon, Larry's Maze, et al.).

gamer-2.0


The directory 'JavaBDD' contains the sources taken from the SourceForge project (slightly extended to enable CUDD to store BDDs on disk). The original version can be found on the web at 'http://javabdd.sourceforge.net/'. The most recent version, 2.0, is in the Subversion repository, from where we also got the jdd.jar package.

gamification-engine-20210112


The Gamification-Engine (gengine) is open source software (MIT) for integrating any kind of gamification feature into your product.

gappa-0.18.0


Gappa (Génération Automatique de Preuves de Propriétés Arithmétiques -- automatic proof generation of arithmetic properties) is a tool intended to help verifying and formally proving properties on numerical programs dealing with floating-point or fixed-point arithmetic.

gappa-1.0.0


Gappa (Génération Automatique de Preuves de Propriétés Arithmétiques -- automatic proof generation of arithmetic properties) is a tool intended to help verifying and formally proving properties on numerical programs dealing with floating-point or fixed-point arithmetic.

gappa-1.1.2


Gappa (Génération Automatique de Preuves de Propriétés Arithmétiques -- automatic proof generation of arithmetic properties) is a tool intended to help verifying and formally proving properties on numerical programs dealing with floating-point or fixed-point arithmetic.

gappalib-coq-1.0.0


This support library provides vernacular files so that the certificates Gappa generates can be imported by the Coq proof assistant. It also provides a "gappa" tactic that calls Gappa on the current Coq goal.

gappalib-coq-1.1.0


This support library provides vernacular files so that the certificates Gappa generates can be imported by the Coq proof assistant. It also provides a "gappa" tactic that calls Gappa on the current Coq goal.

gappalib-coq-1.2.1


This support library provides vernacular files so that the certificates Gappa generates can be imported by the Coq proof assistant. It also provides a "gappa" tactic that calls Gappa on the current Coq goal.

gappalib-coq-1.3.4


This support library provides vernacular files so that the certificates Gappa generates can be imported by the Coq proof assistant. It also provides a "gappa" tactic that calls Gappa on the current Coq goal.

gappalib-coq-1.4.0


This support library provides vernacular files so that the certificates Gappa generates can be imported by the Coq proof assistant. It also provides a "gappa" tactic that calls Gappa on the current Coq goal.

gate-2.1


GATE is a tool for scientists performing experiments that involve processing human language. GATE is funded by the EPSRC and the EU.

gateway-20190617


Gateway is a movement and a project to create a service for cooperative storywriting and textual roleplaying that is free software and belongs to the community.

gc-098


This is the README file for Gutcheck.

gc-lama-20160810


"sas-format.txt" in the "doc" directory is a description of the translator output format.

gcd-20200619


# A General-Purpose Algorithm for Constrained Sequential Inference This repository contains the archived code for the CoNLL 2019 paper [A General-Purpose Algorithm for Constrained Sequential Inference](https://cogcomp.seas.upenn.edu/papers/DeutschUpRo19.pdf).

gdl-perf-20180423


This is a framework for testing the performance of Game Description Language (GDL) interpreters and reasoners used in General Game Playing. It allows for automatically running tests on a wide variety of reasoners across a wide variety of games, with minimal human intervention. It also supplies tools for analyzing the outputs of these tests.

gentoo-libbash-20190930


This is the README file for libbash

geopoint-20190720


This library expects latitude and longitude in EPSG:4326 (WGS84). To convert between different projections check out [Proj4js](http://proj4js.org//)

ggp-base-20170302


A simple Prover-based state machine implementation is included in GGP Base, so you don't need to worry about the details of converting a game description into a state machine. To write a gamer based on StateMachineGamer, derive your class from players.gamer.statemachine.StateMachineGamer. Applications like the PlayerPanel should automatically recognize your new class and it should appear in their lists of available players right away.

ggp-base-20170429


A simple Prover-based state machine implementation is included in GGP Base, so you don't need to worry about the details of converting a game description into a state machine. To write a gamer based on StateMachineGamer, derive your class from players.gamer.statemachine.StateMachineGamer. Applications like the PlayerPanel should automatically recognize your new class and it should appear in their lists of available players right away.

ggp-base-master-20160204


A simple Prover-based state machine implementation is included in GGP Base, so you don't need to worry about the details of converting a game description into a state machine. To write a gamer based on StateMachineGamer, derive your class from players.gamer.statemachine.StateMachineGamer. Applications like the PlayerPanel should automatically recognize your new class and it should appear in their lists of available players right away.

ggp-botter-20180321


GGP-Botter is a GGP Bot framework written in SWI-Prolog. It provides an interface for communication with GGP Server, as well as some helper functions (TODO) which will come in handy when creating your own bot.

ggp-botter-20190121


GGP-Botter is a GGP Bot framework written in SWI-Prolog. It provides an interface for communication with GGP Server, as well as some helper functions (TODO) which will come in handy when creating your own bot.

ggp-botter-20190127


GGP-Botter is a GGP Bot framework written in SWI-Prolog. It provides an interface for communication with GGP Server, as well as some helper functions (TODO) which will come in handy when creating your own bot.

ggp-zero-20180805


Although many games have been trained, there is a multitude of games left to try. There are some game types which are completely unsupported right now, for starters:

git-secret-20210315


`git-secret` is a bash tool which stores private data inside a git repo. `git-secret` encrypts files with permitted users' public keys, allowing users you trust to access encrypted data using pgp and their secret keys.

gitrob-20200514


Gitrob is a tool to help find potentially sensitive files pushed to public repositories on Github. Gitrob will clone repositories belonging to a user or organization down to a configurable depth and iterate through the commit history and flag files that match signatures for potentially sensitive files. The findings will be presented through a web interface for easy browsing and analysis.

gitrob-master-20160430


Gitrob is a command line tool which can help organizations and security professionals find sensitive information lingering in publicly available files on GitHub. The tool will iterate over all public organization and member repositories and match filenames against a range of patterns for files that typically contain sensitive or dangerous information.
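The filename-matching step can be pictured with a tiny sketch. The signatures below are illustrative assumptions, not Gitrob's actual (much larger, curated) signature set:

```python
import re

# Hypothetical signature patterns in the spirit of Gitrob's checks;
# the real tool ships a far more extensive set.
SIGNATURES = [
    (re.compile(r"\.pem$"), "Potential cryptographic private key"),
    (re.compile(r"(^|/)id_rsa$"), "SSH private key"),
    (re.compile(r"\.(sqlite|sqlite3)$"), "SQLite database file"),
    (re.compile(r"(^|/)\.env$"), "Environment file with possible credentials"),
]

def flag(paths):
    """Return (path, description) pairs for filenames matching any signature."""
    hits = []
    for path in paths:
        for pattern, description in SIGNATURES:
            if pattern.search(path):
                hits.append((path, description))
                break
    return hits

print(flag(["src/main.go", "deploy/server.pem", ".env", "notes.txt"]))
```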

giza-pp-1.0.1


This package contains the GIZA++ toolkit and the mkcls tool, originally written by F.J. Och and several other authors.

glulxe-0.5.2


Since this is a Glk program, it must be built with a Glk library. See the Glk home page at

gnes-20200130


This command downloads the latest GNES image (based on [Alpine Linux](https://alpinelinux.org/)) and runs it in a container. When the container runs, it prints an informational message and exits.

gnosis-utils-current-20111010


Gnosis Utilities contains several subpackages for working with XML, as well as other generally useful tools. The major modules are:

gnugo-3.9.1


This is GNU Go, a Go program. Development versions of GNU Go may be found at http://www.gnu.org/software/gnugo/devel.html. Consult TODO if you are interested in helping.

gnutrition-0.3


This is version 0.3 of GNUTRITION, a recipe and food nutritional analysis application for GNOME.

go-vncdriver-20171120


A fast VNC driver.

goedelgod-20200707


This repository contains computer-assisted formalizations of ontological proofs.

golorp-0.0.1


Welcome to Caves of Golorp, to my knowledge the only Prolog Roguelike game in existence. This is an alpha release.

google-calendar-java-api-20161230


Android Dependencies

The following are the jars from the libs folder required for Android applications:
  • google-api-client-android-1.22.0.jar (for SDK >= 2.1)
  • google-http-client-android-1.22.0.jar
The libs folder also contains properties files that specify the location of source jars for Android projects in Eclipse.
Please see the Android wiki for the Android Developer's Guide.

gophi-20210313


GOPHI (*Generation Of Parenthesized Human Input*) is a system for generating a literal reading of Abstract Meaning Representation (AMR) structures. The system, written in [SWI-Prolog](http://www.swi-prolog.org "SWI-Prolog"), uses a symbolic approach to transform the original rooted graph into a tree of constituents that is transformed into an English sentence by [jsRealB](https://github.com/rali-udem/JSrealB "GitHub - rali-udem/JSrealB: A JavaScript bilingual text realizer for web development").

gpt-1.40-src-linux-080602


This is Release 1.40 of the General Planning Tool (GPT).

gpt-2


You can read about GPT-2 and its staged release in our [original blog post](https://blog.openai.com/better-language-models/), [6 month follow-up post](https://openai.com/blog/gpt-2-6-month-follow-up/), and [final post](https://www.openai.com/blog/gpt-2-1-5b-release/).

gpt-2-20200215


You can read about GPT-2 and its staged release in our [original blog post](https://blog.openai.com/better-language-models/), [6 month follow-up post](https://openai.com/blog/gpt-2-6-month-follow-up/), and [final post](https://www.openai.com/blog/gpt-2-1-5b-release/).

gpt2-20190716


An implementation of training for [GPT2](https://openai.com/blog/better-language-models/) that supports both GPUs and TPUs. The dataset scripts are a bit hacky and will probably need to be adapted to your needs. ## Requirements For GPUs:

graphbrain-20210326


Graphbrain is an Artificial Intelligence open-source software library and scientific research tool. Its aim is to facilitate automated meaning extraction and text understanding, as well as the exploration and inference of knowledge.

gringo-4.5.4


Gringo is a grounder that, given an input program with first-order variables, computes an equivalent ground (variable-free) program. Its output can be processed further with answer set solvers like clasp, cmodels, or smodels.
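As a toy illustration of what grounding means (this is emphatically not Gringo's algorithm, which is far smarter), one can substitute every constant for every variable in each rule:

```python
from itertools import product

def ground(rules, constants):
    """Naively ground (head, body) rules whose atoms are
    (predicate, args) tuples, substituting every combination of the
    given constants for the variables (uppercase names). A toy
    illustration of the job a grounder performs."""
    def subst(atom, env):
        pred, args = atom
        return (pred, tuple(env.get(a, a) for a in args))

    grounded = []
    for head, body in rules:
        variables = sorted({a for _, args in [head] + body
                            for a in args if a.isupper()})
        for combo in product(constants, repeat=len(variables)):
            env = dict(zip(variables, combo))
            grounded.append((subst(head, env),
                             [subst(b, env) for b in body]))
    return grounded

# p(X) :- q(X).  grounded over constants {a, b}
rules = [(("p", ("X",)), [("q", ("X",))])]
print(ground(rules, ["a", "b"]))
```

Real grounders avoid this combinatorial blow-up by instantiating only atoms that can actually be derived.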

grocy-20200325


## Motivation A household needs to be managed. Until now (for almost 10 years) I did this with my first self-written software (a C# Windows Forms application) and a bunch of Excel sheets. The software is a pain to use and Excel is Excel. So I searched for and tried different things for a (very) long time; nothing fitted 100 %, so this is my attempt at a "complete household management" thing. ERP your fridge!

gvgai-20170429


This is the framework for the General Video Game Competition 2014 - http://www.gvgai.net/

gwsd-1.0


GWSD is a system for Unsupervised Graph-based All-Words Word Sense Disambiguation. Please refer to (Sinha and Mihalcea, 2007) for a description of the graph-based disambiguation method, as well as for brief descriptions of all the similarity measures and the graph-centrality algorithms used by GWSD. For a quick trial of GWSD, you can use some of the pre-built graphs and feature files provided with the distribution. These graphs are stored in folders whose names clearly specify the type of the graphs (i.e. the corpus, window size, part-of-speech used, etc.). One example of such a set of graphs stored for, say, Senseval-2, a window size of 2, part of speech 'noun', and a similarity measure 'jcn', would be as follows: The set of graphs, one graph for each word to be disambiguated, will be located inside the folder 'Senseval-2.jcn.n.2.Graphs'.
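The folder-name convention above packs the graph parameters into the directory name; a small sketch of composing such a name follows. The component order is inferred from the single example given, so treat it as an assumption:

```python
def graph_folder(corpus, measure, pos, window):
    """Build a graph-folder name from the parameters described above
    (corpus, similarity measure, part of speech, window size).
    The component order is an assumption based on the one example."""
    return f"{corpus}.{measure}.{pos}.{window}.Graphs"

print(graph_folder("Senseval-2", "jcn", "n", 2))  # Senseval-2.jcn.n.2.Graphs
```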

ha-tpb-planner-20201010


This paper introduces an approach to human-aware epistemic planning in which a rational intelligent agent plans its actions for encouraging a human to proceed in a social virtual reality (VR) environment. In order to persuade the human user to execute specific actions, the agent adapts the virtual environment by adjusting motivators in the environment. The agent's model of the human is based on the theory of planned behavior (TPB), a cognitive theory to explain and predict human behavior. The intelligent agent manipulates the environment, a process where the agent conducts epistemic actions, i.e., adapting the environment and observing human responses, in order to understand the human's behavior and encourage human actions. An action reasoning framework is introduced that defines transitions between goal-oriented human activities in the virtual scenario. The proposed human-aware planning architecture can also be applied in environments that are not virtual, by utilizing modern mobile devices which have built-in sensors that measure motion, orientation, and various environmental conditions.

halo-20030527


Project Halo is a staged research effort by Vulcan Inc. towards the development of a Digital Aristotle. The Digital Aristotle will differentiate itself from current search engine technology in a number of important ways. It is capable of answering questions for which text does not currently exist in any document. The Digital Aristotle's ability to produce user- and domain-appropriate justifications will promote the end user's trust that the answers generated by the application are indeed correct.

hands-20200730


This repository contains the code and data to reproduce the experiments of the paper "[Fine-grained Entity Recognition with Reduced False Negatives and Large Type Coverage](https://openreview.net/forum?id=HylHE-9p6m)".

handwritingrecognitionsystem-20181121


This repository is the Tensorflow implementation of the Handwriting Recognition System described in [Handwriting Recognition of Historical Documents with Few Labeled Data](https://www.researchgate.net/publication/325993975_Handwriting_Recognition_of_Historical_Documents_with_Few_Labeled_Data). Please cite the paper if you use this code in your research paper.

hat-0.1


This requires the diagnostics library to be available and detected by configure. In that case, the following additional builds may be performed from within any source (sub-)directory: make audit, make debug, and make prod (equivalent to make), to build at the respective diagnostics levels. Note that the results of the audit and debug builds are placed in the directories build/audit and build/debug respectively, whereas make prod builds directly in the source directory. If make debug or make audit fails while running configure with the error "source directory already configured", purge the build/ directory using rm -r build/ and run the make command again.

haz-uambat-afd33d8ef811


This project aims to serve as a mechanism for converting between various action theory formalisms. Possible uses include, but are not limited to,

hdrug-x86-4.334


This is the binary stand-alone runtime version of Hdrug.

helloworldenvironment-20190522


This environment creates a simple whiteboard showing messages that can be written there by the entity that it creates.

hias-20200515


The **Peter Moss Leukemia AI Research HIAS Network** is an open-source Hospital Intelligent Automation System. The system's server powers an intelligent network using a locally hosted, encrypted IoT server and proxy.

hiddenattributemodels-20200526


A Hadoop script for automatically extracting the needed messages and cleaning them is available in prepare_data/hadoop/. It expects to find reddit_comments and reddit_submission in the user's home directory. If you opt to extract the messages yourself rather than using Hadoop, you will need to run prepare_data/clean_input_msg.py to clean the messages' text.

highlight-20200213


This file is based on the original Boost API documentation: http://www.boost.org/doc/libs/1_32_0/libs/regex/doc/syntax.html

hmm-citation-extractor-20080702


This will create a citation_cora.train file from the train/citation_cora.xml file.

hol-20190726


This is the distribution directory for the Kananaskis release of HOL4. See http://hol-theorem-prover.org for online resources.

hol-omega-kananaskis-5


This is the distribution directory for the Kananaskis release of HOL-Omega. The following is a brief listing of what's available.

hooryszeider05-20181124


This archive contains results supplementing the paper titled "Computing Unsatisfiable k-SAT Instances with Few Occurrences per Variable" by Shlomo Hoory and Stefan Szeider.

hrlplus-20200405


In his book *Proofs and Refutations*, Lakatos identifies seven methods by which mathematical discovery and justification can occur. These methods suggest ways in which concept definitions, conjectures and proofs gradually evolve via interaction between mathematicians. Different mathematicians may have different interpretations of a conjecture, examples or counterexamples of it, and beliefs regarding its value or theoremhood. Through discussion, concepts are refined and conjectures and proofs modified. For instance, when a counterexample is found, one might look for general properties which make it fail a conjecture, and then modify the conjecture by excluding that type of counterexample (piecemeal exclusion). Alternatively, one might generalise from the positives and then limit the conjecture to examples of that type (strategic withdrawal). Another reaction might be to deny that the object is a counterexample on the grounds that the conjecture refers to objects of a different type (monster barring). Given a faulty proof, a counterexample may be used to highlight areas of weakness in the proof, and to either modify the proof or the conjecture which it purports to prove (lemma incorporation).

hrlplus-20200816


In his book *Proofs and Refutations*, Lakatos identifies seven methods by which mathematical discovery and justification can occur. These methods suggest ways in which concept definitions, conjectures and proofs gradually evolve via interaction between mathematicians. Different mathematicians may have different interpretations of a conjecture, examples or counterexamples of it, and beliefs regarding its value or theoremhood. Through discussion, concepts are refined and conjectures and proofs modified. For instance, when a counterexample is found, one might look for general properties which make it fail a conjecture, and then modify the conjecture by excluding that type of counterexample (piecemeal exclusion). Alternatively, one might generalise from the positives and then limit the conjecture to examples of that type (strategic withdrawal). Another reaction might be to deny that the object is a counterexample on the grounds that the conjecture refers to objects of a different type (monster barring). Given a faulty proof, a counterexample may be used to highlight areas of weakness in the proof, and to either modify the proof or the conjecture which it purports to prove (lemma incorporation).

hs100-20170731


The [tp-link Wi-Fi Smart Plug model HS100](http://www.tp-link.us/products/details/HS100.html) is an embedded Linux computer with a Wifi chip, a 110/220 V AC relay with a 15 A current limit, and a US-style grounded electrical socket. You pair with it by establishing an ad-hoc network between the plug and a smartphone (also called Wifi direct). After giving your router's SSID and access information, the plug connects to it and you can control the plug with the app provided by tp-link, called Kasa. One downside of using Kasa is that it's really not much more than a wall-switch in an app, though it does have pretty rich timer features which are nice. But you can't do things like turn the light on or off in response to events on the internet. Tp-link does provide a network control mode, but you have to pass control of your plug over to them, which isn't particularly great if you endeavor to remain the master of your own domain, haha only serious.
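For the curious, community reverse engineering (not this text, so treat the details as assumptions) reports that Kasa plugs accept JSON commands over TCP port 9999, obfuscated with a simple autokey XOR starting from the key byte 171. A sketch of that obfuscation:

```python
def encrypt(plaintext: bytes, key: int = 171) -> bytes:
    """Autokey XOR obfuscation reportedly used by TP-Link Kasa smart
    plugs (an assumption from community reverse engineering, not from
    the text above): each output byte becomes the next key."""
    out = bytearray()
    for b in plaintext:
        key = key ^ b
        out.append(key)
    return bytes(out)

def decrypt(ciphertext: bytes, key: int = 171) -> bytes:
    """Invert the autokey XOR: each ciphertext byte is the next key."""
    out = bytearray()
    for b in ciphertext:
        out.append(key ^ b)
        key = b
    return bytes(out)

cmd = b'{"system":{"set_relay_state":{"state":1}}}'
assert decrypt(encrypt(cmd)) == cmd
```

On the wire, the JSON payload is said to be prefixed with a 4-byte big-endian length; again, an assumption, not something the description above states.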

htn-translation-20170701


HTNTranslation is a program for translating [Hierarchical Task Network](http://www.aaai.org/Papers/AAAI/1994/AAAI94-173.pdf) problems into [PDDL](http://www.jair.org/media/1129/live-1129-2132-jair.pdf). This is an extension of the work described in "[Translating HTNs to PDDL](http://www.umiacs.umd.edu/publications/translating-htns-pddl-small-amount-domain-knowledge-can-go-long-way)," handling both totally ordered and partially ordered subtasks.

http-proxy-20200516


This module is a pure Perl HTTP proxy.

hyperfoods-20210508


A vectorial representation for every ingredient and recipe was generated using Word2Vec. An SVC model was trained to return recipes' cuisines from their set of ingredients. South Asian, East Asian and North American cuisines were predicted with more than 73% accuracy. African, Southern European and Middle East cuisines contain the highest number of cancer-beating molecules. Finally, a web application was developed that is able to predict the ingredients from an image, suggest new combinations, and retrieve the cuisine the recipe belongs to, along with a score for the expected number of negative interactions with antineoplastic drugs (github.com/warcraft12321/HyperFoods).

igor-2.0.8


./LICENSE -- the license file
./README -- this readme file
./igor2.cabal -- Cabal package description
./Setup.hs -- Cabal package installation
./expl/batch.txt -- a batch file example
./expl/Examples.hs -- some example specifications
./src/* -- the source files (see APPENDIX for a complete list)

I. Introduction --------------------------- Igor2 is an inductive programming system, which generalises over given I/O examples of some target functions and constructs a solution which is complete and correct w.r.t. the given examples. Given the type and some equations of e.g. the function 'last' as Haskell code

ilfwn-20110820


This avoids assigning the same offset to different synsets. For example, both "able.a.01" and "entity.n.01" would share the offset 1740, whereas in ILF-WN they have assigned 300001740 and 100001740 respectively.
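The scheme implied by the example is to prefix a part-of-speech digit onto the 8-digit WordNet offset. A sketch, with the noun and adjective prefixes taken from the example above and the remaining assignments assumed:

```python
# POS prefixes inferred from the example above: nouns get 1, adjectives 3.
# The verb and adverb assignments are assumptions.
POS_PREFIX = {"n": 1, "v": 2, "a": 3, "r": 4}

def unique_offset(offset: int, pos: str) -> int:
    """Prefix the part of speech onto an 8-digit WordNet offset so that
    synsets of different parts of speech can never collide."""
    return POS_PREFIX[pos] * 100_000_000 + offset

assert unique_offset(1740, "n") == 100001740   # entity.n.01
assert unique_offset(1740, "a") == 300001740   # able.a.01
```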

ilias-lt4el-m30


2. As ILIAS admin, go to the 'Administration >> Authentication and Registration' options and click on the link for the 'Shibboleth' settings.

3. Activate the "Enable Shibboleth Support" checkbox at the top. After defining the default user role for new users registering via Shibboleth and the name of the Shibboleth federation this service is part of, you have to define whether Shibboleth users shall select their home organization directly on the ILIAS login page or on an external page. If you have chosen to use the ILIAS WAYF, you have to make sure that Shibboleth is configured to have a default applicationId for the element and that the default Shibboleth handlerURL is configured to be "/Shibboleth.sso", which usually is the default setting for Shibboleth. To check that, open the shibboleth.xml configuration file and look for the element, which must have an attribute 'applicationId', e.g. applicationId="default". If you don't want to use the default session initiator (for example because your ILIAS installation is part of several federations), you can specify the location of a session initiator for an Identity Provider as a third argument. The session initiators can be found in the shibboleth.xml configuration file as well. If you chose to use an external WAYF, fill in a URL to an image that is to be used for the login button. The default is 'images/shib_login_button.gif'. The login instructions can be used to place a message for Shibboleth users on the login page. These instructions are independent of the current language the user has chosen. Read below what you can use the data manipulation file for.

4. Fill in the fields of the form for the attribute mapping. You need to provide the names of the environment variables that contain the Shibboleth attributes for the unique ID, firstname, surname, etc. This could e.g. be 'HTTP_SHIB_PERSON_SURNAME' for the person's last name.

Refer to the Shibboleth documentation or the documentation of your Shibboleth federation for information on which attributes are available. The field for the 'unique Shibboleth attribute' is of particular importance because this attribute is used for the user mapping between ILIAS and Shibboleth users.

Shibboleth attributes needed by ILIAS: for ILIAS to work properly, Shibboleth should at least provide the attributes that are used as firstname, lastname and email in ILIAS. Furthermore, you have to provide an attribute that contains a unique value for each user. This could e.g. also be the user's email address. This unique attribute is needed to map the ILIAS user name to a certain Shibboleth user.

illness-index-2015.04


This is a prototype program.

im2latex-dataset-20181202


The end result should have two files and one directory (names can be changed in `formula2image.py`):
- `im2latex.lst` -- each line is in the format `formula_idx image_name render_type`, where formula_idx is the line number of the formula in `im2latex_formulas.lst`, image_name is the name of the image connected to this rendering (without '.png'), and render_type is the name of the render setup used, defined in `formula2image.py`
- `im2latex_formulas.lst` -- each line contains one formula
- `/formula_images` -- the directory where images are stored
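Joining the two files is a line-number lookup; a small sketch follows (the image names and formulas below are made up for illustration):

```python
def pair_formulas(listing: str, formulas: str):
    """Join im2latex.lst lines (formula_idx image_name render_type)
    with the formulas in im2latex_formulas.lst by line number."""
    formula_lines = formulas.splitlines()
    pairs = []
    for line in listing.splitlines():
        idx, image_name, render_type = line.split()
        pairs.append((image_name + ".png", formula_lines[int(idx)]))
    return pairs

# Hypothetical sample data in the documented format.
listing = "0 7944775fc9 basic\n1 1b7d1492c2 basic"
formulas = "x^2 + y^2 = z^2\n\\frac{a}{b}"
print(pair_formulas(listing, formulas))
```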

im2markup-20170611


A general-purpose, deep learning-based system to decompile an image into presentational markup. For example, we can infer the LaTeX or HTML source from a rendered image.

im2recipe-20210416


This repository contains the code to train and evaluate models from the paper: _Learning Cross-modal Embeddings for Cooking Recipes and Food Images_

imagematting-0.1


This application requires the user to define a rough boundary of an object with a large brush. During the drawing, a high-quality segmentation is generated, interactively. Problematic areas can still be edited manually by the user.

implie-20160616


IMPLIE (IMPLicit relation Information Extraction) is a program that extracts binary relations from English sentences where the relationship between the two entities is not explicitly stated in the text. IMPLIE supports the following target relations out-of-the-box: *has nationality*, *has job title*, *has province*, *has city*, and *has religion*. However, other relations can be supported by providing a list of keywords for a new target relation. This is possible because IMPLIE uses a target-independent syntactic language model.

imps-2.0


IMPS 2.0
Copyright (c) 1990-2005 The MITRE Corporation
Authors: W. M. Farmer, J. D. Guttman, F. J. Thayer

Contents:
A. Introduction
B. IMPS Web Site
C. How to Install IMPS
D. How to Start IMPS
E. How to Convert from IMPS 1.2 to IMPS 2.0
F. Questions, Comments, and Bug Reports
G. Acknowledgments

A. Introduction

IMPS is an Interactive Mathematical Proof System developed at The MITRE Corporation. The IMPS system is available without fee on the Web under the terms of a public license (see section B below).

IMPS 2.0, which is written in Common Lisp, runs on Unix platforms with at least 16 MB of physical memory. IMPS 2.0 should work with most versions of Common Lisp; we support Allegro CL, CLISP, and CMU Common Lisp. We prefer CLISP: it produces small executables, is well supported, and is available at http://clisp.sourceforge.net/ without fee. (Use CLISP 2.29 instead of CLISP 2.33.) We have successfully run IMPS 2.0 with these versions of Common Lisp on SunOS, Sun Solaris, and Linux. IMPS 2.0 runs under the X Window System and has an Emacs-based interface; we primarily support the XEmacs version of Emacs.

The older IMPS 1.2, which is written in the T programming language, runs only on Sun 4 SPARCstations. IMPS 1.2 is no longer being developed or supported and should be considered obsolete. Users of IMPS 1.2 who want to convert to IMPS 2.0 should read section E below.

IMPS is intended to provide organizational and computational support for the traditional techniques of mathematical reasoning. In particular, the logic of IMPS allows functions to be partial and terms to be undefined. The system consists of a database of mathematics (represented as a network of axiomatic theories linked by theory interpretations) and a collection of tools for exploring, applying, extending, and communicating the mathematics in the database. One of the chief tools is a facility for developing formal proofs. In contrast to the formal proofs described in logic textbooks, IMPS proofs are a blend of computation and high-level inference. Consequently, they resemble intelligible informal proofs, but unlike informal proofs, all details of an IMPS proof are machine checked.

B. IMPS Web Site

The welcome page for the IMPS Web site is at http://imps.mcmaster.ca It includes links to:

1. The IMPS system (README, public license, and tar files).
2. The IMPS User's Manual in HTML, PostScript, and PDF formats. It is approximately 300 pages long. Some parts of it refer to IMPS 1.2 and are thus out of date for IMPS 2.0.
3. Technical papers on IMPS in PostScript and PDF formats.
4. The IMPS Mailing List.
5. A hypertext presentation of the IMPS Theory Library. The presentation allows one to explore this body of mathematics by going, for example, from the name of a constant used in a proof to the constant's definition, or from the proof of a theorem to the specification of the theory in which the theorem was proved.

C. How to Install IMPS

1. Choose a directory somewhere in your file system where you would like to put the IMPS system. You will need about 30 MB of space. (More space may be needed for certain versions of Common Lisp.) Let us refer to this directory as /.../dir. Execute (the shell command)

       cd /.../dir

2. Move the file "imps-2.0.tar.gz" to the /.../dir directory. Then execute the following commands:

       gunzip imps-2.0.tar.gz
       tar -xvf imps-2.0.tar

   Each of these operations will take about a minute. After they are done, you may delete the file imps-2.0.tar or recompress it and put it wherever you want.

3. Choose which version of Emacs and Common Lisp you would like to use. We recommend XEmacs and Allegro CL, CLISP, or CMU Common Lisp. Other versions of Emacs and Common Lisp will work, but you may have to make a few modifications to the IMPS system.

4. Edit the file "install", which is found in /.../dir/imps. Towards the top of the file are the four lines:

       EMACS=`which emacs`
       CL=`which clisp`
       LISP=clisp
       GAWK=`which gawk`

   If you leave the file as is, IMPS will be installed with your system Emacs, CLISP, and gawk. (Make sure these three programs are available on your system.) If you would like to use another version of Emacs (e.g., if XEmacs is not your system Emacs), change the first line to EMACS= Warning: XEmacs is the only version of Emacs that we fully support. If you would like to use Allegro CL 4, Allegro CL 5, or CMU Common Lisp instead of CLISP, change the second and third lines, respectively, to CL= LISP=cmu The install script assumes that you have gawk installed on your system. If you would like to use nawk instead of gawk, change the fourth line to GAWK=

5. Finally, execute (the shell command)

       /.../dir/imps/install

   This will cause the compilation of all the IMPS source files and will produce an executable. (You may ignore the many warning messages that are printed.) Depending on the version of Common Lisp you use, this may take from a few minutes for CLISP to about 30 minutes for CMU Common Lisp.

D. How to Start IMPS

To run IMPS, start X Windows and then execute

    /.../dir/imps/bin/start_imps &

This will start up IMPS running in an XEmacs window. The default XEmacs settings for color, fonts, etc. may be changed by editing the file /.../dir/imps/el/imps-emacs.el

E. How to Convert from IMPS 1.2 to IMPS 2.0

The IMPS Main Theory Library is exactly the same for both IMPS 1.2 and IMPS 2.0, so converting from IMPS 1.2 to IMPS 2.0 should not be very difficult. Since different versions of Common Lisp use different hash functions, some proof scripts produced with IMPS 1.2 will break when they are executed with certain versions of Common Lisp. However, these broken proof scripts are usually very easy to repair. All the proof scripts in the IMPS Main Theory Library work correctly when IMPS is run with CLISP.

F. Questions, Comments, and Bug Reports

Questions and comments about IMPS can be mailed to imps-questions@imps.mcmaster.ca Please mail information about bugs or problems with using IMPS to imps-bugs@imps.mcmaster.ca

G. Acknowledgments

IMPS was designed and developed at The MITRE Corporation under the MITRE-Sponsored Research program. Ronald D. Haggarty, former MITRE Vice President of Research and Technology, deserves special thanks for his strong, unwavering support of the IMPS project. Several of the key ideas behind IMPS were originally developed by Dr. Leonard Monk on the Heuristics Research Project, also funded by MITRE-Sponsored Research, during 1984-1987. We would like to thank the Harvard Mathematics Department, and Professor David Mumford (now at Brown) in particular, for providing the original FTP site for IMPS. The core and support machinery of IMPS 1.2 was written in the T programming language, developed at Yale by N. Adams, R. Kelsey, D. Kranz, J. Philbin, and J. Rees. IMPS 2.0 was created by J. Thayer by producing a macro-emulation of the T programming language in Common Lisp which can execute a suitably translated version of the original IMPS source code. The IMPS user interface is written in the GNU Emacs programming language, developed by R. Stallman.

indri-2.7


To support querying an Indri repository within the UIMA framework, we have developed an SIAPI-compliant query processor, suitable for use as a drop-in replacement in Semantic Search applications. This component supports the Indri structured query language. The component includes an SIAPI implementation factory, plus SearchService, Searchable, Query, and Result interface implementations. The IndriSearch application is a modification of the UIMA example SemanticSearch application that uses the Indri Searchable. The GUI version could be modified in a similar fashion. Future work will add an Annotator for query results.

inductorparser-20210501


Inductor Parser =============== The Inductor Parser is a simple-to-use C++ template-based parser. It is small and easy to understand, debug, and extend.

inductorprolog-20210501


The following features are for sure *not* in the Inductor Prolog engine (this is not an exhaustive list):

- asserting or retracting anything besides a fact
- declaring a function as dynamic like `dynamic(myRule/1)`: anything can be changed in IndProlog, so this declaration is not necessary
- `;` (or)
- `->` (if)
- syntax like `a == b` instead of `==(a, b)`
- `"` inside comments. Use `"This is a quote 'inside another quote' "` instead
- any metaprogramming features or rules like `call`

indywiki-0.9.7


We have included Windows executables so that Windows users can run the program without having to install Python, PyQt and Qt. However, because the executable (built with py2exe) bundles Python and the PyQt4 modules as Windows DLLs, it is extremely big (~23 MB). Download the Windows zip, unzip it somewhere, and click on the indywiki icon to launch the program. Alternatively, you can download and install Python, Qt, sip and the PyQt4 modules and run the code instead of the executable. Also, keep in mind that indywiki is not tested extensively on Windows, since development takes place on Linux.

inference-20080216


You have created a new directory, inference. Within this directory, you can compile by using "make Makefile inference". In addition to the .cc and .h files, the directory contains:

1. a short description of the program in description.tex
2. short examples of XML documents (with extensions .xml and .xgf)
3. distinguishing functions fdisti
4. input samples for the regular learning mode in files sample*

instinct-server-20190108


This is a Java command line application encapsulated within an Eclipse project. It provides a TCP/IP based server for communication with the [R5 Robot], and within it the Instinct Planner. The R5 Robot also requires the [Instinct Planner].

inversecooking-20210511


This code uses Python 3.6 and PyTorch 0.4.1 with CUDA 9.0.

isabelle-2020


This is Isabelle2020: April 2020.

isabelle2021-linux-20210417


This is Isabelle2021: February 2021.

itsimple-3.5.10


This file is part of itSIMPLE.

itsimple4.0-beta3


This file is part of itSIMPLE.

ix-20210503


I-X is a systems integration architecture that supports multi-agent cooperation on synthesis tasks such as design, configuration and planning.

Copyright (C) 2000 - 2010, AIAI, The University of Edinburgh

jack-rack-1.4.7


JACK Rack is a LADSPA effects rack for the JACK audio API. It uses GTK+ 2 (and optionally GNOME 2) for the GUI. LADSPA version 1.1 is needed. In order to save rack configurations, libxml2 is needed.

jadex-2.0-rc10


This library is free software; you can redistribute it and/or

jason-1.3.5


Jason is an interpreter for an extended version of AgentSpeak. First release: December 2003. Jason is distributed under LGPL (see file LICENSE).

jason-1.3.6a


Jason is an interpreter for an extended version of AgentSpeak. First release: December 2003. Jason is distributed under LGPL (see file LICENSE).

jason-20180913


Jason is an interpreter for an extended version of AgentSpeak. It implements the operational semantics of that language, and provides a platform for the development of multi-agent systems, with many user-customisable features. Jason is available as Open Source, and is distributed under GNU LGPL.

jason-20190518


Jason is an interpreter for an extended version of AgentSpeak. It implements the operational semantics of that language, and provides a platform for the development of multi-agent systems, with many user-customisable features. Jason is available as Open Source, and is distributed under GNU LGPL.

javapengine-20200108


A Java language client for Torbjörn Lager's _Pengines_ distributed computing library for _[SWI-Prolog](http://swi-prolog.org)_ .

javapengine-20200506


A Java language client for Torbjörn Lager's _Pengines_ distributed computing library for _[SWI-Prolog](http://swi-prolog.org)_ .

jbt-20200202


JBT is a Java framework for building and running behaviour trees. In the past few years, behaviour trees have been widely accepted as a tool for defining the behaviour of video games characters. However, to the best of our knowledge, there is no free-software Java implementation of such concept. With JBT we intend to provide a solid framework to build and run behaviour trees in Java.

jsrealb-20210313


**Natural Language Generation (NLG)** is a field of artificial intelligence that focuses on the development of systems that produce text for different applications, for example the textual description of massive datasets or the automation of routine text creation.

kaggle-jigsaw-multilingual-toxic-comment-classification-3rd-place-solution


WARNING! Do not install pytorch-xla-env-setup.py before starting TF code; there is an incompatibility between using the TPU via TF and via PyTorch in the same instance runtime. The valid sequence of running (including installing packages) is in ./train.py and ./inference.py.

kansas-lava-0.2.4


Kansas Lava is a Haskell library which allows the specification and simulation of hardware, and hardware level concerns. Haskell functions written in Kansas Lava can be interpreted as having the semantics of a specific circuit, or compiled into VHDL, for compilation and synthesis using standard HDL tools.

kbptoolkit-1.5.0


===============================
Major directory list
===============================
src/ .............. source code
bin/ .............. class files
doc/ .............. includes this readme
lib/ .............. required third-party packages
evaluation/ ....... contains some example queries
output/ ........... contains output files for example queries
props/ ............ property files defining major parameters in the toolkit
scripts/ .......... sample scripts for running the IE toolkit
res/ .............. resources used in the toolkit
modules/ .......... contains a name tagger developed by Qi Li
component.env ..... environment variables used by the toolkit
kbptoolkit.sh ..... main script of the toolkit
build.xml ......... configure file needed to build the toolkit using ant

kbptoolkit-cuny-20140515


This toolkit provides KBP2010 participants with a lightweight tool for retrieving relevant documents.

kerkerkruip-20180923


Kerkerkruip is a short-form roguelike in the interactive fiction medium, featuring meaningful tactical and strategic depth, innovative game play, zero grinding, and a sword & sorcery setting that does not rehash tired clichés.

kml-20080701


This step should be performed after the 'make install' step, of course. Just type:

koordinator2000-20200410


For example, you would vote for that tiny progressive political party if you knew your vote would matter. So let's get to work to make it matter. Don't waste your vote until you know there is a mass large enough to make it count.

kparser-20201003


Knowledge Parser or K-Parser or Kparser is a semantic parser that translates any English sentence into a directed acyclic semantic graph. The nodes in the graph represent the actual words in the input text and the conceptual classes of those words. The edges represent the dependency between different nodes and the edge labels in the graph represent the semantic relations between the nodes.
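
As an illustration of the kind of structure described, here is a minimal sketch of a labeled directed graph with an acyclicity check (purely illustrative; the node names, edge labels, and API are hypothetical and do not reflect K-Parser's actual output format):

```python
from collections import defaultdict

class SemanticGraph:
    """Toy directed semantic graph: nodes are word/concept labels,
    edges carry semantic relation labels (illustrative only)."""

    def __init__(self):
        self.edges = defaultdict(list)  # node -> [(relation, node)]

    def add_edge(self, src, relation, dst):
        self.edges[src].append((relation, dst))

    def is_acyclic(self):
        # Depth-first search with colors: 0=unvisited, 1=in progress, 2=done.
        color = defaultdict(int)

        def visit(n):
            if color[n] == 1:
                return False  # back edge found -> cycle
            if color[n] == 2:
                return True
            color[n] = 1
            ok = all(visit(d) for _, d in self.edges.get(n, ()))
            color[n] = 2
            return ok

        return all(visit(n) for n in list(self.edges))

g = SemanticGraph()
g.add_edge("gave", "agent", "John")
g.add_edge("gave", "recipient", "Mary")
g.add_edge("John", "instance_of", "person")
print(g.is_acyclic())  # True: this toy graph is a DAG
```

The check matters because K-Parser guarantees its output graphs are acyclic; a cycle (e.g. adding an edge back from "person" to "John") would violate that property.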

langpro-20210406


# [LangPro](https://github.com/kovvalsky/LangPro): Natural [Lang](https://github.com/kovvalsky/LangPro)uage Theorem [Pro](https://github.com/kovvalsky/LangPro)ver

LangPro is a tableau-based theorem prover for natural logic and language. See the [online demo](https://naturallogic.pro/LangPro/) (not the latest version).

lbtt-1.2.1


lbtt is a tool for testing programs that translate formulas expressed in propositional linear temporal logic (LTL) into Büchi automata. The goal of the tool is to assist implementing LTL-to-Büchi translation algorithms correctly by providing an automated testing environment for LTL-to-Büchi translators. Additionally, the testing environment can be used for very basic profiling of different LTL-to-Büchi translators to evaluate their performance.

ld41-20190419


This is our entry for Ludum Dare 41, a silly text based minesweeper game.

leafnats-20200419


This playground is a PyTorch implementation of a learning framework for implementing different models for neural abstractive text summarization and beyond. It is an extension of the [NATS](https://github.com/tshi04/NATS) toolkit for Neural Abstractive Text Summarization. The goal of this framework is to make it convenient to try out new ideas in abstractive text summarization and other language generation tasks.

lean-mode-20191117


This is the Emacs mode for the [Lean theorem prover][lean].

legoeval-20210506


![](https://github.com/yooli23/LEGOEval/blob/master/banner.png)

# LEGOEval

LEGOEval is a toolkit for dialogue system evaluation via crowdsourcing; see our [demo video](https://www.youtube.com/watch?v=Dg6mafRGOpg&ab_channel=JoshArnold).

lemur-4.7


To support querying an Indri repository within the UIMA framework, we have developed an SIAPI-compliant query processor, suitable for use as a drop-in replacement in Semantic Search applications. This component supports the Indri structured query language. The component includes an SIAPI implementation factory, plus SearchService, Searchable, Query, and Result interface implementations. The IndriSearch application is a modification of the UIMA example SemanticSearch application that uses the Indri Searchable. The GUI version could be modified in a similar fashion. Future work will add an Annotator for query results.

leo-iii-1.0


In the Leo-III project, we design and implement a state-of-the-art Higher-Order Logic Theorem Prover, the successor of the well-known LEO-II prover [[2](http://dx.doi.org/10.1007/978-3-540-71070-7_14)]. Leo-III will be based on ordered paramodulation/superposition. In contrast to LEO-II, we replace the internal term representation (the commonly used simply typed lambda-calculus) by a more expressive system supporting type polymorphism. In order to achieve a substantial performance speed-up, the architecture of Leo-III will be based on massive parallelism (e.g. And/Or-Parallelism, Multisearch) [[3](http://dx.doi.org/10.1023/A:1018932114059)]. The current design is a multi-agent blackboard architecture that will allow agents running our proof calculus, as well as agents for external (specialized) provers, to run independently. Leo-III will focus right from the start on compatibility with the widely used TPTP infrastructure [[8](http://dx.doi.org/10.1007/s10817-009-9143-8)]. Moreover, it will offer built-in support for specialized external prover agents and provide external interfaces to interactive provers such as Isabelle/HOL [[5](http://dx.doi.org/10.1007/3-540-45949-9)]. The implementation will make extensive use of term sharing [[6](http://dl.acm.org/citation.cfm?id=1218621), [7](http://dl.acm.org/citation.cfm?id=1218620)] and several indexing techniques [[4](dx.doi.org/10.1007/3-540-45744-5_19), [9](dx.doi.org/10.1007/978-3-540-71070-7_14)]. Leo-III will also offer special support for reasoning in various quantified non-classical logics by exploiting a semantic embedding approach [[1](dx.doi.org/10.5220/0004324803460351)].
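
Term sharing of the kind mentioned above is commonly realized by hash consing: structurally equal terms are interned so that they occupy a single object and equality becomes a pointer comparison. A minimal Python sketch of the idea (illustrative only; this is not Leo-III's actual implementation):

```python
class Term:
    """Immutable applicative term, hash-consed: building a term that is
    structurally equal to an existing one returns the same object."""
    _pool = {}
    __slots__ = ("head", "args")

    def __new__(cls, head, args=()):
        # Subterms are interned first, so identity of the args tuple
        # suffices as a structural key.
        key = (head, tuple(args))
        cached = cls._pool.get(key)
        if cached is not None:
            return cached
        t = super().__new__(cls)
        t.head, t.args = head, tuple(args)
        cls._pool[key] = t
        return t

f_xy1 = Term("f", [Term("x"), Term("y")])
f_xy2 = Term("f", [Term("x"), Term("y")])
print(f_xy1 is f_xy2)  # True: structurally equal terms are shared
```

The payoff in a prover is that common subterms are stored once and syntactic equality checks, which indexing relies on heavily, become O(1).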

libreoffice-impress-templates-20191029


For example, the `libreoffice-templates` package (description: "Additional set of templates for LibreOffice") available in Ubuntu contains only the 8 default templates that come with LibreOffice itself. Installing this package thus has no effect on the templates available to the user in Impress, and no other template packages appear to be available.

lightside-20190602


The LightSide Researcher's Workbench is an open-source text-mining tool released under the GNU General Public License.

lillybot-0.1


Lillybot is an OpenCyc-based irc chatbot. It implements a very simple reasoning engine that works with the OpenCyc ontology, and hooks it to a natural-language parser. It can answer simple english questions with small simple-english replies.

limboole-0.2


This is a simple boolean calculator. It reads a boolean formula and checks whether it is valid. If '-s' is specified, satisfiability is checked instead of validity.
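
The two modes can be illustrated with a brute-force truth-table sketch (an illustrative Python analogue of the concept, not limboole's actual implementation, which uses a SAT solver rather than enumeration):

```python
from itertools import product

def evaluate(formula, env):
    # The formula is given as a Python boolean expression over named variables.
    return bool(eval(formula, {}, env))

def check(formula, variables, satisfiability=False):
    """Validity: true under every assignment (limboole's default mode).
    Satisfiability: true under at least one assignment ('-s'-style mode)."""
    results = (evaluate(formula, dict(zip(variables, bits)))
               for bits in product([False, True], repeat=len(variables)))
    return any(results) if satisfiability else all(results)

print(check("a or not a", ["a"]))                        # True: a tautology is valid
print(check("a and not a", ["a"], satisfiability=True))  # False: a contradiction is unsatisfiable
```

Note the duality the two modes exploit: a formula is valid exactly when its negation is unsatisfiable, which is why a single SAT engine can serve both checks.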

limboole1.1-20181124


This is a simple boolean calculator. It reads a boolean formula and checks whether it is valid (a tautology). If '-s' is specified, satisfiability is checked instead of validity.

linguist-20180630


An AI running on [NuPIC](https://github.com/numenta/nupic) using the CLA to build a model of language, and predict the rest of a user's word, phrase, or sentence.

link-grammar-4.3.4


This directory contains a *patched* version of the final original release of the Link Grammar Parser. It has been patched to fix a few bugs, add a few enhancements, and, in general, make the Link Grammar Parser easier to use. This version includes Java bindings.

linkipedia-20160505


Data/ : this directory contains all required system files/data
Linkipedia.jar : the runnable jar file
Linkipedia_lib : all external jar libraries (please be aware of their license issues)
bio_index/ : the index of more than 300 ontologies from BioPortal
index/ : DBpedia 3.9 index
src.tgz : source code (you may not need this to run the service)

llama-20210418


LLAMA is a graph storage and analysis system that supports mutability and out-of-memory execution built on top of the compressed sparse row (CSR) representation. Its goal is to perform comparably to immutable main-memory analysis systems for graphs that fit in memory and to match or outperform existing out-of-memory analysis systems for graphs that exceed main memory.
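
The CSR layout LLAMA builds on can be sketched in a few lines (a plain CSR builder for illustration; LLAMA itself layers mutable, multiversioned structures on top of this representation):

```python
def to_csr(n, edge_list):
    """Build a compressed sparse row adjacency for n vertices:
    offsets[v] .. offsets[v+1] index vertex v's neighbours in the
    flat targets array."""
    counts = [0] * n
    for src, _ in edge_list:
        counts[src] += 1
    # Prefix sums give each vertex's slice into the targets array.
    offsets = [0] * (n + 1)
    for v in range(n):
        offsets[v + 1] = offsets[v] + counts[v]
    targets = [0] * len(edge_list)
    cursor = offsets[:-1]  # next free slot per row (a copy, offsets is untouched)
    for src, dst in edge_list:
        targets[cursor[src]] = dst
        cursor[src] += 1
    return offsets, targets

offsets, targets = to_csr(3, [(0, 1), (0, 2), (2, 0)])
print(offsets)  # [0, 2, 2, 3]
print(targets)  # [1, 2, 0]
```

Two flat arrays replace per-vertex adjacency lists, which is what makes CSR compact and cache-friendly for whole-graph scans; the trade-off is that plain CSR is immutable, which is exactly the limitation LLAMA addresses.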

logicmoo-nlu-20200127


This NLU/NLG ToolKit combines the following projects into a usable pipeline

logseq-20210129


[![latest release version](https://img.shields.io/github/v/release/logseq/logseq)](https://github.com/logseq/logseq/releases) [![License](https://img.shields.io/github/license/logseq/logseq?color=blue)](https://github.com/logseq/logseq/blob/master/LICENSE.md) [![Twitter follow](https://img.shields.io/badge/follow-%40logseq-blue.svg?style=flat&logo=twitter)](https://twitter.com/logseq) [![discord](https://img.shields.io/discord/725182569297215569?label=discord&logo=Discord&color=blue)](https://discord.gg/KpN4eHY) [![total](https://opencollective.com/logseq/tiers/badge.svg?color=blue)](https://opencollective.com/logseq)

logtalk-3.09.0


This file is part of Logtalk. Copyright 1998-2016 Paulo Moura

lparse-1.1.2


Lparse is a front end for the smodels system that takes a domain-restricted logic program as its input and produces a ground logic program as its output. This program is distributed under the GNU Public Licence; see file COPYING for details.

lpg-1.2


This code is provided for research and experimental purposes only

lps-corner-20200419


"Logic-based Production System" is a new computer language that combines the characteristics of an imperative programming language with those of a declarative database and knowledge representation language. It is the result of over a decade of research led by [Bob Kowalski](https://www.doc.ic.ac.uk/~rak/) and [Fariba Sadri](https://www.doc.ic.ac.uk/~fs/) at [Imperial College London](http://lps.doc.ic.ac.uk).

lps-corner-20200929


[Logical Contracts Server](http://logicalcontracts.com/server/), maintained elsewhere, is a proprietary extension to lps.swi.

lsdsem2017-story-cloze-20180520


This repository contains the code needed to reproduce the results reported in Bugert et al., *LSDSem 2017: Exploring Data Generation Methods for the Story Cloze Test*.

lt4el-1.m30


The base of LPC is a configuration file (lpc/lpc.xml) that specifies which tools may be used, which input format is required and what kind of output format is produced. We can describe the whole system as a general directed graph, where every vertex is assigned one file format (we use MIME types for format description) and a set of edges represent various conversion tools. In fact we use a hypergraph: we parametrize every edge by another two labels: language and cost, so that several edges (tools) can be used for conversion from one format into another.
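
A conversion graph like this lends itself to a shortest-path search over formats: a sketch with Dijkstra's algorithm (the tool names, MIME types, and costs below are hypothetical, and the language label is ignored for brevity):

```python
import heapq

def cheapest_pipeline(tools, source, target):
    """Dijkstra over the conversion graph: vertices are MIME types,
    each edge is one conversion tool with a cost, mirroring the
    hypergraph described for lpc.xml (minus the language label)."""
    # tools: list of (from_format, to_format, tool_name, cost)
    graph = {}
    for src, dst, name, cost in tools:
        graph.setdefault(src, []).append((dst, name, cost))
    heap = [(0, source, [])]  # (total cost, current format, tools applied)
    seen = set()
    while heap:
        cost, fmt, path = heapq.heappop(heap)
        if fmt == target:
            return cost, path
        if fmt in seen:
            continue
        seen.add(fmt)
        for dst, name, step_cost in graph.get(fmt, []):
            heapq.heappush(heap, (cost + step_cost, dst, path + [name]))
    return None  # no conversion chain exists

tools = [("application/pdf", "text/plain", "pdftotext", 2),
         ("text/plain", "text/xml", "plain2xml", 1),
         ("application/pdf", "text/xml", "pdf2xml", 5)]
print(cheapest_pipeline(tools, "application/pdf", "text/xml"))
# (3, ['pdftotext', 'plain2xml']): the two-step chain beats the direct tool
```

Adding the language label back would simply mean keeping one such graph per language, or filtering the edge list before the search.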

ltl2dstar-0.5.4


The src/boost directory contains header files from the Boost c++ libraries (v.1.57.0): http://www.boost.org/

ltsmin-2.1


It is a good idea to check the output of ./configure, to see whether all dependencies were found.

lua-signal-20180804


This is a signal library for Lua 5.1. It depends on ANSI C signals and has some extensions that are available in POSIX, such as kill().

lucene-7.5.0


Lucene is a Java full-text search engine. Lucene is not a complete application, but rather a code library and API that can easily be used to add search capabilities to applications.

lucida-20171127


Lucida is a speech and vision based intelligent personal assistant inspired by [Sirius](http://sirius.clarity-lab.org). Visit [our website](http://lucida.ai) for a tutorial, and [Lucida-users](http://groups.google.com/forum/#!forum/lucida-users) for help. The project is released under a [BSD license](LICENSE), except that certain submodules contain their own specific licensing information. We would love to have your help on improving Lucida; see [CONTRIBUTING](CONTRIBUTING.md) for more details.

ludii-20210529


Ludii is a general game system being developed as part of the [ERC-funded Digital Ludeme Project (DLP)](http://ludeme.eu/). This repository hosts the publicly available source code for Ludii. A precompiled build (Ludii.JAR) can be downloaded from [Ludii's downloads page](https://ludii.games/download.php).

ludiiai-20210529


This repository is now deprecated; all AI source code for Ludii is included in the main open-source Ludii repo at https://github.com/Ludeme/Ludii.

ludiiaicompetition-20210529


This repository, as well as the [Ludii Example AI repository](https://github.com/Ludeme/LudiiExampleAI), are written for the latest public pre-release of Ludii available at the time of this writing: **Ludii 0.9.3**. **This is the version of Ludii that we will use for the AI competition at CoG 2020**. We do plan to release newer versions of Ludii in between, but the API may not remain 100% the same. Therefore **we now fix the version that will be used for the competition at CoG 2020 at 0.9.3**.

magic-1.0


If you are reading this file, then you have most probably obtained and installed a distribution of MAGIC. In the rest of this document we will assume that the root of the MAGIC distribution is a directory called MAGICDIR. For example, suppose you obtained MAGIC from the CVS repository by typing the following commands:

mallet-2.0.8


MALLET is a Java-based package for statistical natural language processing, document classification, clustering, topic modeling, information extraction, and other machine learning applications to text.

marelle-20210505


This will install marelle for all users, putting the executable in `/usr/local/bin/marelle`.

margo-1.1


This README is very sparse; if it is insufficient, look in the doc directory for more details.

margo-20120715


This README is very sparse; if it is insufficient, look in the doc directory for more details.

master-thesis-20210513


This is my master's thesis with presentation slides.

mat2vec-20190706


1. Make sure you have `python3.6` and the `pip` module installed. We recommend using [conda environments](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html).
2. Navigate to the root folder of this repository (the same folder that contains this README file) and run `pip install -r requirements.txt`. Note: If you are using a conda env and any packages fail to compile during this step, you may need to first install those packages separately with `conda install package_name`.
3. Wait for all the requirements to be downloaded and installed.
4. Run `python setup.py install` to install this module. This will also download the Word2vec model files. If the download fails, manually download the [model](https://storage.googleapis.com/mat2vec/pretrained_embeddings), [word embeddings](https://storage.googleapis.com/mat2vec/pretrained_embeddings.wv.vectors.npy) and [output embeddings](https://storage.googleapis.com/mat2vec/pretrained_embeddings.trainables.syn1neg.npy) and put them in mat2vec/training/models.
5. Finalize your chemdataextractor installation by executing ``cde data download`` (you may need to restart your virtual environment for the cde command line interface to be found).
6. You are ready to go!

mathlib-20191203


[Mathlib](https://leanprover-community.github.io) is a user-maintained library for the [Lean theorem prover](https://leanprover.github.io). It contains both programming infrastructure and mathematics, as well as tactics that use the former and allow developing the latter.

maxtract-20140201


A command line tool that reads a PDF and returns different formats. The tool is written in OCaml and uses pdftk for decompressing the PDF file.

mc-aixi-20170705


This software package consists of a simple implementation of MC-AIXI-CTW, an intelligent agent that learns from experience how to perform well in a wide variety of environments. This includes, but is not limited to, the example games provided in this package, such as Tic Tac Toe, Pacman, and Kuhn Poker.

mcapl-20190326


This software distribution consists of:

megatron-lm-20210305


[Megatron](https://arxiv.org/pdf/1909.08053.pdf) is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This repository is for ongoing research on training large transformer language models at scale. We developed efficient, model-parallel (tensor and pipeline), and multi-node pre-training of [GPT](https://arxiv.org/abs/2005.14165) and [BERT](https://arxiv.org/pdf/1810.04805.pdf) using mixed precision.

mepk-20190304


# Multi-agent Epistemic Planner Kit

This is a planner for multi-agent epistemic planning. The code is continuously updated; we are planning to release a brand new version of MEPK, and more details about it will be presented then. You are welcome to follow this work.

mepk-20201010


# Multi-agent Epistemic Planner with Knowledge

This is a planner for multi-agent epistemic planning. The code is continuously updated; we are planning to release a brand new version of MEPK, and more details about it will be presented then. You are welcome to follow this work.

meta-aqua-20191116


This version of Meta-AQUA is also used for running MIDCA and a

meta-dataset-20200725


This repository contains accompanying code for the article introducing Meta-Dataset, [arxiv.org/abs/1903.03096](https://arxiv.org/abs/1903.03096).

metatem-0.2.2


This software is an implementation of the agent programming language MetateM [Fisher et al.] in which agents are specified using a declarative language of temporal logic rules and meta-statements. Multiple agent specifications are interpreted asynchronously and agents are able to communicate by message passing.

meteor-0.6


METEOR is a system that automatically evaluates the output of machine translation systems.

mibanda-20170510


This is a pure Python library to access the Xiaomi Mi Band. It uses

microrts-20210529


microRTS is a small implementation of an RTS game, designed to perform AI research. The advantage of using microRTS with respect to using a full-fledged game like Wargus or StarCraft (using BWAPI) is that microRTS is much simpler, and can be used to quickly test theoretical ideas, before moving on to full-fledged RTS games.

mincutseg-20071226


This package contains the source code and binaries for the Minimum Cut text segmentation system.

mindigolog-2.0.9


This is a MIndiGolog interpreter implemented using Mozart/Oz. It was developed as part of Ryan Kelly's PhD thesis "Asynchronous Multi-Agent Reasoning in the Situation Calculus". Further details are available at:

mindigolog2-0.9.9


This is a MIndiGolog interpreter implemented using Mozart/Oz. It was developed as part of Ryan Kelly's PhD thesis "Asynchronous Multi-Agent Reasoning in the Situation Calculus". Further details are available at:

mindraider-0.512


This program is released under GPL license and comes with no warranty.

minipar-0.5


A royalty-free license is granted for the use of this software for NON-COMMERCIAL PURPOSES ONLY.

mistral-1.1


This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

mizar-8.1.0-8.5.50


This version of the Mizar system has been precompiled with Free Pascal Compiler for i386 compatible machines running the Linux operating system. The package contains the Mizar processor, the Mizar database, a set of utility programs, and GNU Emacs Lisp mode for convenient work with the system. All Mizar articles constituting the Mizar Mathematical Library (MML) and their abstracts are also included in this release.

mizar-8.1.0.3.5.23.1213


This version of the Mizar system has been precompiled with Free Pascal Compiler (ver. 2.4.2) for i386 compatible machines running the Linux operating system. The package contains the Mizar processor, the Mizar database, a set of utility programs, and GNU Emacs Lisp mode for convenient work with the system. All Mizar articles constituting the Mizar Mathematical Library (MML) and their abstracts are also included in this release.

mizar-8.1.05_5.37.1275


This version of the Mizar system has been precompiled with Free Pascal Compiler (ver. 2.4.2) for i386 compatible machines running the Linux operating system. The package contains the Mizar processor, the Mizar database, a set of utility programs, and GNU Emacs Lisp mode for convenient work with the system. All Mizar articles constituting the Mizar Mathematical Library (MML) and their abstracts are also included in this release.

mlj19-iggp-20210522


This repository consists of the code used to run the experiment and three zip files:

mojo-discord-20210125


This is a set of Perl modules designed to implement parts of the Discord public API, built on Mojo::UserAgent and Mojo::IOLoop.

mojo-pg-20200425


A tiny wrapper around [DBD::Pg](https://metacpan.org/pod/DBD::Pg) that makes [PostgreSQL](https://www.postgresql.org) a lot of fun to use with the [Mojolicious](https://mojolicious.org) real-time web framework.

morbig-20191117


Morbig is a parser for shell scripts written in the POSIX shell script language. It parses the scripts statically, that is without executing them, and constructs a concrete syntax tree for each of them. The concrete syntax trees are built using constructors according to the shell grammar of the POSIX standard.

mpich-1.2.6


MPICH is an open-source, portable implementation of the Message-Passing Interface Standard. It contains a complete implementation of version 1.2 of the MPI Standard and also significant parts of MPI-2, particularly in the area of parallel I/O.

mppp-0.9


mp++ is a C++11 library for multiprecision arithmetic, currently supporting arbitrary-precision integers, rationals and floats, and quadruple-precision floats.

mppp-20200120


mp++ is a C++11 library for multiprecision arithmetic, currently supporting arbitrary-precision integers, rationals and floats, and quadruple-precision floats.

mprolog-2.0


This version has been tested using SICStus Prolog 4.0.2 and SWI-Prolog

mrsbs-1.1


MRSBS is a system for coordinating the scheduling of meetings.

msrte-20080220


This archive contains encoded versions of the logical-form structures

mulval-20190510


MulVAL is a cybersecurity reasoning engine that can be applied on top of multiple contexts (cloud, IoT, enterprise networks, etc.).

murphi3.1-20181124


Murphi is an explicit-state protocol verifier that consists of:

* the Murphi Compiler, which translates the Murphi source file describing a protocol into C++, and
* the Murphi Verifier, a collection of C++ include files that contains the core state enumeration algorithms.
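The heart of an explicit-state verifier like this is a reachability sweep over all states. A minimal sketch of the idea (in Python rather than Murphi's generated C++; the state, successor, and invariant representations are illustrative assumptions, not Murphi's API):

```python
from collections import deque

def check_invariant(initial_states, successors, invariant):
    """Enumerate all reachable states breadth-first; return a violating
    state, or None if the invariant holds everywhere reachable."""
    seen = set(initial_states)
    queue = deque(initial_states)
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state          # counterexample found
        for nxt in successors(state):
            if nxt not in seen:   # hash set avoids revisiting states
                seen.add(nxt)
                queue.append(nxt)
    return None
```

For example, a counter that wraps modulo 8 satisfies `n < 8` everywhere, while `n != 5` is violated at the reachable state 5.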

mustru-0.2


This is the first release of Mustru (Version 0.1). It is a

muzero-general-20201224


A commented and [documented](https://github.com/werner-duvaud/muzero-general/wiki/MuZero-Documentation) implementation of MuZero based on the Google DeepMind [paper](https://arxiv.org/abs/1911.08265) (Nov 2019) and the associated [pseudocode](https://arxiv.org/src/1911.08265v2/anc/pseudocode.py). It is designed to be easily adaptable to any game or reinforcement learning environment (like [gym](https://github.com/openai/gym)). You only need to add a [game file](https://github.com/werner-duvaud/muzero-general/tree/master/games) with the hyperparameters and the game class. Please refer to the [documentation](https://github.com/werner-duvaud/muzero-general/wiki/MuZero-Documentation) and the [example](https://github.com/werner-duvaud/muzero-general/blob/master/games/cartpole.py).

mysql-connector-odbc-5.3.9


This is a release of MySQL Connector/ODBC (formerly MyODBC), Oracle's dual-license ODBC Driver for MySQL. For the avoidance of doubt, this particular copy of the software is released under the version 2 of the GNU General Public License. MySQL Connector/ODBC is brought to you by Oracle.

namas-20200419


This project contains the Abs. neural abstractive summarization system from the paper

narchy-20190131


**Tasks** can arrive at any time. There are no restrictions on their content as long as they can be expressed in __Narsese__ (the I/O language of NARS).

- By default, NARS makes *no assumptions* about the meaning or truth value of input beliefs and goals.
- How to choose proper inputs and interpret possible outputs for each application is an *open problem* to be solved by its users. :warning:

neural-drs-20210315


This folder contains scripts to use our neural seq2seq model to produce DRSs. It contains code to reproduce either our [TACL paper](https://www.aclweb.org/anthology/Q18-1043.pdf), our [IWCS paper](https://www.aclweb.org/anthology/W19-0504/) or our [EMNLP paper](https://www.aclweb.org/anthology/2020.emnlp-main.371.pdf). The models rely on [OpenNMT](http://opennmt.net/), [Marian](https://marian-nmt.github.io/) and [AllenNLP](https://allennlp.org/), respectively.

neural-style-20190610


An implementation of [neural style][paper] in TensorFlow.

neuraltalk2-master-20160416


This is an early code release that works great but is slightly hastily released and probably requires some code reading of inline comments (which I tried to be quite good with in general). I will be improving it over time but wanted to push the code out there because I promised it to too many people.

newspaper-20200729


"Newspaper is an amazing python library for extracting & curating articles." -- `tweeted by`_ Kenneth Reitz, Author of `requests`_

ngpaws-20200216


ngPAWS (pronounced n-g-paws) is an authoring system based on the Professional Adventure Writing System, thus the name ngPAWS stands for "next generation PAWS".

nl2bash-20210524


This repository contains the data and source code release of the paper: [NL2Bash: A Corpus and Semantic Parser for Natural Language Interface to the Linux Operating System](http://victorialin.net/pubs/nl2bash.pdf).

nlp-lotr-20210529


A lot of these names were places, and many were of little importance or were not proper nouns at all, so only the first 39 names and 27 places were kept, in `names-edited.txt` and `places-edited.txt`.

nlprolog-20200801


This is an implementation of [NLProlog](todo), a method for approaching Question Answering tasks with Prolog-like reasoning over natural language statements.

nomicmu-20200218


NomicMU is an online system for multiplayer interactive fiction where every player may change how the game is played.

notably-20191116


The initial code is based on Yakuake which is a drop down terminal emulator based on KDE Konsole technology.

nous-20190618


# NOUS: Construction, Querying and Reasoning in Dynamic Knowledge Graphs

Automated construction of knowledge graphs (KG) remains an expensive technical challenge that is beyond the reach of most enterprises and academic institutions. NOUS is an end-to-end framework for developing custom knowledge-graph-driven analytics for arbitrary application domains. The uniqueness of our system lies in A) its combination of curated KGs with knowledge extracted from unstructured text, B) support for advanced trending and explanatory questions on a dynamic KG, and C) the ability to answer queries where the answer is embedded across multiple data sources.

nous-kg-20190618


# NOUS: Construction and Querying of Dynamic Knowledge Graphs

Automated construction of knowledge graphs remains an expensive technical challenge that is beyond the reach of most enterprises and academic institutions. NOUS is an end-to-end framework for developing custom knowledge-graph-driven analytics for arbitrary application domains. The uniqueness of our system lies in A) its combination of curated KGs with knowledge extracted from unstructured text, B) support for advanced trending and explanatory questions on a dynamic KG, and C) the ability to answer queries where the answer is embedded across multiple data sources.

npbehave-20200527


If you don't know anything about behavior trees, it's highly recommended that you gain some theory first, [this Gamasutra article](http://www.gamasutra.com/blogs/ChrisSimpson/20140717/221339/Behavior_trees_for_AI_How_they_work.php) is a good read.

nqthm-2


This is the `README' file for the 1998 distribution of Nqthm-1992, the Boyer-Moore prover. This distribution of Nqthm corresponds to the second edition of the book `A Computational Logic Handbook', Boyer and Moore, Academic Press, 1998, ISBN 0-12-122955-6. That book provides a comprehensive user's manual for this distribution, including installation instructions, a definition of the logic for which Nqthm-1992 is a theorem prover, documentation of all the user commands, and short introductions to the hundreds of sample input files, which cover many areas of computing and mathematics.

nupic-20180630


The Numenta Platform for Intelligent Computing (**NuPIC**) is a machine intelligence platform that implements the [HTM learning algorithms](http://numenta.com/learn/hierarchical-temporal-memory-white-paper.html). HTM is a detailed computational theory of the neocortex. At the core of HTM are time-based continuous learning algorithms that store and recall spatial and temporal patterns. NuPIC is suited to a variety of problems, particularly anomaly detection and prediction of streaming data sources.

oaa-2.3.2


A (possibly) more up-to-date or more complete version of this file may be found here: http://www.ai.sri.com/oaa/distribution/v2.3/2.3.2/documentation.html#release

ocropus-0.1.1


OCRopus is a state-of-the-art document analysis and OCR system, featuring:

* pluggable layout analysis,
* pluggable character recognition,
* statistical natural language modeling, and
* multi-lingual capabilities.

OCRopus development is sponsored by Google and is initially intended for high-throughput, high-volume document conversion efforts. We expect that it will also be an excellent OCR system for many other applications.

odo-0.20


This is a pure Perl semantic web library that implements an RDF parser, RDQL, SPARQL query engine, persistent RDF datastore and an ontology framework for OWL and RDFS.

omega-0.9.5


Our current effort is the OMEGA successor ontology by Eduard Hovy. DINOmega is the name of the browser with which you can explore the OMEGA ontology. OMEGA is a 120. DINOmega is a reimplementation of ONTOSAURUS, which was built at ISI by Ramesh Patil and Tom Russ. Each node in OMEGA represents one concept (many words in English have many senses: "shoe" is the thing you wear on your foot). The concepts are linked in a straightforward IS-A hierarchy. The top of the ontology is OB-THING.

one-20071221


The current code contains some natural language utilities, most interestingly a hyperresolution engine for FOL without equality

ontologymapping-cmsv-1.1


CMS source bundle contains all the Java source files in the directory structure and a build.xml file. If

open-sesame-20190516


A frame-semantic parser for automatically detecting [FrameNet](https://framenet.icsi.berkeley.edu/fndrupal/) frames and their frame-elements from sentences. The model is based on softmax-margin segmental recurrent neural nets, described in our paper [Frame-Semantic Parsing with Softmax-Margin Segmental RNNs and a Syntactic Scaffold](https://arxiv.org/abs/1706.09528). An example of a frame-semantic parse is shown below

open-sesame-20191016


A frame-semantic parser for automatically detecting [FrameNet](https://framenet.icsi.berkeley.edu/fndrupal/) frames and their frame-elements from sentences. The model is based on softmax-margin segmental recurrent neural nets, described in our paper [Frame-Semantic Parsing with Softmax-Margin Segmental RNNs and a Syntactic Scaffold](https://arxiv.org/abs/1706.09528). An example of a frame-semantic parse is shown below

open-type-20180728


This repository contains code for the following paper:

openccg-20190905


OpenCCG is a system for parsing and generating text using [combinatory categorial grammar](https://en.wikipedia.org/wiki/Combinatory_categorial_grammar) for syntax and [hybrid logic dependency semantics](https://www.aclweb.org/anthology/P02-1041) for, well, the semantic representation.

opencv-0.9.5


This library is mainly aimed at real time computer vision.

opencyc-2.0


Installation instructions for OpenCyc release 2.0.

opencyc-4.0


Installation instructions for OpenCyc release 4.0.

opendpi-20120715


OpenDPI is a software component for traffic classification based on deep packet inspection.

openephyra-20170320


This repository contains a resurrected and repaired version of OpenEphyra. It was branched from the latest version of OpenEphyra on SourceForge, as of March 2014, for use in the OpenCog artificial intelligence system (Copyright (C) 2014 [OpenCog Foundation](http://www.opencog.org/)).

opennlp-0.9.0


In its previous life it was used to hold a common infrastructure code for the opennlp.grok project. The work previously done can be found in the final release of that project available on the main project page.

openprs-20191208


This README is somewhat outdated. Please see this page for more up-to-date information, in particular with respect to installation, which is now quite easy using robotpkg.

openrouteservice-py-20181001


This command will install the library into your global environment. It also works in virtual environments.

opensubtitlesdownload-20170130


**OpenSubtitlesDownload.py** is a small Linux program written in Python, built to help you **quickly find and download subtitles for your favorite videos**. It can be used as a Nautilus script, or as a regular application working under GNOME or KDE desktop environments. You can also use it in full CLI mode (Command Line Interface) on your NAS, Raspberry Pi or wherever you want to bundle it really!

opentimelineio-20201211


OpenTimelineIO is an interchange format and API for editorial cut information. OTIO is not a container format for media, rather it contains information about the order and length of cuts and references to external media.

openwifimap-api-20180923


OpenWiFiMap is a database and map for free network WiFi routers (freifunk and others, too!).

opinionfinder-1.4


This file explains the polarity classifier and its MPQA and SGML output file formats.

opinionfinder-1.5


This file explains the subjective sentence classifiers and their MPQA and SGML output file formats.

opinionfinder-2.0


OpinionFinder is a system that processes documents and automatically identifies subjective sentences and sentiment expressions. It outputs files using inline SGML markup. The "Background" section gives a brief description of subjectivity and sentiment expressions.

optaplanner-distribution-8.7.0


To see the reference_manual, just open: reference_manual/html_single/index.html It contains information on how to use it in your project (with Maven, Gradle, ...).

optic-20170706


This package contains OPTIC, a planner for use in problems where plan cost is determined by preferences or time-dependent goal-collection costs. For more details, see the paper "Temporal Planning with Preferences and Time-Dependent Continuous Costs", J. Benton, A. J. Coles, and A. I. Coles, ICAPS 2012.

optic-clp-20170706


This package contains OPTIC, a planner for use in problems where plan cost is determined by preferences or time-dependent goal-collection costs. For more details, see the paper "Temporal Planning with Preferences and Time-Dependent Continuous Costs", J. Benton, A. J. Coles, and A. I. Coles, ICAPS 2012.

optical-illusion-dataset-20210529


A greatly reduced dataset of only images that have eye-bending patterns is here (**569** images, hand picked):

org-brain-20190731


You can think of =org-brain= as a combination of a wiki and a mind map, where each wiki page / mind map node is an =org-mode= file which resides in your =org-brain-path=, or a headline with an ID property in one of those files. These are called /entries/. Entries can be linked together, and you can then view the network of links as a mind map, using =M-x org-brain-visualize=. Here's [[https://www.youtube.com/watch?v=3EGOwfWok5s&t=][a video introducing =org-brain=]].

ossert-20200725


#### Pulse, for last year/quarter/month (amount + delta from total)

- Open and Closed Issues
- Open and Merged PRs
- Releases Count
- Downloads divergence
- Downloads degradation per release (will come later)
- Stale Branches Count

otter-3.3


This will try to determine what kind of computer you are using and copy the appropriate binaries to the bin/ subdirectory. It will then run a few simple tests to see if the binaries are okay for your machine.

owlconverter-20120510


This is a perl script that will convert a DAML+OIL file to an OWL file. To use it, you must have perl and the CGI lib installed on your system. If you do not have CGI Lib on your system, instructions are provided below that will allow you to modify the script for your use.

oyster-3.1


Thank you for installing Oyster! This README file contains important information that you should read before using Oyster 3.1.

pandoc-20200205


Pandoc is a [Haskell] library for converting from one markup format to another, and a command-line tool that uses this library. It can convert *from*

parma-20150831


Note: There is a newer version of this codebase [here](https://github.com/hltcoe/parma2), and this should be considered deprecated.

parscit-110505


This software is copyrighted 2008, 2009 by Min-Yen Kan, Isaac G. Councill, C. Lee Giles and Minh-Thang Luong. This program and all its source code are distributed under the terms of the GNU General Public License (or the Lesser GPL).

parser-combinators-20210126


Makefile: a file for the make program: make install ... installation; make doc ... generates the documentation; make release ... performs a complete clean and regenerates nearly all generated parts of the work. - EOF -

pcapdiff-0.1


This is the README file for pcapdiff 0.1, written November 2007.

pccoder-20190510


1. max_program_len dictates the maximum depth of the search.
2. The result file has a JSON dictionary line for each program predicted. The dictionary contains the predicted program and some details about the search, like the amount of time the search took and the final beam size.
3. Use --search_method to change the method from the default CAB search to DFS.

pcwin-15.5


This is an experimental version of Poplog 15.5 for use with Microsoft Windows 95 and Windows NT 3.51/4.0.

pddl-prolog-parser-20160714


This is a collection of scripts that rewrites PDDL 3.0 files into Prolog-friendly syntax.

pellet-1.5.1


Pellet is an open-source Java based OWL-DL reasoner. It can be used

pen.el-20210625


*** Modes

**** Prompt-Engineering Minor Mode

=prompt-engineering-mode= is a global minor mode for Emacs that provides keybindings for creating and executing prompts generally across Emacs.

perl-ldap-0.39


******************************************************************************* This code should be considered very much as work-in-progress. Any part of this release could be subject to change.

perthon-0.1


- Perthon is a Perl module and is written entirely in Perl 5.x. It is platform independent.
- Perthon uses Damian Conway's Parse::RecDescent Perl module (http://search.cpan.org/~dconway/Parse-RecDescent/) for language parsing.
- Perthon reimplements the Python language as specified in the Python Reference Manual and BNF grammar (http://www.python.org/doc/current/ref/ref.html).
- Perthon allows Python code to be run on the Perl 5.x interpreter, which is similar to how Jython (www.jython.org) reimplements Python on the JVM, except that Perthon works at the source code (not byte code) level. Jython is more analogous to the work underway to reimplement Python on the Parrot virtual machine (http://www.parrotcode.org/) than it is to Perthon.
- Perthon does the reverse of Bridgekeeper (http://www.crazy-compilers.com/bridgekeeper/), which attempts the (much harder) problem of Perl to Python source code machine translation.

pifuhd-20200622


This repository contains a pytorch implementation of "Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization".

piranha-20180617


Piranha is a C++11-based computer algebra library for the manipulation of algebraic objects, such as polynomials and Poisson series, commonly encountered in celestial mechanics.

piranha-20200120


Piranha is a C++11-based computer algebra library for the manipulation of algebraic objects, such as polynomials and Poisson series, commonly encountered in celestial mechanics.

pl-6.6.6


Windows is a different matter. The GMP people state it is too much trouble making a Windows version. There are several ways to get the official sources compiled, notably using MinGW [1]. An easier alternative is to use the fork created by the MPIR project, which can be found at http://mpir.org/.

planning-features-20200806


This project intends to be the most comprehensive and robust platform possible for extracting scalar features from PDDL domains and problem instances for AI planning problems.

plcop-20210601


This project makes use of two external repositories:

plopengl-0.6.2


plOpenGL is an open source project that aims to develop a complete cross-platform SWI-Prolog binding for the OpenGL, GLU and GLUT libraries.

plopengl-20171204


plOpenGL is an open source project that aims to develop a complete cross-platform SWI-Prolog binding for the OpenGL, GLU and GLUT libraries.

pokepong-20080311


You are reading this because you have obtained a Pokepong package, either in binary or source form. It is a game.

polygames-20210128


This README is a work in progress, please feel very free to post issues - we are happy to help. Save up computational power: you can find checkpoints here: http://dl.fbaipublicfiles.com/polygames/checkpoints/list.txt (feel free to open an issue for discussing which checkpoint you should use for which game/problem!).

popf2-201107


This directory contains the planner POPF2. The original incarnation of POPF is described in the ICAPS 2010 paper "Forward-Chaining Partial-Order Planning." by Amanda Coles, Andrew Coles, Maria Fox and Derek Long. This version extends POPF by introducing any-time search, allowing it to optimise solution quality.

powerloom-4.0.10


This means that you will need approximately 100 MB to work with one Lisp, one C++ and one Java version of PowerLoom in parallel. If you also want to experiment with the Lisp translation variant that uses structures instead of CLOS instances to implement STELLA objects, then you will need an extra 16 MB to compile that.

ppcg-2.0.16


Prolog+CG is a Java implementation of Prolog, with extensions for

predicting-diseases-from-symptoms-20201220


This is an attempt to predict diseases from the given symptoms. A decision tree was trained on two datasets, one had the scraped data from [here](http://people.dbmi.columbia.edu/~friedma/Projects/DiseaseSymptomKB/index.html).

predicting-human-card-selection-in-magic-the-gathering-with-contextual-preference-ranking-20210526


This will run the whole training for one epoch and regularly output the current progress, while saving the network.

prism-4.0.2


This is PRISM (Probabilistic Symbolic Model Checker).

procedural-extraction-20210424


This code provides a framework for extracting procedural information from documents. Please refer to our ACL paper ([arXiv](https://arxiv.org/abs/1906.11384)) for further descriptions.

project-codenet-20210511


A decade ago, Marc Andreessen [famously wrote](https://a16z.com/2011/08/20/why-software-is-eating-the-world/) that "software is eating the world." Software now permeates every part of our existence; Google services combine for [2 billion lines of code](https://www.wired.com/2015/09/google-2-billion-lines-codeand-one-place/), and a modern vehicle [contains around](https://www.technologyreview.com/2012/12/03/181350/many-cars-have-a-hundred-million-lines-of-code/) 100 million lines of code. It's a monumental challenge to create, debug, maintain, and update these complex software systems. Recently, a fast-growing discipline known as AI for Code aims to help software developers improve their productivity by automating the software engineering process. AI for Code researchers have been leveraging technologies like NLP and augmenting them with code analysis and compilation techniques to perform a myriad of practical tasks, such as code search, summarization, and completion, as well as code-to-code translation. The discipline isn't limited to academic research either: Ruchir Puri, IBM Research's chief research scientist, discussed in a recent [podcast](https://open.spotify.com/episode/7gHPbVBHEgSdrACTow7Gql) how technologies from AI for Code are being used to modernize legacy software by helping migrate monolithic applications to microservices for IBM's enterprise clients.

prolog-0.3.1


This will configure the ``root`` logger for the default level ``logging.INFO`` and set up two handlers: a colorized, console streaming handler, as well as a file handler set to log to the default file - ``pypro.log`` - in the main app's directory.
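A setup along these lines can be sketched with the standard library's logging module (formats here are illustrative, and the library's colorized console handler is replaced by a plain StreamHandler):

```python
import logging

def setup_logging(logfile="pypro.log"):
    """Configure the root logger at INFO with a console handler and a
    file handler, mirroring the described default setup."""
    root = logging.getLogger()
    root.setLevel(logging.INFO)

    # Console streaming handler (colorization omitted for brevity).
    console = logging.StreamHandler()
    console.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))
    root.addHandler(console)

    # File handler writing to the default log file.
    filehandler = logging.FileHandler(logfile)
    filehandler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s"))
    root.addHandler(filehandler)
    return root
```

After calling `setup_logging()`, every `logging.info(...)` call is echoed to the console and appended to the log file.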

prolog-analyzer-20210122


A static analyzing tool for Prolog written in Clojure and Prolog. The tool uses specs for predicates based on [plspec](https://github.com/wysiib/plspec) to find errors statically.

prolog-checkers-20190831


A Player vs AI game of checkers implemented in Prolog.

prolog-pddl-3-0-parser-20140825


This is a collection of scripts that rewrites PDDL 3.0 files into Prolog-friendly syntax.

prolog-scheduling-problem-20200120


This project is part of the course Declarative Programming taught at Vrije Universiteit Brussel. It can be executed by running the _swipl_ program in the directory of this project. SWI-Prolog is available [here](http://www.swi-prolog.org/). First, one of the instances should be loaded. This can be done by one of the following commands:

prolog-starter-code-20160208


The general game playing (GGP) starter code is a basic general game playing system (see http://www.general-game-playing.de for explanations) that only plays legal moves. It can be easily extended with an own strategy.

prolog-to-minizinc-20200205


This is the compiler's output:

prolog-yamltiny-master-20160504


A YAML subset parser for Prolog. The subset of YAML was partially taken from http://search.cpan.org/~adamk/YAML-Tiny-1.51/lib/YAML/Tiny.pm#YAML_TINY_SPECIFICATION

proofnumber-search-20190729


## Proof-Number Search

Proof-Number Search (PNS) is a best-first tree search algorithm applied to determine the definite value of AND/OR trees. PNS does not require domain knowledge; only terminal positions need to be recognized. PNS can be used to solve games and endgame positions.
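The proof/disproof-number bookkeeping can be sketched in Python. This is a toy best-first PNS over an explicit AND/OR tree, not any particular implementation; it assumes every non-terminal node has at least one child:

```python
import math

class Node:
    def __init__(self, is_or, value=None, children=None):
        self.is_or = is_or        # True: OR node (proponent to move)
        self.value = value        # True/False at terminals, else None
        self.children = children or []
        self.expanded = False
        self.pn = 1               # proof number: effort left to prove
        self.dn = 1               # disproof number: effort left to disprove

    def evaluate(self):
        if self.value is True:
            self.pn, self.dn = 0, math.inf
        elif self.value is False:
            self.pn, self.dn = math.inf, 0

def update(node):
    # Back up proof/disproof numbers from the children.
    pns_ = [c.pn for c in node.children]
    dns_ = [c.dn for c in node.children]
    if node.is_or:
        node.pn, node.dn = min(pns_), sum(dns_)
    else:
        node.pn, node.dn = sum(pns_), min(dns_)

def pns(root):
    root.evaluate()
    while root.pn != 0 and root.dn != 0:
        # Descend to the most-proving frontier node.
        path, node = [root], root
        while node.expanded:
            key = (lambda c: c.pn) if node.is_or else (lambda c: c.dn)
            node = min(node.children, key=key)
            path.append(node)
        # Expand it, evaluate its children, and back up the numbers.
        node.expanded = True
        for child in node.children:
            child.evaluate()
        for n in reversed(path):
            update(n)
    return root.pn == 0   # True: proven; False: disproven
```

The search terminates once the root's proof number (proven) or disproof number (disproven) reaches zero.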

propbank-20170112


This directory contains the data of the UPenn Propbank. This data is collected as an additional layer of annotation on the Penn Treebank, representing the predicate argument structure of verbs. Below is a list of each file and a description of its contents.

propbank-release-20170112


This release updates the annotations for the OntoNotes data and the English Web Treebank. An additional 160,000 predicates of data have been annotated in the BOLT corpora and will be made public when LDC releases BOLT to the general catalog. This repository will also host other English PropBank annotations whenever we are able to post them.

pset-1.01


PSET is a software package for evaluating page segmentation algorithms. It has two major functions: automatically training page segmentation algorithms on a given training dataset, and testing page segmentation algorithms on a given test dataset.

pttp-20210505


PTTP is a theorem-prover for the first-order predicate calculus that uses the model elimination inference procedure and iterative deepening search. Input formulas are compiled by PTTP for direct execution by Prolog, so individual inference operations are fast.
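Iterative deepening itself is simple to sketch. The generic Python version below works over an abstract successor function rather than PTTP's compiled-Prolog machinery, but it shows the completeness-for-cheap idea:

```python
def depth_limited(node, goal, successors, limit):
    # Depth-first search cut off at the given depth bound.
    if goal(node):
        return [node]
    if limit == 0:
        return None
    for child in successors(node):
        path = depth_limited(child, goal, successors, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iterative_deepening(start, goal, successors, max_depth=25):
    # Re-run depth-limited search with growing bounds: complete like
    # breadth-first search, but with depth-first memory use.
    for limit in range(max_depth + 1):
        path = depth_limited(start, goal, successors, limit)
        if path is not None:
            return path
    return None
```

Because each bound restarts from scratch, a shallowest solution is always found first, at the cost of re-exploring shallow levels.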

puck-20180520


Puck is a high-speed, high-accuracy parser for natural languages. It's (currently) designed for use with grammars trained with the Berkeley Parser and on NVIDIA cards. On recent-ish NVIDIA cards (e.g. a GTX 680), it parses around 400 sentences a second with a full Berkeley grammar for sentences of length <= 40.

puzzles-20200610.9aa7b7c


This is the README accompanying the source code to Simon Tatham's puzzle collection. The collection's web site is at .

pvslib-20170416


This version of the NASA PVS Library includes [Hypatheon](http://shemesh.larc.nasa.gov/people/bld/hypatheon.html). Hypatheon is a database utility that provides a capability for indexing PVS theories and making them searchable via a GUI client.

py-trees-20200509


PyTrees is a python implementation of behaviour trees designed to facilitate the rapid development of medium sized decision making engines for use in fields like robotics. Brief feature list:

pyhop-20201021


Pyhop is a simple HTN planner written in Python. It works in both Python 2 and 3.
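The core HTN idea, decomposing compound tasks via methods until only primitive operators remain, fits in a few lines. This is a self-contained toy sketch of that idea, not Pyhop's actual API:

```python
def htn_plan(state, tasks, operators, methods):
    """Decompose tasks left to right; return a primitive plan or None.
    state: dict; tasks: list of (name, *args) tuples;
    operators: name -> fn(state, *args) returning a new state or None;
    methods: name -> list of fn(state, *args) returning subtasks or None."""
    if not tasks:
        return []
    head, *rest = tasks
    name, args = head[0], head[1:]
    if name in operators:                    # primitive task: apply it
        new_state = operators[name](dict(state), *args)
        if new_state is None:
            return None
        tail = htn_plan(new_state, rest, operators, methods)
        return None if tail is None else [head] + tail
    for method in methods.get(name, []):     # compound task: try each method
        subtasks = method(state, *args)
        if subtasks is None:
            continue
        plan = htn_plan(state, subtasks + rest, operators, methods)
        if plan is not None:
            return plan
    return None
```

A one-method travel domain, for instance, reduces `("travel", a, b)` to the primitive `("walk", a, b)`.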

pyke-1.1.1


This is published under the MIT License. The copyright and license are in the file "LICENSE" in the source directory.

pyrrhus-20171214


A value-optimizing planning system

python-kasa-20210527


python-kasa is a Python library to control TP-Link smart home devices (plugs, wall switches, power strips, and bulbs) using asyncio. This project is a maintainer-made fork of the [pyHS100](https://github.com/GadgetReactor/pyHS100) project.

pytodoist-20170227


**PyTodoist** is a Python package for interacting with `Todoist `_. It hides the underlying API calls with higher-level abstractions that make it easy to use Todoist with Python.

qanus-20191124


This file is a quick reminder of the ways to run QANUS.

qfsm-0.52.0


Qfsm is a graphical tool for designing finite state machines. It is written in C++ using the Qt library. Features include:

qgrep-20190827


qgrep is an implementation of a grep database, which allows you to perform grepping (i.e. full-text searches using regular expressions) over a large set of files. Searches use the database, which is a compressed and indexed copy of the source data, and are thus much faster than vanilla grep -R.
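The indexing idea can be illustrated with a toy trigram index in Python. This handles literal-substring queries only and keeps everything in memory; qgrep itself supports regular expressions and a compressed on-disk format:

```python
from collections import defaultdict

def trigrams(text):
    return {text[i:i + 3] for i in range(len(text) - 2)}

class TrigramIndex:
    """Toy grep database: the index narrows the candidate file set,
    and only those candidates are actually scanned."""
    def __init__(self):
        self.files = {}
        self.index = defaultdict(set)

    def add(self, name, text):
        self.files[name] = text
        for gram in trigrams(text):
            self.index[gram].add(name)

    def search(self, literal):
        # Any file containing the literal must contain all its trigrams.
        candidates = None
        for gram in trigrams(literal):
            hits = self.index.get(gram, set())
            candidates = hits if candidates is None else candidates & hits
        if candidates is None:       # query shorter than 3 chars: scan all
            candidates = set(self.files)
        # Verify candidates with a real substring scan.
        return sorted(f for f in candidates if literal in self.files[f])
```

The trigram intersection typically eliminates most files before any text is scanned, which is where the speedup over plain recursive grep comes from.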

quantor-3.2


This is the source code of Quantor, a QBF solver described in:

quickcheck-swipl-0.2.3


A [detailed tutorial](http://blog.ndrix.com/2013/12/quickcheck-for-prolog.html) is available.

r6homeinventory-2.2


R6HomeInventory will be generated in the release folder and can either be run there or moved to another location.

radare2-20170205


r2 is a rewrite from scratch of radare in order to provide a set of libraries and tools to work with binary files.

rareqs-1.1


RAReQS is a solver for Quantified Boolean Formulas (QBFs). The solver tackles the given formula recursively using counterexample abstraction refinement (CEGAR). More details can be found in our SAT 2012 paper [1]. While the RAReQS algorithm [1] is applicable to any QBF in the prenex form, the current implementation supports only the QDIMACS format.

rbt-1.14


This program was written at the Department of Computer and Information Science, University of Pennsylvania, and the Spoken Language Systems Group, Laboratory for Computer Science, MIT.

reasoning-smem-soar-20210511


This is a baseline implementation. General use cases could guide restrictions that still permit tractable inference. See the slides for more conclusions.

rebel-20201209


Implementation of [ReBeL](https://arxiv.org/abs/2007.13544), an algorithm that generalizes the paradigm of self-play reinforcement learning and search to imperfect-information games. This repository contains implementation only for [Liar's Dice](https://en.wikipedia.org/wiki/Liar%27s_dice) game.

receipt-parser-20180902


Updating your housekeeping book is a tedious task: You need to manually find the shop name, the date and the total from every receipt. Then you need to write it down. At the end you want to calculate a sum of all bills. Nasty. So why not let a machine do it?
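A machine's first pass might look like the Python sketch below. The regular expressions and field heuristics (shop name on the first line, a keyword before the total) are illustrative assumptions, not the project's actual rules:

```python
import re

# Hypothetical patterns; real receipts need locale-specific tweaks.
DATE_RE = re.compile(r"\b\d{1,2}[./-]\d{1,2}[./-]\d{2,4}\b")
TOTAL_RE = re.compile(r"(?i)\b(?:total|summe|sum)\b.*?(\d+[.,]\d{2})")

def parse_receipt(text):
    """Extract shop name, date, and total from raw receipt text."""
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    shop = lines[0] if lines else None     # shop name: usually the first line
    date = DATE_RE.search(text)
    total = TOTAL_RE.search(text)
    return {
        "shop": shop,
        "date": date.group(0) if date else None,
        "total": float(total.group(1).replace(",", ".")) if total else None,
    }
```

Summing the `total` field over all parsed receipts then gives the monthly figure without any manual copying.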

recipe-interpretation-20190905


# Recipe Interpretation

This repository contains the code for [*Mise en Place*: Unsupervised Interpretation of Instructional Recipes](http://homes.cs.washington.edu/~yejin/Papers/emnlp15_cooking.pdf) by Chloe Kiddon, Ganesa Thandavam Ponnuraj, Luke Zettlemoyer, and Yejin Choi.

recorder-20190103


The _OwnTracks Recorder_ is a lightweight program for storing and accessing location data published via [MQTT](https://mqtt.org/) (or HTTP) by the [OwnTracks](http://owntracks.org) apps. It is a compiled program which is easy to install and operate even on low-end hardware, and it doesn't require an external database.

redshift-1.12


which contains x,y chromaticities as well as decimal and integer/hex RGB data. Unfortunately, the decimal values used for Redshift are not gamma-corrected while the others are. The gamma correction is part of the sRGB specification and is described in detail at http://en.wikipedia.org/wiki/SRGB. It can roughly be approximated by a power law with an exponent gamma of about 2.2. Omitting this correction results in exaggerated color values. A minor issue concerns the standard whitepoints, which are slightly off the Planckian locus. In particular, D65 (which corresponds to maximized RGB=1,1,1 in sRGB) contains slightly more green than 6500 K blackbody color. The developers of Redshift solved this by rescaling the RGB values to match 1,1,1 at 6500 K. This, however, leads to slightly incorrect colors.
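
The difference between the exact sRGB companding and the gamma-2.2 approximation mentioned above can be sketched as follows (standard formulas from the sRGB specification, written out by me for illustration):

```python
def srgb_encode(linear):
    """Exact sRGB transfer function: linear light -> encoded value in [0, 1].

    Below a small threshold the curve is linear; above it, a 1/2.4 power
    law with offset -- which is why a plain gamma is only an approximation.
    """
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def gamma22_encode(linear):
    """The rough approximation: a pure power law with gamma = 2.2."""
    return linear ** (1 / 2.2)
```

For mid-range values the two curves agree to within about one percent, but skipping the correction entirely (using the raw linear values) exaggerates the colors, as the text notes.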

relationfactory-20140521


RelationFactory is a relation extraction and knowledge-base population system. It was the top-ranked system in TAC KBP 2013 English Slot-filling (http://www.nist.gov/tac/2013/KBP/index.html). If you want to use RelationFactory in a TAC benchmark, please contact the authors (see LICENSE for details). RelationFactory uses SVMLight (http://svmlight.joachims.org/) for classification, so you must agree to the License of SVMLight, especially to it being restricted to scientific use only.

relationfactory-20140930


RelationFactory is a relation extraction and knowledge-base population system. It was the top-ranked system in TAC KBP 2013 English Slot-filling (http://www.nist.gov/tac/2013/KBP/index.html). If you want to use RelationFactory in a TAC benchmark, please contact the authors (see LICENSE for details). RelationFactory uses SVMLight (http://svmlight.joachims.org/) for classification, so you must agree to the License of SVMLight, especially to it being restricted to scientific use only.

relex-0.8.5


RelEx is a syntactic relationship extractor; it will parse English language sentences and return the relationships between different parts of the sentence.

remem-2.08


This should analyze your system, and then make appropriate binaries of ra-index and ra-retrieve. If you have trouble, make sure you are using the GNU version of make ("make --version" should produce something sensible). It might also be called "gmake" on your system. Once the compilation is finished, the code you will need is:

resolution-theorem-prover-20180405


A resolution theorem prover written in Lisp for UMaine's COS470: Artificial Intelligence course.

reverb-1.0


ReVerb is a program that automatically identifies and extracts binary relationships from English sentences. ReVerb is designed for Web-scale information extraction, where the target relations cannot be specified in advance and speed is important.

risec-20210507


This dataset contains 260 cooking recipe texts which are the same as [CURD](https://www.cs.cmu.edu/~ark/CURD/) and [SIMMR](https://camel.abudhabi.nyu.edu/simmr/). The corpus development is detailed in [our short paper](https://www.aclweb.org/anthology/2020.aacl-main.82). If our work contributes to your research, please cite the paper.

```
@inproceedings{jiang-etal-2020-recipe,
  title = "Recipe Instruction Semantics Corpus ({RIS}e{C}): {R}esolving Semantic Structure and Zero Anaphora in Recipes",
  author = "Jiang, Yiwei and Zaporojets, Klim and Deleu, Johannes and Demeester, Thomas and Develder, Chris",
  booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
  month = dec,
  year = "2020",
  address = "Suzhou, China",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/2020.aacl-main.82",
  pages = "821--826"
}
```

rits-20200911


```
Please solve: 1/2 + 3/4
|: 4/6.
This is wrong. You cannot just sum the numerators when the denominators are different!
Let us first find a common multiple of 2 and 4!
Please enter a common multiple of 2 and 4:
|: 2.
This is wrong. 2 is no common multiple of 2 and 4, since 2 is not divisible by 4!
So, let's try again!
Please enter a common multiple of 2 and 4:
|: 3.
This is wrong. 3 is not a common multiple of 2 and 4, since 3 is not divisible by 2!
So, let's try again!
Please enter a common multiple of 2 and 4:
|: 5.
This is wrong. I see you are having a hard time with this.
Hint: 2 * 4 = 8 is a possible solution.
So, let's try again!
Please enter a common multiple of 2 and 4:
|: 8.
Good, the solution is correct. There is also a smaller solution!
Now apply this knowledge to the original task!
Please solve: 1/2 + 3/4
|: 10/8.
Good, the solution is correct, but not minimal.
Please cancel common divisors in: 10/8
|: 1/4.
This is wrong! Unfortunately, I cannot give any useful hints here.
So, let's try again!
Please cancel common divisors in: 10/8
|: 5/0.
The denominator of a fraction cannot be 0.
So, let's try again!
Please cancel common divisors in: 10/8
|: 5/4.
Good, the solution is correct and also minimal. Very nice!

the interaction history:
[solve(1/2+3/4),internal(1/2+3/4=4/6),solve(cm(2,4)),internal(cm(2,4)=2),
 solve(cm(2,4)),internal(cm(2,4)=3),solve(cm(2,4)),internal(cm(2,4)=5),
 solve(cm(2,4)),internal(cm(2,4)=8),solve(1/2+3/4),internal(1/2+3/4=10/8),
 solve(cancel(10/8)),internal(cancel(10/8)=1/4),solve(cancel(10/8)),
 internal(cancel(10/8)=5/0),solve(cancel(10/8)),internal(cancel(10/8)=5/4)]
true.
```
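
The tutor's feedback on the common-multiple step can be sketched as a toy reimplementation (in Python for illustration; the system itself is written in Prolog, and these function names are my own):

```python
def check_common_multiple(m, a, b):
    """Mimic the tutor's feedback for 'enter a common multiple of a and b'."""
    for d in (a, b):
        if m % d != 0:
            return (f"This is wrong. {m} is not a common multiple of {a} and {b}, "
                    f"since {m} is not divisible by {d}!")
    return "Good, the solution is correct."

def hint(a, b):
    """The fallback hint: the product a*b is always a common multiple."""
    return f"Hint: {a} * {b} = {a * b} is a possible solution."
```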

rltk-20200801


The Record Linkage ToolKit (RLTK) is a general-purpose open-source record linkage platform that allows users to build powerful Python programs that link records referring to the same underlying entity. Record linkage is an extremely important problem that shows up in domains extending from social networks to bibliographic data and biomedicine. Current open platforms for record linkage have problems scaling even to moderately sized datasets, or are just not easy to use (even by experts). RLTK attempts to address all of these issues.

rogomatic-2.0.2


This version of Rog-O-Matic is based upon revision 22 of:

rogueutils-20200229


A small collection of utilities for making roguelikes.

rosetta-02


make inferences off the asserted data, but it is a good start, and OpenCyc

rosette-20190729


[Rosette](http://emina.github.io/rosette/) is a solver-aided programming language that extends [Racket](http://racket-lang.org) with language constructs for program synthesis, verification, and more. This repository includes the source code for Rosette, as well as several example solver-aided DSLs.

rotten-imdb-20080706


This README v2.0 (June, 2004) for the v2.0 polarity dataset comes from

rpi-software-nameclustering-1.0.0


```
# change the following accordingly
# change FILELIST to the file in which each line is a relative path
# (relative to SRC_DIR) for each source document
FILELIST=/m3/KBP/corpus/japan.corpus/filelist
```

rtec-20190103


RTEC is an extension of the [Event Calculus](https://en.wikipedia.org/wiki/Event_calculus) that supports highly-scalable stream processing. It is written in Prolog and has been tested under [YAP 6.2](http://www.dcc.fc.up.pt/~vsc/Yap/).

rtec-swi-20190114


A preliminary account of the CAVIAR event description may be found at: http://users.iit.demokritos.gr/~a.artikis/publications/eimm10-artikis.pdf

rtl-433-20170709


This software is mostly usable for developers right now.

rtl-433-20180416


This software is mostly usable for developers right now.

rtl-433-20181231


rtl_433 (despite the name) is a generic data receiver, mainly for the 433.92 MHz, 868 MHz (SRD), 315 MHz, and 915 MHz ISM bands.

rtl-433-20200506


rtl_433 (despite the name) is a generic data receiver, mainly for the 433.92 MHz, 868 MHz (SRD), 315 MHz, 345 MHz, and 915 MHz ISM bands.

rtl-entropy-20170731


This software has been tested on Debian Linux 7.1, but should work on any Linux distribution, and might run on OS X and other POSIX-compliant operating systems.

rudibugger-20191115


A video demonstrating rudibugger can be found [here](https://youtu.be/nSotEVZUEyw).

ruletaker-20210315


This repo contains tools and utilities to:

1. Generate datasets of theories and assertions meant to test the logical reasoning capabilities of a model. For details see the paper [Transformers as Soft Reasoners over Language](https://arxiv.org/abs/2002.05867).
2. Run existing theories through a theorem proving engine to obtain labels.

runtime-20190521


This project contains the GOAL runtime (standalone).

safehouse-20200722


Safehouse is a __headless__ (I didn't write any js or templates), __developer-focused__ (you config it by editing the source code), __scale-invariant__ (it only has one user) django server. You text it or (eventually) email it codewords and parameters, and it does stuff. Like send you a joke. Or text a bunch of your friends saying you're having a serious mental episode and need to talk to someone _right now_ before you cut off your hands.

sapareplan-20191028


This repository contains the code to deploy and run the Sapa Replan planner (http://rakaposhi.eas.asu.edu/kartik-dissertation.pdf), which derives from the Sapa codebase.

sbagen-1.4.5


Here is a brief intro to some of the files here:

sbcg-20210501


This is a proof-of-concept implementation of a (very!) small fragment of an English Sign-Based Construction Grammar, adapted to adhere to classic CxG assumptions. The grammar is implemented in ProFIT, a Prolog extension with Features, Inheritance, and Templates originally developed by Gregor Erbach (Universitaet des Saarlandes) in 1994. The present version of ProFIT has been ported to modern SICStus Prolog (3.8 or higher) by Mats Carlson. None of these individuals have any knowledge of the present project or share any of the blame for any of its shortcomings.

sbcl-1.5.3


The system is a work in progress. See the "TODO" file in the source distribution for some highlights.

sciknowmineproject-20200731


* [triageServer](https://github.com/BMKEG/triageServer) generates the web archive (*.war) file that runs on a web application container (such as Jetty, Tomcat, Glassfish, etc.).
* [skmTriage](https://github.com/BMKEG/skmTriage) contains the server-side logic for all administrative commands to generate, populate and edit the underlying database.
* [triageClientApp](https://github.com/BMKEG/triageClientApp) generates the *.swf file for the Flex web-application.
* [triageClientComponents](https://github.com/BMKEG/triageClientComponents) generates the *.swc library containing all the logic of the triageModule Flex component.
* [skmCore](https://github.com/BMKEG/skmCore) provides a basic layer on top of the digitalLibrary for other text mining applications using UIMA.
* [digitalLibraryDao](https://github.com/BMKEG/digitalLibraryDao) provides data access to the system for base citation and document functions.
* [lapdftext](https://github.com/BMKEG/lapdftext) is the core library for manipulating PDF documents.
* [lapdftextVpdmf](https://github.com/BMKEG/lapdftextVpdmf) links the lapdftext library to the VPDMf framework via the FTD model.
* [bmkeg-as-parent](https://github.com/BMKEG/bmkeg-as-parent) manages maven meta-data for AS projects.
* [bmkeg-parent](https://github.com/BMKEG/bmkeg-parent) manages maven meta-data for Java projects.

scone-20200801


Scone is a knowledge representation and reasoning system – a knowledge-base system or KBS – that has been developed by Scott Fahlman’s research group in the Language Technologies Institute of Carnegie Mellon University. Scone, by itself, is not a complete AI or decision-making system, and does not aspire to be; rather, it is a software component – a sort of smart active memory system – that is designed to be used in a wide range of software applications, both in AI and in other areas. Scone deals just with symbolic knowledge. Things like visualization, motor memory, and memory for sound sequences are also important for human-like intelligence, but we believe that those will have specialized representations of their own, linked in various ways to the symbolic memory.

scoot-dec-18.2007


This package contains a compiled version of Scoot for Linux, a compiled version for Cygwin, the source files of an AES core that is used for benchmarking, and some additional regression tests.

scoot-jan-23.2008


This package contains a compiled version of Scoot for Linux, a compiled version for Cygwin, the source files of an AES core that is used for benchmarking, and some additional regression tests.

scoot-ra-apr-7.2008


This package contains a version of Scoot for Linux that performs race analysis using predicate abstraction.

scoot-ra-jul-29.2008


This package contains a version of Scoot for Linux that performs race analysis using predicate abstraction.

scrapely-0.13.4


Scrapely is a library for extracting structured data from HTML pages. Given some example web pages and the data to be extracted, scrapely constructs a parser for all similar pages.

screenshot-redaction-20210410


## How Redaction Works

The redaction process is currently mostly static and fairly simple. In the future the process will be more flexible, allowing submission of photos for processing or even regions of photos. The process initially uses Tesseract OCR to find words inside the image. Once this process is finished, users are notified of completion. If a user chooses to view the redactions, the currently enabled word dictionaries are applied to the results. Dictionaries can choose to whitelist or blacklist with their own internal rules. The end result is a screenshot with zero or more words wrapped in boxes and blacked out.
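
The dictionary step can be sketched as follows (a minimal illustration, not the app's actual API; the voting scheme, where a whitelist vote overrides a blacklist vote, is an assumption):

```python
def redactions(word_boxes, dictionaries):
    """Return the boxes to black out.

    word_boxes: list of (word, (x, y, w, h)) pairs as produced by OCR.
    dictionaries: callables mapping word -> True (redact), False (keep),
                  or None (no opinion). A word is redacted if at least one
                  dictionary blacklists it and none whitelists it.
    """
    out = []
    for word, box in word_boxes:
        votes = [d(word) for d in dictionaries]
        if any(v is True for v in votes) and not any(v is False for v in votes):
            out.append(box)
    return out
```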

sde-20180625


Structured Data Extractor (SDE) is an implementation of DEPTA (Data Extraction based on Partial Tree Alignment), a method to extract data from web pages (HTML documents). DEPTA was invented by Yanhong Zhai and Bing Liu from the University of Illinois at Chicago and was published in their paper: "Structured Data Extraction from the Web based on Partial Tree Alignment" (IEEE Transactions on Knowledge and Data Engineering, 2006). Given a web page, SDE will detect data records contained in the web page and extract them into a table structure (rows and columns).

search-engine-20200601


Approach0 is a math-aware search engine.

second-brain-20210218


A curated list of awesome Public Zettelkastens 🗄️ / Second Brains 🧠 / Digital Gardens 🌱

secret-bridge-20200514


A bridge to help increase your ability to detect secrets shared on Github.

selenium-server-deb-package-20170308


This project is meant to automate Debian packaging for selenium-server. It will automatically download selenium-server from the Google Code file repository and package it with init.d scripts.

self-dialogue-corpus-20191118


# The Self-dialogue Corpus

This is an early release of the Self-dialogue Corpus containing 24,165 conversations, or 3,653,313 words, across 23 topics. For more information on the data, please see [our corpus paper](https://arxiv.org/pdf/1809.06641.pdf) or [our submission to the Alexa Prize](http://alexaprize.s3.amazonaws.com/2017/technical-article/edina.pdf).

semafor-2.1


This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

semafor-20171112


This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

semagrams-20200626


A *semagram* is a flexible structure for encoding the semantics of a given concept via a slot-filler structure.

semediawiki-1.0.1


Semantic MediaWiki is a project for extending MediaWiki with "semantic" functions that enable machine-reading of wiki-content. For details and further links, see http://semantic-mediawiki.org

semeval2018-task4


This will produce the following output files, saved in the directory models/semeval-winning-model/answers/friends_test_scene/ :

semeval2020-task11-20201112


- `configs`: yaml configs for the system
- `datasets`: contains the task datasets, which can be downloaded from the team competition webpage
- `results`: the folder for submissions
- `span_identification`: code for the task SI
  - `ner`: pytorch-transformers RoBERTa model with CRF (end-to-end)
  - `dataset`: the scripts for loading and preprocessing source dataset
  - `submission`: the scripts for obtaining and evaluating results
- `technique_classification`: code for the task TC (the folder has the same structure as `span_identification`)
- `tools`: tools provided by the competition organizers; contain useful functions for reading datasets and evaluating submissions
- `visualization_example`: example of visualization of results for both tasks

sempre-20200731


A semantic parser maps natural language utterances into an intermediate logical form, which is "executed" to produce a denotation that is useful for some task.
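
A toy illustration of this parse-then-execute pipeline (my own sketch, not SEMPRE's grammar or API): the utterance is mapped to a logical form, which is then executed to yield the denotation.

```python
def parse(utterance):
    """Map a tiny class of arithmetic utterances to a logical form."""
    words = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}
    ops = {"plus": "+", "times": "*"}
    a, op, b = utterance.lower().split()
    return (ops[op], words[a], words[b])

def execute(logical_form):
    """Execute the logical form to produce its denotation."""
    op, a, b = logical_form
    return a + b if op == "+" else a * b
```

For example, `"two plus three"` parses to the logical form `("+", 2, 3)`, whose denotation is 5.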

senselearner-2.0


SenseLearner is a system that attempts to disambiguate all open class words in any given text. It can be thought of as a minimally supervised WSD algorithm, in that it uses a small data set for training purposes. The algorithm does not need a separate classifier for each word to be disambiguated, but instead it learns global models for word categories. The current distribution comes with four models - for the various parts of speech. The implementation is however meant to be flexible, so that new models can be easily implemented and added to SenseLearner.

seq-opt-dynamic-gamer-2.0


The directory 'JavaBDD' contains the sources taken from the SourceForge project (slightly extended to enable CUDD to store BDDs on disk). The original version can be found on the web at 'http://javabdd.sourceforge.net/'. The most recent version, 2.0, is in the Subversion repository, from which we also got the jdd.jar package.

servo-platform-20200202


This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

sflux-2.0


FLUX is a high-level programming system for cognitive agents of all

sgp-1.0


This is Sensory Graphplan (SGP), release 1.0h.

sgp-20160811


This is Sensory Graphplan (SGP), release 1.0h.

sh-20191003


A shell parser, formatter, and interpreter. Supports [POSIX Shell], [Bash], and [mksh]. Requires Go 1.12 or later.

shaken-3.0


The HPKB effort showed that it is possible to create KBs by reusing the content of knowledge libraries. It was acknowledged that the ability of a subject matter expert (SME) to directly enter knowledge is essential to improve the KB construction rates. The goal of the Rapid Knowledge Formation (RKF) project is to explore and create innovative techniques for SMEs to directly enter knowledge. The SRI team is developing a system for direct knowledge entry by SMEs as an integrated team of technology developers.

sheep-20210418


2. USAGE:

```
./scripts/dist-partition.sh [options... -o $OUTPUT_FILE] $GRAPH $NUM_PARTITIONS
```

$GRAPH may be a .net (SNAP) or a .dat (XSS/Graph500 binary) file. There is a snap2xss conversion utility in llama/utils. By default, $GRAPH = test/hep-th.dat and $NUM_PARTITIONS = 2. If $NUM_PARTITIONS = 0, then we skip the partitioning phase.

shellcheck-20210202


ShellCheck is a GPLv3 tool that gives warnings and suggestions for bash/sh shell scripts:

sherlock-20200406


The following is an example of the command line to run all the tests for Sherlock. This invocation hides the progress text that Sherlock normally outputs, and instead shows the verbose output of the tests.

shinycms-20170804


ShinyCMS is an open source CMS built in Perl using the Catalyst framework.

shop3-20190605


This repository contains the open source version of the SHOP3 planner.

sigma-2.02


This file is a text-only version of the Installation Instructions

simgen-20190531


SimGen is a simulation language, originally created by Simularity, Inc.

simgen-20200202


SimGen is a simulation language, originally created by Simularity, Inc.

simgen-20200207


SimGen is a simulation language, originally created by Simularity, Inc.

simmr-data-v1.0


This data set is provided for research purposes only. If interested in commercial use, please contact both authors to connect you to the NYUAD technology transfer office.

simp-isar-mode-20210412


This is a very shitty Emacs mode for **basic** displaying and editing of Isabelle files (.thy). The idea is to avoid opening a fully fledged JEdit for trivial stuff.

sitcalc-20200418


SitCalc is a framework for managing state in an application without mutation, based on the situation calculus.
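
The core idea can be sketched generically (a Python illustration of situation-calculus style state, not SitCalc's actual Prolog API): the state is an immutable situation term built from actions, and fluents are queried against it.

```python
S0 = ()  # the initial situation: no actions performed yet

def do(action, s):
    """Performing an action yields a new situation; s is never mutated."""
    return s + (action,)

def holds_open(door, s):
    """Fluent query: a door is open iff the last action on it was an open."""
    state = False
    for act, obj in s:
        if obj == door:
            state = (act == "open")
    return state
```

Because `do` builds a new situation instead of updating one in place, earlier situations remain valid and can still be queried.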

situations-20200419


This repository provides the top-level definition for interpretations of Situations in Logtalk.

sling-20190320


The SLING project is still work in progress. We do not yet have a full system that can extract facts from arbitrary text, but we have built a number of the subsystems needed for such a system. The SLING frame store is our basic framework for building and manipulating frame semantic graph structures. The [Wiki flow pipeline](doc/guide/wikiflow.md) can take a raw dump of Wikidata and [convert](doc/guide/wikiflow.md#wikidata-import) this into one big frame graph. This can be loaded into memory so we can do fast graph traversal for inference and reasoning over the knowledge base. The Wiki flow pipeline can also take raw Wikipedia dumps and [convert](doc/guide/wikiflow.md#wikipedia-import-and-parsing) these into a set of documents with structured annotations extracted from the Wiki markup. This also produces [phrase tables](doc/guide/wikiflow.md#name-and-phrase-tables) that are used for mapping names to entities. There is a [SLING Python API](doc/guide/pyapi.md) for accessing all this information and we also have a [bot](python/wikibot) for uploading extracted facts to Wikidata.

smem-question-answering-20210511


This work includes data from NextKB, which was compiled by the Qualitative Reasoning Group at Northwestern University. NextKB is freely available under the Creative Commons Attribution 4.0 license from http://qrg.northwestern.edu/nextkb/index.html. The included data was created by contributors to the Qualitative Reasoning Group, contributors to Cycorp's OpenCyc, University of California at Berkeley's FrameNet project, the VerbNet project, and Princeton University's WordNet project. For details of attributions, please see http://www.qrg.northwestern.edu/nextkb/license.html

snowman-20170205


[Snowman](http://derevenets.com/) is a native code to C/C++ decompiler, supporting x86, AMD64, and ARM architectures. You can use it as a standalone GUI application, command-line tool, IDA plug-in, or a library. Snowman is [free software](doc/licenses.asciidoc).

soapui-4.0.1


soapUI 3.5.1 is mainly a bug-fix release with dozens of minor improvements and

soarsuite-9.3.1


This release of Soar continues the 9.3 line which includes modules for reinforcement learning (RL), episodic memory (EpMem), and semantic memory (SMem), as well as everything from previous versions of Soar. All learning mechanisms are disabled by default. This release is primarily a bug fix release.

soarsuite-9.6.0


Welcome to Soar! Soar 9.6.0 is the current, stable version of Soar. It is the first major release in a few years and includes six key new features and hundreds of important bug fixes and code improvements:

soothsayer-0.6.2


This document will guide you through the steps required to configure and build the Soothsayer system and libraries. You should be ready to run Soothsayer in a few minutes.

spacemacs-master-20210630


Spacemacs is a new way to experience Emacs -- a sophisticated and polished set-up focused on ergonomics, mnemonics and consistency.

spade-2.2.1


Welcome to SPADE
================

SPADE (Smart Python multi-Agent Development Environment) is a Multiagent and Organizations Platform based on the XMPP/Jabber technology and written in the Python programming language. This technology offers by itself many features and facilities that ease the construction of MAS, such as an existing communication channel, the concepts of users (agents) and servers (platforms) and an extensible communication protocol based on XML, just like FIPA-ACL. Many other agent platforms exist, but SPADE is the first to base its roots on the XMPP technology.

spade-20110710


SPADE (Sentence-level PArsing for DiscoursE) is a discourse parser at sentence level written by Radu Soricut at USC/ISI. You can find details about the approach implemented by SPADE in the paper:

speech-acts-classifier-20170726


An experiment with parsing natural language and classifying the [speech act](https://en.wikipedia.org/wiki/Speech_act) of the sentence. This is especially important when a machine is trying to understand the meaning of a sentence in an environment, like a chat session, where missing punctuation is common.

spejd-0.84


Spejd is a shallow parser, which allows for simultaneous syntactic parsing and morphological disambiguation, developed at the Institute of Computer Science, Polish Academy of Sciences, Warsaw.

spf-20170112


The framework contains an example experiment using the GeoQuery corpus. To use development fold 0 for testing, and training on the other folds, use:

``java -jar dist/spf-1.4.jar geoquery/experiments/template/dev.cross/dev.fold0.exp``

The log and output files are written to a newly generated directory in the experiment directory: ``geoquery/experiments/template/dev.cross/``

srlconll-1.1


This software is distributed to support the CoNLL-2005 Shared Task. It is free for research and educational purposes.

stanford-corenlp-20120409


This section summarizes changes between released versions of the suite.

stanford-corenlp-full-20140104


2014-01-04 3.3.1 Bugfix release

stanford-ner-200-6.09.18


This package provides a high-performance machine learning based named entity recognition system, including facilities to train models from supervised training data and pre-trained models for English.

stanford-ner-20080306


This package provides a high-performance machine learning based named entity recognition system, including facilities to train models from supervised training data and pre-trained models for English.

stanford-parser-20110627


This release prepared by John Bauer.

stanford-parser-20140827


This release prepared by John Bauer.

stanford-parser-full-2017-06-09


This release was prepared by Jason Bolton.

stefanrank-actaffactviewer-20190304


What's this?
============

ActAffAct is the product of the master's thesis of Stefan Rank. It is a small proof of concept program that extends a BDI architecture with an appraisal component. It tries to demonstrate the applicability of such an architecture to the area of emergent narrative.

stella-3.5.0


This means that you will need approximately 55 MB to work with one Lisp, one C++ and one Java version of STELLA in parallel. If you also want to experiment with the Lisp translation variant that uses structures instead of CLOS instances to implement STELLA objects, then you will need an extra 8 MB to compile that.

stet-20071125


This is an entirely preliminary, undocumented, unsupported release of stet. Files may be missing. Scatology may be unexpurgated. I don't have much time to help you with this right now. You need RT; we're using version 3.2. There are perl dependencies. There are unstated assumptions. But you asked for it. You got it.

stevedraper-ggp-base-60143e2


A simple Prover-based state machine implementation is included in GGP Base, so you don't need to worry about the details of converting a game description into a state machine. To write a gamer based on StateMachineGamer, derive your class from players.gamer.statemachine.StateMachineGamer. Applications like the PlayerPanel should automatically recognize your new class and it should appear in their lists of available players right away.

stockfish-6


Stockfish is a free UCI chess engine derived from Glaurung 2.1. It is not a complete chess program and requires some UCI-compatible GUI (e.g. XBoard with PolyGlot, eboard, Arena, Sigma Chess, Shredder, Chess Partner or Fritz) in order to be used comfortably. Read the documentation for your GUI of choice for information about how to use Stockfish with it.

strategic-tactical-pandora-20191127


The XML has one root node, which contains three possible children:

stripstate-20200409


STRIPState is a framework for managing state in an application without mutation, based on STRIPS and the situation calculus.

superglus-20200216


Superglus is an interactive fiction (text adventures) authoring system strongly based on the Professional Adventure Writing System.

swim-20210314


SWIM is a compact library that implements the basic functionality of [Genetic Programming (GP)](#fg), a popular stochastic approach to program synthesis. I developed its early version in the process of preparing my recent [book](#bps) on behavioral program synthesis using GP.

swirl-1.1.0


SwiRL is a Semantic Role Labeling (SRL) system constructed on top of the full syntactic analysis of text. The syntactic analysis is performed using Eugene Charniak's parser (included in this package). SwiRL trains one classifier for each argument label using a rich set of syntactic and semantic features. The classifiers are learned using one-vs-all AdaBoost classifiers, using Xavier Carreras' AdaBoost software (included in this package).

symptom-disease-20201220


This model is used to predict symptoms that are closely related to a given symptom. It can be used in cases (read apps) where the user enters a symptom, and a list of similar symptoms pop up, of which the user can select the ones he's suffering from, and these can be further fed into a model that can then predict the disease the person is suffering from, and redirect him to the associated specialist. The latter part isn't included here.

symptom-tree-20201220


This function reads and processes the data file, then initializes the SymptomTree class using this processed data. This class contains attributes for the DecisionTreeClassifier model (model), the cleaned NAMCS dataset (data), a dictionary mapping diagnoses to unique identifier codes (diagnosis_dict), a dictionary mapping unique codes to diagnosis strings (rev_diagnosis_dict), the x training dataset (x_train), the y training dataset (y_train), the x testing dataset (x_test), the y testing dataset (y_test), predicted diagnoses (y_hat), and a lookup attribute.
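
The two diagnosis lookup attributes form an invertible code mapping; a minimal sketch of how such a pair might be built (a hypothetical helper for illustration, not the repository's code):

```python
def build_diagnosis_dicts(diagnoses):
    """Assign each distinct diagnosis a unique integer code and return
    the forward (diagnosis -> code) and reverse (code -> diagnosis) maps."""
    diagnosis_dict = {}
    for d in diagnoses:
        if d not in diagnosis_dict:
            diagnosis_dict[d] = len(diagnosis_dict)
    rev_diagnosis_dict = {code: d for d, code in diagnosis_dict.items()}
    return diagnosis_dict, rev_diagnosis_dict
```

The integer codes serve as class labels for the classifier, and the reverse map turns predicted codes back into diagnosis strings.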

sytora-20201220


# Sytora

Sytora is a multilingual symptom-disease classification app. Translation is managed through the UMLS coding standard. A multinomial Naive Bayes classifier is trained on a handpicked dataset, which is freely available under CC4.0.
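
The classifier at the core is a standard multinomial Naive Bayes over symptom tokens; a from-scratch sketch of the technique (illustrative only, not Sytora's implementation, which presumably uses a library):

```python
import math
from collections import Counter, defaultdict

class MultinomialNB:
    """Multinomial Naive Bayes with Laplace (add-one) smoothing."""

    def fit(self, docs, labels):
        # docs: lists of symptom tokens; labels: disease per doc
        self.vocab = sorted({w for doc in docs for w in doc})
        self.class_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        for doc, y in zip(docs, labels):
            self.word_counts[y].update(doc)
        return self

    def _log_posterior(self, doc, y):
        total = sum(self.word_counts[y].values())
        logp = math.log(self.class_counts[y] / sum(self.class_counts.values()))
        for w in doc:
            # add-one smoothing over the vocabulary
            logp += math.log((self.word_counts[y][w] + 1) / (total + len(self.vocab)))
        return logp

    def predict(self, doc):
        return max(self.class_counts, key=lambda y: self._log_posterior(doc, y))
```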

tagfsai-20071228


This program is an attempt to create a working algorithm that will be able to organize a hierarchy of tags based on the tag sets given to the program through an input file.

tarski-20210209


## What is Tarski

Tarski is a framework for the specification, modeling and manipulation of [AI planning](https://en.wikipedia.org/wiki/Automated_planning_and_scheduling) problems. Tarski is written in Python and includes parsers for major modeling languages (e.g., [PDDL](https://en.wikipedia.org/wiki/Planning_Domain_Definition_Language), [FSTRIPS](https://dl.acm.org/citation.cfm?id=566359), [RDDL](https://en.wikipedia.org/wiki/Planning_Domain_Definition_Language#RDDL)), along with modules to perform other common tasks such as logical transformations, reachability analysis, grounding of first-order representations and problem reformulations.

tauchain-prolog-20200117


TML (Tau Meta-Language) is a variant of Datalog. It is intended to serve as a translator between formal languages (among other uses; see the Philosophy section). The main difference between TML and common Datalog implementations is that TML works under Partial Fixed-Point (PFP) semantics, while common implementations follow the Well-Founded Semantics (WFS) or stratified Datalog. Like WFS, TML imposes no syntactic restrictions on negation; unlike WFS or stratified Datalog, however, it is PSPACE-complete rather than P-complete. TML's implementation relies heavily on BDDs (Binary Decision Diagrams) in its internals. This gives it extraordinary performance in time and space, and makes negation feasible even over large universes. In fact, thanks to the BDD mechanism, negated bodies, as below, consume no more time or space than positive bodies.
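To illustrate what PFP semantics means, here is a toy sketch in Python (not TML syntax): the step operator is iterated, and if iteration reaches a state that maps to itself, that state is the result; a cycle with no fixed point yields the empty result. For a deterministic step operator, cycle detection is equivalent to the usual bound on the number of iterations.

```python
# Toy Partial Fixed-Point evaluation over database states.
def pfp(step, db, max_steps):
    seen = []
    for _ in range(max_steps):
        nxt = step(db)
        if nxt == db:          # genuine fixed point: this is the answer
            return db
        if nxt in seen:        # cycle with no fixed point: empty result
            return set()
        seen.append(db)
        db = nxt
    return set()

# a rule with unrestricted negation: p holds next step iff p does not hold now
flip = lambda db: set() if "p" in db else {"p"}
print(pfp(flip, set(), 8))     # oscillates forever, so the PFP result is empty

grow = lambda db: db | {"q"}   # a monotone rule reaches a fixed point
print(pfp(grow, set(), 8))
```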

tei-20190928


The [TEI](https://www.tei-c.org) is an international and interdisciplinary standard used by libraries, museums, publishers, and academics to represent all kinds of literary and linguistic texts, using an encoding scheme that is maximally expressive and minimally obsolescent.

tei-emacs-20190928


This is version 3 of the TEI-EMACS installation: a more or less complete SGML/XML authoring system, which combines GNU-Emacs with PSGML and a host of other relevant emacs customizations for writing and validating SGML or XML documents. Most XML-emacs subsystems have their own help system or documentation.

templater-0.4.0


- **Template**: the whole object (instance of ``Templater``).
- **Document**: a string or file that has some kind of pattern. You use documents to make a template object learn and recognize these patterns, so that later you can use the template object to parse a document and extract only the information that is not "static".
- **Blocks**: the fixed parts of a template. They can change (in number and size) when ``learn`` is run.
- **Blanks**: also called holes or variables; the parts of a template that change between documents with the same template.
- **Template definition**: the information stored in a template that defines it (a Python list with a very simple grammar that describes how the template is composed).
- **Markers**: when you want to save a template, something must be put between blocks to "mark" the blanks (so the template definition can be reconstructed later).
- **Named marker**: a marker plus a header. Named markers are handy and more legible, since you can access the "blanks" by name instead of by index.
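The learn/parse idea can be sketched roughly as follows. This is a simplified character-level illustration, not Templater's actual API: the `learn` and `parse` helpers here are invented stand-ins.

```python
# Simplified illustration of block/blank learning; NOT Templater's real API.
from difflib import SequenceMatcher

def learn(doc_a, doc_b):
    """Blocks are the fixed runs of text shared by both example documents."""
    m = SequenceMatcher(None, doc_a, doc_b)
    return [doc_a[b.a:b.a + b.size] for b in m.get_matching_blocks() if b.size]

def parse(blocks, doc):
    """Blanks are whatever sits between the fixed blocks in a new document."""
    blanks, pos = [], 0
    for block in blocks:
        i = doc.index(block, pos)
        blanks.append(doc[pos:i])
        pos = i + len(block)
    blanks.append(doc[pos:])
    return blanks

blocks = learn("Hello, John!", "Hello, Paul!")
print(blocks)                          # the fixed parts shared by both documents
print(parse(blocks, "Hello, Ringo!")) # the middle blank holds "Ringo"
```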

temporal-planning-20200122


This documentation aims to explain how experiments with the planners introduced by [[Jiménez, Jonsson and Palacios, 2015]](#ref-tmp-planning-icaps15) and [[Furelos-Blanco, Jonsson, Palacios and Jiménez, 2018]](#ref-tmp-planning-coplas18) can be run.

tensorflow-rnn-events-prediction-20190706


# Tensorflow RNN to Events Prediction **[NOTE]**: *This notebook was made with [Tensorflow v.0.8.0](https://github.com/tensorflow/tensorflow/releases/tag/v0.8.0) and the code is not compatible with the newest release of Tensorflow. For the moment I don't have time to upgrade the code, so use the notebook more as an illustration of the GDELT dataset and time series analysis.*

terminus-server-20191120


TerminusDB is an open source, model-driven graph database for knowledge graph representation, designed specifically for the web age.

tesseract-2.01


Introduction ============ This package contains the Tesseract Open Source OCR Engine. Originally developed at Hewlett-Packard Laboratories Bristol and at Hewlett-Packard Co, Greeley, Colorado, all the code in this distribution is now licensed under the Apache License:

texco-0.1.3


This program is provided under the BSD license, contained at the file COPYING of the Texco sources.

texmacs-1.99.13


GNU TeXmacs is a free scientific text editor, inspired by both TeX and GNU Emacs. The editor allows you to write structured documents via a wysiwyg (what-you-see-is-what-you-get) and user-friendly interface. New styles may be created by the user. The program implements high-quality typesetting algorithms and TeX fonts, which help you produce professional-looking documents.

text-pair-0.9


This is version 0.9 of the Text::Pair module for the identification of textual reuse in large corpora. The contents are:

text-senseclusters-1.05


SYNOPSIS SenseClusters is a suite of Perl programs that supports unsupervised clustering of similar contexts. It relies on its own native methodology, and also provides support for Latent Semantic Analysis.
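The underlying idea can be sketched with scikit-learn as a stand-in for the Perl suite; the corpus and parameters below are invented for illustration. Each context becomes a term vector, LSA (truncated SVD) reduces it, and k-means groups contexts that use the target word in a similar sense.

```python
# Illustrative context clustering with LSA support (not SenseClusters itself).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

contexts = [
    "deposited the cheque at the bank branch",
    "the bank raised its interest rates",
    "fishing from the grassy river bank",
    "sat on the bank of the river",
]
x = CountVectorizer().fit_transform(contexts)              # term matrix
x_lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(x)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(x_lsa)
print(labels)  # one cluster per sense of "bank" (cluster ids are arbitrary)
```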

textbelt-20170221


TextBelt Open Source is a REST API that sends outgoing SMS. It uses a free mechanism for sending texts, different from the more reliable paid version available at https://textbelt.com.

textmap


The systems produced within TextMap focus on methods and techniques for answering:
* Factoid questions: What is the capital of Morocco?
* Cause questions: Why is there no cure for the cold?
* Biography questions: What do you know about Dick Cheney?
* Event questions: What do you know about the Kobe earthquake?
TextMap employs a combination of rule-based and supervised and unsupervised machine learning algorithms that are trained on massive amounts of data.

the-silver-searcher-20200129


A code searching tool similar to `ack`, with a focus on speed.

thes-ga-ie-2


This is version 1.001 of Lonra Simeantach na Gaeilge for OpenOffice.org.

ticcutils-20170708


This module contains useful functions for general use in the TiCC software stack and beyond.

tielt-083


This interface is created for the chess.xml game

tifmo-20140707


TIFMO (Textual Inference Forward-chaining MOdule) is an unsupervised Recognizing Textual Entailment (RTE) system based on Dependency-based Compositional Semantics (DCS) and logical inference.

timbl-6.4.9


TiMBL is an open source software package implementing several memory-based learning algorithms, among which IB1-IG, an implementation of k-nearest neighbor classification with feature weighting suitable for symbolic feature spaces, and IGTree, a decision-tree approximation of IB1-IG. All implemented algorithms have in common that they store some representation of the training set explicitly in memory. During testing, new cases are classified by extrapolation from the most similar stored cases.
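As an illustration of the memory-based approach, here is a sketch of the IB1-IG idea (not TiMBL's actual implementation or file formats): store all training instances, weight each symbolic feature by its information gain, and classify a new case by its nearest stored neighbour under the weighted overlap metric.

```python
# Toy IB1-IG-style memory-based classifier over symbolic features.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def ig_weights(X, y):
    """Information gain of each feature column over the training set."""
    h = entropy(y)
    weights = []
    for j in range(len(X[0])):
        by_value = {}
        for row, label in zip(X, y):
            by_value.setdefault(row[j], []).append(label)
        rest = sum(len(ls) / len(y) * entropy(ls) for ls in by_value.values())
        weights.append(h - rest)
    return weights

def classify(X, y, w, query):
    def dist(row):  # weighted overlap: each mismatch costs its feature's weight
        return sum(wj for a, b, wj in zip(row, query, w) if a != b)
    return y[min(range(len(X)), key=lambda i: dist(X[i]))]

# toy training instances with two symbolic features
X = [("vowel", "short"), ("vowel", "long"), ("cons", "short"), ("cons", "long")]
y = ["A", "A", "B", "B"]
w = ig_weights(X, y)                       # feature 0 is fully predictive
print(classify(X, y, w, ("vowel", "long")))  # extrapolates from stored cases
```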

timbuk3.2-20181124


Timbuk is a tool designed to compute or over-approximate sets of terms reachable by a given term rewriting system. The library also provides an OCaml toplevel with all the usual functions on Bottom-up Nondeterministic Tree Automata.

tinycog-0.0.1


TinyCog is written in SWI-Prolog, so you have to install SWI-Prolog V6.6 or higher from www.swi-prolog.org/download/. On Windows just double-click on any of the *.pl files below. Each file contains a test predicate that will display status information.

tla-mode-0.4


This directory holds a preliminary alpha release of TLA mode, a major mode for GNU Emacs that supports the process of writing specifications in TLA+.

tla-mode-doc-0.4


This distribution contains the Info and DVI files for the documentation of the 3.69 test release of GNU Make. The program and documentation sources may be found in the file make-3.69.tar.Z, probably available from the same place you got this.

torch-rnn-master-20160416


This will produce files `my_data.h5` and `my_data.json` that will be passed to the training script.

toxic-comments-classification-20210114


Disclaimer: the dataset for this competition contains text that may be considered profane, vulgar, or offensive.

tptp-6.0.0


Conditions of use
-----------------
The principal motivation for the TPTP is to support the testing and evaluation of ATP systems, to help ensure that performance results accurately reflect the capabilities of the ATP systems being considered. You should abide by the following conditions when using TPTP problems and presenting your results.
+ The TPTP release number must be stated.
+ Each problem must be referenced by its unambiguous name.
+ The problem formulae should, as far as is possible, not be changed in any way. Any changes made (addition, removal, reordering, reformatting, etc.) must be explicitly noted.
+ Any information given to the ATP system, other than that in the formulae, must be explicitly noted. All system switches and settings must be recorded. The header information in TPTP problems may not be used by the ATP system without explicit notice.

tptp-7.2.0


Conditions of use
-----------------
The principal motivation for the TPTP is to support the testing and evaluation of ATP systems, to help ensure that performance results accurately reflect the capabilities of the ATP systems being considered. You should abide by the following conditions when using TPTP problems and presenting your results.
+ The TPTP release number must be stated.
+ Each problem must be referenced by its unambiguous name.
+ The problem formulae should, as far as is possible, not be changed in any way. Any changes made (addition, removal, reordering, reformatting, etc.) must be explicitly noted.
+ Any information given to the ATP system, other than that in the formulae, must be explicitly noted. All system switches and settings must be recorded. The header information in TPTP problems may not be used by the ATP system without explicit notice.

tptp2x-v-6.4.0


Introduction
------------
The tptp2X utility is a multi-functional utility for reformatting, transforming, and generating TPTP problem files. In particular, it
+ Converts from the TPTP format to formats used by existing ATP systems.
+ Applies various transformations to the clauses of TPTP problems.
+ Controls the generation of TPTP problem files from TPTP generator files.

transcriptserver3-1.3.0


This version of TS3 has a new column, called unsubInLabels in peer_label table. Consequently there is a new foreign key constraint and optional drop statement.

transpiler-20200205


*Universal-transpiler* is a source-to-source compiler that translates a small subset of several programming languages into several others. It is also able to translate several metasyntax notations, such as EBNF and ABNF. The translation is not always 100% accurate, but I hope it will still be useful.

tranx-20200229


A general-purpose **Tran**sition-based abstract synta**X** parser that maps natural language queries into machine executable source code (e.g., Python) or logical forms (e.g., lambda calculus). **[Online Demo](http://moto.clab.cs.cmu.edu:8081/)**.

tree-tagger-arm6-4.3.2


* -proto: If this option is specified, the tagger creates a file named "lexicon-protocol.txt", which contains information about the degree of ambiguity and about the other possible tags of a word form. The part of the lexicon in which the word form has been found is also indicated. 'f' means fullform lexicon and 's' means affix lexicon. 'h' means that the word contains a hyphen and that the part of the word following the hyphen has been found in the fullform lexicon.
* -eps: Value which is used to replace zero lexical frequencies. This is the case if a word/tag pair is contained in the lexicon but not in the training corpus. The default is 0.1. The choice of this parameter has some minor influence on tagging accuracy.
* -beam: If the tagger is slow, this option can be used to speed it up. Good values are in the range 0.001-0.00001.
* -base: If this option is specified, only lexical information is used for tagging, but no contextual information about the preceding tags. This option is only useful in order to obtain a baseline result to which to compare the actual tagger output.

tree-tagger-linux-3.2.1


* -proto: If this option is specified, the tagger creates a file named "lexicon-protocol.txt", which contains information about the degree of ambiguity and about the other possible tags of a word form. The part of the lexicon in which the word form has been found is also indicated. 'f' means fullform lexicon and 's' means affix lexicon. 'h' means that the word contains a hyphen and that the part of the word following the hyphen has been found in the fullform lexicon.
* -eps: Value which is used to replace zero lexical frequencies. This is the case if a word/tag pair is contained in the lexicon but not in the training corpus. The default is 0.1. The choice of this parameter has some minor influence on tagging accuracy.
* -beam: If the tagger is slow, this option can be used to speed it up. Good values are in the range 0.001-0.00001.
* -base: If this option is specified, only lexical information is used for tagging, but no contextual information about the preceding tags. This option is only useful in order to obtain a baseline result to which to compare the actual tagger output.

tree-tagger-macosx-3.2


* -proto: If this option is specified, the tagger creates a file named "lexicon-protocol.txt", which contains information about the degree of ambiguity and about the other possible tags of a word form. The part of the lexicon in which the word form has been found is also indicated. 'f' means fullform lexicon and 's' means affix lexicon. 'h' means that the word contains a hyphen and that the part of the word following the hyphen has been found in the fullform lexicon.
* -eps: Value which is used to replace zero lexical frequencies. This is the case if a word/tag pair is contained in the lexicon but not in the training corpus. The default is 0.1. The choice of this parameter has some minor influence on tagging accuracy.
* -beam: If the tagger is slow, this option can be used to speed it up. Good values are in the range 0.001-0.00001.
* -base: If this option is specified, only lexical information is used for tagging, but no contextual information about the preceding tags. This option is only useful in order to obtain a baseline result to which to compare the actual tagger output.

treetagger-3.1


* -token: The words/tokens are printed in addition to the POS tags.
* -lemma: Lemmas are printed as well.
* -sgml: This option instructs the tagger to ignore tokens which start with '<' and end with '>' (SGML tags).
* -lex: The file contains additional lexicon entries to be used by the tagger. The file format is identical to the format of the lexicon argument of the training program (see below).
* -no-unknown: If an unknown word is encountered, emit the word form as lemma. This was previously the default behaviour. Now, the default behaviour is to print "" as lemma.
* -threshold: This option tells the tagger to print all tags of a word with a probability higher than the given threshold times the largest probability. (The tagger will use a different algorithm in this case and the set of best tags might be different from the tags generated without this option.)
* -prob: Print tag probabilities (in combination with option -threshold).
* -pt-with-prob: If this option is specified, then each pretagging tag (see above) has to be followed by a whitespace and a tag probability value.
* -pt-with-lemma: If this option is specified, then each pretagging tag (see above) has to be followed by a whitespace and a lemma. Lemmas may contain blanks. If both -pt-with-prob and -pt-with-lemma have been specified, then each pretagging tag is followed by a probability and a lemma in that order.

trindikit-4.0.2


This is an alpha release of Trindikit4.

trueviz-1.02


TrueViz (ground TRUth/metadata Editing & VIsualiZing Toolkit) is a tool for visualizing and editing ground truth and metadata for OCR. TrueViz is developed in the Java programming language, so it runs on various platforms. TrueViz reads and stores ground truth and metadata in XML format, and reads the corresponding image stored in TIFF format.

ts-20201129


This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

tsimmis-20181123


This version of Tsimmis code contains the following components:

ttk-20180518


This is the main repository for the Tarsqi Toolkit (TTK), a set of processing components for extracting temporal information from news wire texts. TTK extracts time expressions, events, subordination links and temporal links; in addition, it can ensure consistency of temporal information.

udeplambda-20191020


UDepLambda is a framework to convert Universal Dependencies trees to Logical Forms. It maps natural language to logical forms in an almost language-independent framework. For more details, please refer to our papers below.

uiuc-ie-pipeline-fine-grained-20201114


### Running on raw text data

* Prepare a data directory `data` containing sub-directories `rsd` and `ltf`. The `rsd` sub-directory contains RSD (Raw Source Data, ending with `*.rsd.txt`) files, and the `ltf` sub-directory has LTF (Logical Text Format, ending with `*.ltf.xml`) files.
* If you have RSD files, please use [`aida_utilities/rsd2ltf.py`](https://github.com/limanling/uiuc_ie_pipeline_finegrained_source_code/blob/master/aida_utilities/rsd2ltf.py) to generate the LTF files.
  ```bash
  docker run --rm -v ${ltf_dir}:${ltf_dir} -v ${rsd_dir}:${rsd_dir} -i limanling/uiuc_ie_m36 /opt/conda/envs/py36/bin/python /aida_utilities/rsd2ltf.py --seg_option nltk+linebreak --tok_option nltk_wordpunct --extension .rsd.txt ${rsd_dir} ${ltf_dir}
  ```
* If you have LTF files, please use the AIDA ltf2rsd tool (`LDC2018E62_AIDA_Month_9_Pilot_Eval_Corpus_V1.0/tools/ltf2txt/ltf2rsd.perl`) to generate the RSD files.
* Start services:
  ```bash
  sh set_up_m36.sh
  ```
* Run the scripts. Note that the file paths are absolute paths.
  ```bash
  sh pipeline_full_en.sh ${data_root}
  ```
  For example,
  ```bash
  sh pipeline_full_en.sh ${PWD}/data/testdata_dryrun
  ```

ukb-2.1


UKB is a collection of programs for performing graph-based Word Sense Disambiguation and lexical similarity/relatedness using a pre-existing knowledge base.

ulo-20200803


# The Upper Library Ontology (for metadata on theorem prover libraries) This repository contains the [OWL2](https://www.w3.org/TR/owl2-overview/) implementation of the Upper Library Ontology [ulo.owl](ulo.owl) and [OWLDoc documentation](OWLDoc/).

umop-1.2


This is Release 1.2 of the Universal Multi-agent Obdd-based Planner.

unbeast-0.6


+-------------------------------------------------------------------+ | 0. About this tool | +-------------------------------------------------------------------+ The README file you are reading is part of the distribution of the Unbeast tool for synthesis of finite state systems from specifications written in LTL. Note that this is a prototype tool and is mainly distributed to allow other researchers in this area to compare their implementations against this one. As a prototype tool, bugs are likely to exist.

unison-20210412


[Unison](https://unisonweb.org) is a new programming language, currently under active development. It's a modern, statically-typed purely functional language, similar to Haskell, but with the ability to describe entire distributed systems with a single program. Here's an example of a distributed map-reduce implementation:

universe-starter-agent-20190223


The codebase implements a starter agent that can solve a number of `universe` environments. It contains a basic implementation of the [A3C algorithm](https://arxiv.org/abs/1602.01783), adapted for real-time environments.

unixodbc-2.3.4


It seems there is a problem with libtool on OSX that incorrectly sets SHLIBEXT. The Driver Manager code will now report this if it is used in this condition. There are two solutions. Either, after running configure, check config.h and search for the string SHLIBEXT. It should look like this:

upshot-montague-20170112


`montague` is a little CCG semantic parsing library for Scala.

usc-ds-relationextraction-20200725


# USC Distantly-supervised Relation Extraction System This repository puts together recent models and data sets for **sentence-level relation extraction** *using knowledge bases (i.e., distant supervision)*. In particular, it contains the source code for WWW'17 paper *[CoType: Joint Extraction of Typed Entities and Relations with Knowledge Bases](https://arxiv.org/pdf/1610.08763.pdf)*.

utility-monitor-20200116


This uses [rtlamr][rtlamr] to process the radio broadcasts from the meter. I live in a less dense location than the blog author, so I only picked up three meters using the `idm+` message. My meter included a serial number on its face that directly matched one of those three meters, so it was very easy to get the right reading.

uvi-20191018


This file contains suggestions we may need to pay attention to when maintaining the UVI. If you have any ideas or make any changes, please write them in this file.

uwn-tsv-20191121


UWN is an automatically constructed multilingual lexical knowledge base based on the structure of Princeton WordNet. Please see the web site above for more information.

vagrant-mutate-20191009


Vagrant-mutate is a vagrant plugin to convert vagrant boxes to work with different providers.

val-20191127


This repository hosts tools for AI Planning plans and planning models.

val-20210114


This repository hosts tools for AI Planning plans and planning models.

val-4.2.08


This code is written in C++ using the STL. It is known to compile under Linux with Gnu C++ (3.4.0, 3.3.3, 3.2.2 and 2.96) and the associated STL.

val-4.2.09


This code is written in C++ using the STL. It is known to compile under Linux with Gnu C++ (3.4.0, 3.3.3, 3.2.2 and 2.96) and the associated STL.

valex-20200306


This directory includes the following materials:

vampire-20210424


![GitHub Workflow Status (branch)](https://img.shields.io/github/workflow/status/vprover/vampire/CI/master) ![GitHub release (latest by date)](https://img.shields.io/github/v/release/vprover/vampire)

vampire-4.2.2


This is a brief introduction to this repository. Please see the Vampire website for more general information about Vampire. Please see LICENCE for usage restrictions. Note that Vampire makes use of minisat and z3 and some of this code is included in this codebase, such code is provided under their own licence.

visualsfm-linux-20170723



viz.js-20210306


This project builds [Graphviz](http://www.graphviz.org) with [Emscripten](http://kripken.github.io/emscripten-site/) and provides a simple wrapper for using it in the browser.

vonda-20191115


VOnDA is a framework for the implementation of reactive dialogue management functionality in dialogue systems for virtual agents. Although domain-independent, VOnDA is tailored towards dialogue systems with a focus on social communication, which implies the need of a long-term memory and high user adaptivity.

vs-code-default-keybindings-20210707


A list of the default keybindings for VS Code is surprisingly hard to find, even in the VS Code source, so I collected them all here. I've also included `negative` keybindings, which unmap the keybindings.

vscode-20210710


This repository ("`Code - OSS`") is where we (Microsoft) develop the [Visual Studio Code](https://code.visualstudio.com) product together with the community. Not only do we work on code and issues here, we also publish our [roadmap](https://github.com/microsoft/vscode/wiki/Roadmap), [monthly iteration plans](https://github.com/microsoft/vscode/wiki/Iteration-Plans), and our [endgame plans](https://github.com/microsoft/vscode/wiki/Running-the-Endgame). This source code is available to everyone under the standard [MIT license](https://github.com/microsoft/vscode/blob/main/LICENSE.txt).

vscode-emacs-mcx-20210709


This Visual Studio Code extension provides emacs-like keybindings and operations. This is inspired by [the great vscode extension by hiro-sun](https://github.com/hiro-sun/vscode-emacs) and its forks such as [vscode-emacs-friendly by Sebastian Zaha](https://github.com/SebastianZaha/vscode-emacs-friendly), [vscode-emacs-improved by rkwan94](https://github.com/rkwan94/vscode-emacs) and [vscode-emacs-neon by NotKyon](https://github.com/NotKyon/vscode-emacs-neon).

vscode-pddl-20210128


This extension makes VS Code a great place for modeling planning domains.

web-page-classification-20180805


This repository contains all scripts associated with my research on topical Web-page classification. You can read the full paper describing the task, experiments, and results [here](paper.pdf).

web-speech-api-20200613


Tap the screen, then say a colour. The grammar string contains a large number of HTML colour keywords to choose from, although we've removed most of the multiple-word colours to reduce ambiguity. We did keep goldenrod, cos, well.

weblegends-20200119


### What is weblegends? weblegends is a DFHack plugin that runs a web server inside Dwarf Fortress, allowing you to view your entire world's history, artifacts, settlements, heroes, and so much more... over the internet or just locally.

webnav-20210117


WebNav is a benchmark task for evaluating an agent with abilities to understand natural language and plan on partially observed environments. In this challenging task, an agent navigates through a web site consisting of web pages and hyperlinks to find a web page in which a query appears.

webnewscrawler-1.0


WebNews Crawler is a Java application to crawl (download, fetch) resources via HTTP. You can use it as a generic crawler to download web pages from the Internet. It has a set of filters to limit and focus your crawling process. In addition, WebNews Crawler comes with a powerful HTML2XML library that can extract desired data from HTML pages and represent it in XML format. Together with the ability to parse RSS feeds, this makes the crawler useful for acquiring and cleaning web news articles.

wekan-20200428


Wekan is a completely [Open Source][open_source] and [Free software][free_software] collaborative kanban board application with the MIT license.

wenyan-20200731


This project exists thanks to all the people who contribute. [[Contribute](CONTRIBUTING.md)].

wernicke-20210410


A redaction tool for structured data. Run `wernicke` with JSON on stdin, get redacted values out. It preserves structure and (to some extent) semantics. You might want this because you have test data where the actual values are sensitive. Because the changes are consistent within the data and the overall data structure is preserved, there is a better chance your data will stay suitable for testing, even though it's been scrubbed.
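The consistent-redaction idea can be sketched as follows. This is a toy illustration, not wernicke's actual algorithm; the key and the `redacted-` prefix are invented. Each replacement is derived from a keyed hash of the original value, so identical inputs redact identically while the JSON structure is untouched.

```python
# Toy structure-preserving, consistent redaction of JSON-like data.
import hashlib
import json

def redact(value, key=b"secret"):
    if isinstance(value, dict):
        return {k: redact(v, key) for k, v in value.items()}
    if isinstance(value, list):
        return [redact(v, key) for v in value]
    if isinstance(value, str):
        # keyed hash -> deterministic pseudonym for this value
        digest = hashlib.blake2s(value.encode(), key=key, digest_size=4)
        return "redacted-" + digest.hexdigest()
    return value  # numbers, booleans, null pass through unchanged

doc = {"user": "alice", "friends": ["bob", "alice"], "age": 30}
out = redact(doc)
print(json.dumps(out))
# both occurrences of "alice" map to the same stand-in; "age" survives as-is
```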

why-3.1.1


Why3 is a platform for deductive program verification. It provides a rich language for specification and programming, called WhyML, and relies on external theorem provers, both automated and interactive, to discharge verification conditions. Why3 comes with a standard library of logical theories (integer and real arithmetic, Boolean operations, sets and maps, etc.) and basic programming data structures (arrays, queues, hash tables, etc.). A user can write WhyML programs directly and get correct-by-construction OCaml programs through an automated extraction mechanism. WhyML is also used as an intermediate language for the verification of C, Java, or Ada programs.

wikipedia-datav-1.0


This data contains Wikipedia pages for which links between pages have been annotated with a relation type, e.g. father, education, superior, etc. This data was created in collaboration between the University of Massachusetts and Google, Inc.

wnsnsmap-3.0


This is the README file for WordNet 3.0

wordnet-2.0


If you want to modify and compile the WordNet graphical interface ("wnb"/"wishwn") code for use with your own application, or if under Linux you have trouble getting "wnb" to run with the version of Tcl/Tk that is installed on your system, then you should install the Tcl/Tk 8.x libraries on your system. ("x" is a placeholder for the specific minor version number. "wnb" was built with 8.0 on some platforms and 8.2 on others.)

wordnet-prolog-20200110


* _WNprolog-3.0BF.tar.gz_ is a bugfix release of _WNprolog-3.0_. It fixes some known problems, including the transitive hyponym bug.

workflow-prolog-20191112


N is a unique identifier, P is the participant name, and MC is the message content.

wsdgate-0.05


WSDGate is an end-to-end Supervised Word Sense Disambiguation (WSD) framework developed by making use of existing resources such as GATE (General Architecture for Text Engineering) and WEKA (Waikato Environment for Knowledge Analysis). It also makes use of NSPGate, which is a GATE processing resource that acts as a wrapper around the Ngram Statistics Package (NSP).

www-flatten-20200802


WWW::Flatten is a web crawling tool for freezing pages into standalone files. I believe this works better than wget or "Save as, complete" in browsers.

xchange-20180621


XChange is a Java library providing a simple and consistent API for interacting with 60+ Bitcoin and other cryptocurrency exchanges, for trading and accessing market data.

xmc-20170804


A Logic-Programming-Based Model Checker

xsb-20140305


A 2-volume manual in pdf format is distributed with XSB binaries or source files.

xsb-340


A 2-volume manual in pdf format is distributed with XSB binaries or source files.

xtools-20191110


This library contains several development tools; not all are listed here, but the most stable and relevant ones follow:

xtux-20030207


A link to the XTux-devel mailing list is on http://xtux.sourceforge.net

xvpviewer-1.13.1


This distribution is based on the standard VNC source and includes new TightVNC-specific features and fixes, such as additional low-bandwidth optimizations, major GUI improvements, and more.

xwn-2.0.1


The XWN 2.0-1 release is based on WordNet 2.0

yancy-20200504


[Yancy](https://metacpan.org/pod/Yancy) is a simple content management system (CMS) for administering content in a database. Yancy accepts a configuration file that describes the data in the database and builds a website that lists all of the available data and allows a user to edit data, delete data, and add new data.

yodaqa-1.6


YodaQA is an open source Factoid Question Answering system that can produce answers both from databases and from text corpora, using on-the-fly information extraction. By default, open-domain question answering is performed on top of the Freebase and DBpedia knowledge bases as well as the texts of enwiki articles.

yodaqa-20191124


YodaQA is an open source Factoid Question Answering system that can produce answers both from databases and from text corpora, using on-the-fly information extraction. By default, open-domain question answering is performed on top of the Freebase and DBpedia knowledge bases as well as the texts of enwiki articles.

yolov5-20201012


This repository represents Ultralytics open-source research into future object detection methods, and incorporates our lessons learned and best practices evolved over training thousands of models on custom client datasets with our previous YOLO repository https://github.com/ultralytics/yolov3. **All code and models are under active development, and are subject to modification or deletion without notice.** Use at your own risk.

youtube-dl-20181224


# DESCRIPTION **youtube-dl** is a command-line program to download videos from YouTube.com and a few more sites. It requires the Python interpreter, version 2.6, 2.7, or 3.2+, and it is not platform specific. It should work on your Unix box, on Windows or on macOS. It is released to the public domain, which means you can modify it, redistribute it or use it however you like.

youtube-upload-20170331


_Youtube-upload_ is a command line Python script that uploads videos to Youtube using the Youtube [APIv3](https://developers.google.com/youtube/v3/). It should work on any platform (GNU/Linux, BSD, OS X, Windows, ...) that runs Python.

yuvmotionfps-1.6


This is a simple build environment taken from mjpegtools

z3-20180617


[CMake](https://cmake.org/) is a "meta build system" that reads a description of the project written in the ``CMakeLists.txt`` files and emits a build system for that project of your choice using one of CMake's "generators". This allows CMake to support many different platforms and build tools. You can run ``cmake --help`` to see the list of supported "generators" on your platform. Example generators include "UNIX Makefiles" and "Visual Studio 12 2013".

z3-20200120


[CMake](https://cmake.org/) is a "meta build system" that reads a description of the project written in the ``CMakeLists.txt`` files and emits a build system for that project of your choice using one of CMake's "generators". This allows CMake to support many different platforms and build tools. You can run ``cmake --help`` to see the list of supported "generators" on your platform. Example generators include "UNIX Makefiles" and "Visual Studio 12 2013".

z3-20210314


Z3 is a theorem prover from Microsoft Research. It is licensed under the [MIT license](LICENSE.txt).

zeros-silo-20200407


This should report that it passes all the tests. If not, something might be wrong with your configuration, or there may be some incompatibility between the script and your system. If you suspect the latter, let me know the details!

zmeventserver-20190103


A WSS (Secure WebSockets) and/or MQTT based event notification server that broadcasts new events to any authenticated listeners. (As of 0.6, it also includes a non-secure WebSocket option, if that's how you want to run it.)

zone-matrix-wake-up-20190729


I can't believe it has been 20 years already since the release of The Matrix movie.