FRDCSA | minor codebases | NLU
Homepage

[Project image]
NLU

Architecture Diagram: GIF
Code: GitHub

Jump to: Project Description | Parent Description | Capabilities

Project Description

Interprets text in the appropriate context.

Capabilities

  • paperless-office export roundtrip to nlu-mf
  • Think about combining our nlu with E2C
  • nlu see https://www.textrazor.com/rules
  • Have audience check our mail and process it with nlu for potential action items.
  • Make C-css, if called multiple times in a row, cycle between the most common types of extractors, or be context sensitive according to nlu.
  • Create - similar to the nlu thing - a series of font-lock filters for different information about cyc-mode buffers, and then have them toggleable/switchable with keybindings. So, for instance, you could have predicates in one font, individuals in another, and collections in another, or something like that, but sound.
  • Use nlu to develop context sensitive commands for emacs buffers.
  • Create a feature for FreeKBS2 and nlu which analyzes a Cyc Constant the first time it is seen, and records metadata about it, such as when it was first seen, etc.
  • nlu should be sensitive to page breaks and conventions concerning the beginnings of books.
  • Consider using overlays instead of properties for nlu.
  • This may be risky, but maybe we should look into a market based allocation for attention for nlu/thinker.
  • Have nlu study Rosetta-Code
  • When we have iaec and nlu done, use it to analyze source code like a human would, simulating things based on their knowledge of the language. Like what happens if you use a given statement at the beginning of the program, etc.
  • With iaec and nlu etc, set up taint analysis so that we can determine which information we can distribute and such.
  • nlu iaec et al should consider the use of FramerD
  • Apply nlu to vagrant traces, have it debug things. Use representations in prologcyc.
  • Analyze 'Do tell!' with nlu.
  • Enter all of Michael Iltis' documents and software into the KBFS system, and begin annotating it using nlu and such, in order to derive our representation and also to have the information required. Write a document mentioning all the different layers. Fix the nlu code to work with KBFS. etc.
  • Note how the quote might induce nlu to think that it is a quote, and then the script for quotations might suggest that they have an associated author. That could prompt the question of who is the author of the quote. Then, with that context, it would interpret the comment BPS, and make the abduction, maybe BPS is the author.
  • Just as in how I would define things that I hadn't seen very often by remembering all the bindings or examples of their use, nlu/iaec/sayer2 should remember all invocations of text / data in context. So for instance, if it saw a particular word, it would track all usages of the word, and all data sources like WordNet mentioning it, and cross reference them.
  • AM doesn't understand details about the reasons, extend it in that way - use nlu to analyze reasons, or something like that.
  • Look into using Wise with nlu
  • Integrate nlu with Clear for reading a given buffer to the user, marking that it has been read, etc.
  • Use nlu and academician etc to have a declare-region-read function.
  • If I write an Inform7 parser, use nlu to write it in.
  • Develop a context mechanism for assessing meaning, and add it to nlu. For instance, the expression "Chris might know someone" had a lot of context: at the time, we were looking for someone to do tech support, things like that. All of these assertions could be made in a KB and used as contextual knowledge for the statement. Build this manually at first, then automatically.
  • nlu should use an algorithm which takes all the objects in sayer that are plain text instances and matches them in strings. For instance if the following were sayer data points: 'the' and 'there', and the text read "there's a lot of stuff", it would assert the matches for 'the' and 'there'. Obviously this needs to be constrained somehow, as there would often be individual words, so there might be an interestingness or relevance constraint - or maybe some kind of procedural semantics.
  • Apply nlu/KBFS to the web, not just local files.
  • Make KBFS2 very robust, add a lot of features. Then use it to classify different files for release. Have it inspect files using a decision agent, and auto-redact parts as needed. Make as part of a separate codebase to MyFRDCSA. Have it have multiple modes about reckoning about files. Then start asserting a lot of stuff, rewrite academician and nlu to use it, and so on.
  • Integrate nlu with an agent for a system that understands text and can act on it. Have it use rte to see if the text implies certain things, and then input to BDI.
  • Think of nlu in terms of perception.
  • Get nlu's sayer working correctly, it seems to always have the same entry id.
  • Develop font highlighting system for nlu knowledge and other FreeKBS2 knowledge.
  • Looks like the name "suppositional-reasoner" may have shifted in meaning over time. I think originally it was supposed to be used for proposing and testing hypotheses, much like nlu and thinker. But now it appears to have become something to suppose moves in a search. Weird.
  • Develop the ImportExport guess function into nlu somehow.
  • Add to academician the ability to make notes specific to individual pages or sentences, use nlu type constraints.
  • Add crypto-signing to knowledge from sayer and nlu, etc.
  • Use Deep Learning with nlu.
  • Add to academician / nlu / argument-system / etc the ability to argue with a text and store the argument.
  • When trying to generate unix commands using the ai, perhaps we can start by first decomposing them with nlu. One way to decompose is to embed the results of a parse into the nlu logic; if we add grammars for lots of things, that should take care of it.
  • For setanta-agent, one use case is to load it onto a VM, and give it user or even root access, and let it learn as it goes, moving files, etc. For this use case, one could think of hooking up nlu to it to understand the response of shell commands and things like that.
  • Input emacs symbols etc into the context of the Emacs nlu system.
  • Make nlu for webkit etc.
  • Rework the system for nlu etc. to use kbfs tags for representing the contents of a buffer. So, for instance, it will know the buffer's title, etc. These will be properties specific to individual kinds of files; for instance, an nnvirtual:article buffer has a summary, so store that in kbfs and associate it with the annotations.
  • Maybe make nlu etc. into minor modes so that several can be loaded at once.
  • Use nlu to represent the specific context of statements.
  • nlu should consult http://web.mit.edu/mecheng/pml/standards.htm
  • Develop nlu to augment text like some of the other web-based systems I've written, in fact, adapt those to work with nlu. So we should be able to click on the definition of the current item.
  • Fix this when doing C-c n t o s: basic-save-buffer-2: Opening output file: file name too long, /var/lib/myfrdcsa/codebases/minor/nlu/data/ghost-buffers/I_ve_written_several_Emacs_modes_for_various_obscure_or_in_house_tool_languages_When_starting_my_first_mode_I_found_that_there_weren_t_a_lot_of_lucid_explanations_of_how_to_write_a_mode_intended_for_language_editing_Even_Writing_GNU_Emacs_Extensions_ISBN_1565922611_alternate_search_doesn_t_cover_topics_like_syntax_highlighting_and_indentation_And_generic_mode_distributed_with_recent_versions_of_Emacs_doesn_t_handle_indentation_-7b3174ce1a04a0a3f86435130bd4ea16
  • Add to nlu a "language" recognition system that can determine if particular regions of text are a given programming language, pseudo code, or what not.
  • Build the sayer/nlu/KBFS system that asserts information about files and explores all the possible things to assert about them.
  • nlu mode enables comments in code using annotations
  • nlu/KBFS/sayer should practice as in the field of deduction.
  • nlu/KBFS/sayer should say: "Consider this file", and then begin making notes about it.
  • Use nlu to process our Email through audience.
  • Have the ability for nlu to reason with multiple possible meanings simultaneously. For instance, suppose there was a word on one page ending with a -, and then a different word on the next page, but it was unclear whether these were one conjoined word or simply two words separated by a dash. nlu should be able to answer questions and such regarding all or some meanings of these.
  • Have the option to query the commands that can be run on the given entries in the freekbs2-stack. For instance, if they haven't been processed with nlu/sayer/KBFS
  • Offer the ability to correct automatic annotations by nlu/sayer/KBFS
  • Have 't q' in nlu not strip faces and other properties which aren't related to the tags.
  • Have a flag in nlu/Capability::TextAnalysis to toggle preventing text from being analyzed by cloud-based systems.
  • Look into having nlu also handle text properties like boldface etc.
  • Add something to nlu to assert the region that was processed with the nlu-analyze-region command.
  • Add something to nlu to prevent it from applying a tag that already exists. Have it have some kind of exception thrown. Look into throwing exceptions in Emacs.
  • Integrate GATE/nlu
  • We need to read math texts using nlu, or find libraries of mathematical knowledge
  • Use MaxTract with nlu
  • Use nlu on math papers.
  • auto-packager should use data enrichment of package orig.tar.gz, debian/* and included patches via sayer/thinker, nlu and kbfs as input features to various machine learning systems, in order to determine how to automatically package something for Debian. Brilliant, though difficult.
  • For this diff capability for nlu, it should for instance allow us to index the text rules on official sites and related sites regarding Debian packaging (the information which will be translated into flora2 or equivalent rules for packaging) and compare across versions, such as when a new rule is added.
  • Add a diff capability to nlu, track different versions of the same article using KBFS or equivalent.
  • Finally create critic using KBFS and nlu.
  • Create commands between nlu and KBFS that allow you to assert knowledge about the text contents of files.
  • Use academician in conjunction with nlu to represent in a cryptographically strong fashion the various text "snippets" and documents and data files.
  • Add the ability for nlu to use sayer information in its output.
  • Use nlu for machine learning of features from text.
  • Have the ability to relate snippets from the same source, so for instance we can use nlu to analyze the whole document or just a part, and still have a relation that indicates that that is just that part of that document. Consider how this affects things where context is important such as dialogue acts.
  • Integrate version control (esp git) with nlu-Ghost.
  • Develop command processing with nlu, using Flora2 and Computational Semantics.
  • Integrate KBFS with nlu, so as to assert things about various revisions in git or what not.
  • Obtain the nlu system TIL2 and Bridge from the paper Textual Inference Logic: Take Two
  • nlu should use Epilog
  • nlu should inspect the arguments of KBS knowledge
  • Combine nlu with a WWW::Mechanize browser for intelligently browsing the web. Combine a BDI or similar engine to enable goal directed traversal.
  • Fix the nlu system to put better names on ghosted files because Emacs bookmarks does not see them.
  • Set up nlu navigation to only display a message if it differs from the last message, to save on how many messages are displayed
  • nlu: have it set buffer unwritable and only make it writable to make modifications to fonts, etc.
  • nlu: have it record the source of annotations, whether manual (and reviewer) or by what software program
  • nlu: Make it so that named entities stand out.
  • Fix the problem with running nlu-reset-buffer where it complains about: let: Symbol's value as variable is void: mark
  • nlu - Add the ability to take a given isa constraint and use that to make informed choices about what things can be done with that entity. For instance, a #$Software can be retrieved and a #$ResearchPaper can be read.
  • Go over the pse stuff and try to iron it out so that we can use PSE for reasoning about goals as well, even in an application such as nlu.
  • Fix handling of things like this with nlu: • FIPA JACK: An extension to the JACK platform to support the FIPA agent Communications Language.^[4]
  • Reset all of the sayer data on nlu because it's all messed up, figure out where it's going wrong and right the wrong.
  • nlu should be able to tell if something should be capitalized.
  • Create an NL query interface for FreeKBS using NLGen and nlu
  • Classification server depends an awful lot on terminology management and nlu.
  • A dictionary of neologisms and colloquialisms might have good application in nlu?
  • The essential aspect of nlu for Cyc is verifying that the senses of the terms in the NL are the same ones as those in the KB, so there has to be the ability to verify that sense. Mapping WordNet or similar senses to Cyc would then be a good start at literal translation, as any NL system which can map to WordNet, SENSUS or Omega targets could then be translated to Cyc. There is a SENSUS-to-Cyc translation, I believe; that would be a start.
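
A few of the items above describe algorithms concretely enough to prototype. For the sayer plain-text matching idea (the 'the'/'there' example), here is a minimal sketch in Python; the function name and the minimum-length "interestingness" constraint are assumptions for illustration, not part of the actual sayer/FreeKBS2 interface:

```python
def find_term_matches(text, terms, min_len=4):
    """Find every occurrence of each sayer plain-text term in `text`.

    `min_len` is a crude stand-in for an interestingness or relevance
    constraint: very short terms like 'the' match almost everywhere,
    so they are skipped by default.
    """
    matches = []
    for term in terms:
        if len(term) < min_len:
            continue  # too common to be an interesting assertion
        start = text.find(term)
        while start != -1:
            matches.append((term, start, start + len(term)))
            start = text.find(term, start + 1)
    return matches

# The example from the item above: with no length constraint, both
# 'the' and 'there' match at the start of "there's a lot of stuff".
print(find_term_matches("there's a lot of stuff", ["the", "there"], min_len=1))
# [('the', 0, 3), ('there', 0, 5)]
```

With the default `min_len=4`, only `'there'` survives the filter, which shows how even a trivial constraint prunes the flood of matches; a real version would replace it with relevance scoring or procedural semantics as the item suggests.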
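
For the "language recognition" item (deciding whether a region of text is a programming language, pseudocode, or prose), a crude surface-feature heuristic could serve as a baseline. This is a sketch under my own assumptions (the feature set and threshold are arbitrary), not a design the project specifies:

```python
import re

def looks_like_code(region):
    """Score a text region on surface features that are common in
    programming languages and rare in ordinary prose."""
    lines = [l for l in region.splitlines() if l.strip()]
    if not lines:
        return False
    score = 0
    for line in lines:
        if re.search(r'[;{}()=]', line):
            score += 1   # punctuation typical of code
        if line != line.lstrip():
            score += 1   # significant indentation
        if re.match(r'\s*(def|if|for|while|return|import)\b', line):
            score += 2   # common keywords
    # Threshold chosen arbitrarily; a real system would learn it.
    return score / len(lines) >= 1.0

print(looks_like_code("def f(x):\n    return x + 1"))   # True
print(looks_like_code("This is ordinary English prose."))  # False
```

A real implementation would add per-language keyword tables so the same scoring could also guess *which* language a region is in, as the item asks.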
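
The item about reasoning with multiple possible meanings simultaneously (the hyphenated word split across a page break) can also be made concrete: keep every candidate reading alive and answer queries quantified over them. The function names here are hypothetical, purely to illustrate the idea:

```python
def readings(word_end, word_start):
    """A page ends with 'word_end-' and the next page begins with
    'word_start'. Return both candidate readings: a line-break hyphen
    joining one word, or a genuine dash-separated compound."""
    return [
        [word_end + word_start],         # one conjoined word
        [word_end + "-" + word_start],   # two parts kept as written
    ]

def holds_in_some_reading(pred, word_end, word_start):
    """True if the query predicate holds under at least one reading;
    an 'all readings' variant would swap any() for all()."""
    return any(any(pred(w) for w in r)
               for r in readings(word_end, word_start))

# 'under-' at the foot of one page, 'stand' at the top of the next:
print(holds_in_some_reading(lambda w: w == "understand", "under", "stand"))
# True
```

This is the skeleton of ambiguity-preserving question answering: nlu would carry the whole set of readings forward and only commit when later context rules one out.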


This page is part of the FWeb package.
Last updated Sat Oct 26 16:42:40 EDT 2019 .