Searching across hundreds of databases

This service exclusively searches for literature that cites resources. Please be aware that the total number of searchable documents is limited to those containing RRIDs and does not include all open-access literature.

Page 1: showing papers 1–20 of 72,799.

Trends in programming languages for neuroscience simulations.

  • Andrew P Davison et al.
  • Frontiers in neuroscience
  • 2009

Neuroscience simulators allow scientists to express models in terms of biological concepts, without having to concern themselves with low-level computational details of their implementation. The expressiveness, power and ease-of-use of the simulator interface is critical in efficiently and accurately translating ideas into a working simulation. We review long-term trends in the development of programmable simulator interfaces, and examine the benefits of moving from proprietary, domain-specific languages to modern dynamic general-purpose languages, in particular Python, which provide neuroscientists with an interactive and expressive simulation development environment and easy access to state-of-the-art general-purpose tools for scientific computing.
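
To make the trade-off concrete, here is a minimal sketch (not taken from the paper) of the kind of low-level numerical detail that a high-level, Python-based simulator interface hides from the modeller: a leaky integrate-and-fire neuron integrated with explicit Euler steps. All parameter names and values are illustrative.

```python
# Minimal sketch (not from the paper) of the low-level numerics that a
# high-level, Python-based simulator interface hides from the modeller:
# a leaky integrate-and-fire neuron driven by a constant current and
# integrated with explicit Euler steps. All names and values are illustrative.

def simulate_lif(i_ext=2.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0,
                 tau_m=20.0, r_m=10.0, dt=0.1, t_stop=200.0):
    """Return the spike times (ms) of a leaky integrate-and-fire neuron."""
    v = v_rest
    spikes = []
    for step in range(int(t_stop / dt)):
        # Membrane equation: dV/dt = (-(V - V_rest) + R_m * I_ext) / tau_m
        v += dt * (-(v - v_rest) + r_m * i_ext) / tau_m
        if v >= v_thresh:           # threshold crossing -> spike and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

print(simulate_lif()[:5])           # first few spike times
```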


A comparison of common programming languages used in bioinformatics.

  • Mathieu Fourment et al.
  • BMC bioinformatics
  • 2008

The performance of different programming languages has previously been benchmarked using abstract mathematical algorithms, but not using standard bioinformatics algorithms. We compared the memory usage and speed of execution for three standard bioinformatics methods, implemented in programs using one of six different programming languages. Programs for the Sellers algorithm, the Neighbor-Joining tree construction algorithm and an algorithm for parsing BLAST file outputs were implemented in C, C++, C#, Java, Perl and Python.
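
As an illustration of the kind of measurement described above (this is not the authors' benchmark harness), the sketch below times a dynamic-programming edit-distance computation, a close relative of the Sellers algorithm, and records its peak memory using only the standard library; the input sequences are made up.

```python
# Sketch of the kind of measurement described above (not the authors' harness):
# wall-clock time and peak memory for a dynamic-programming edit distance,
# the algorithm family to which the Sellers method belongs. Inputs are made up.
import time
import tracemalloc

def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution / match
        prev = curr
    return prev[-1]

seq_a, seq_b = "ACGT" * 250, "AGGT" * 250
tracemalloc.start()
t0 = time.perf_counter()
dist = edit_distance(seq_a, seq_b)
elapsed = time.perf_counter() - t0
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"distance={dist}  time={elapsed:.3f}s  peak_memory={peak / 1e6:.1f} MB")
```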


An evaluation framework and comparative analysis of the widely used first programming languages.

  • Muhammad Shoaib Farooq et al.
  • PloS one
  • 2014

Computer programming is the core of the computer science curriculum. Several programming languages have been used to teach the first course in computer programming, and such languages are referred to as the first programming language (FPL). The pool of programming languages has been evolving with the development of new languages, and from this pool different languages have been used as the FPL at different times. Though the selection of an appropriate FPL is very important, it has been a controversial issue in the presence of many choices. Many efforts have been made to design a good FPL; however, there is no adequate way to evaluate and compare the existing languages so as to find the most suitable FPL. In this article, we have proposed a framework to evaluate existing imperative and object-oriented languages for their suitability as an appropriate FPL. Furthermore, based on the proposed framework we have devised a customizable scoring function to compute a quantitative suitability score for a language, which reflects its conformance to the proposed framework. Lastly, we have also evaluated the conformance of the widely used FPLs to the proposed framework and computed their suitability scores.
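
A customizable scoring function of the general kind the abstract describes might look like the sketch below; the criteria, weights, and scores are invented for illustration and are not taken from the paper's framework.

```python
# Illustration only: a customizable weighted scoring function of the general
# kind the abstract describes. The criteria, weights, and scores below are
# invented for this sketch, not taken from the paper's framework.
CRITERIA_WEIGHTS = {        # relative importance, chosen by the evaluator
    "readability": 0.35,
    "simple_io": 0.20,
    "error_messages": 0.25,
    "tool_support": 0.20,
}

def suitability(scores, weights=CRITERIA_WEIGHTS):
    """Weighted suitability in [0, 1]; per-criterion scores are also in [0, 1]."""
    total_weight = sum(weights.values())
    return sum(weights[c] * scores.get(c, 0.0) for c in weights) / total_weight

print(suitability({"readability": 0.9, "simple_io": 0.8,
                   "error_messages": 0.6, "tool_support": 0.7}))
```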


Constructed languages are processed by the same brain mechanisms as natural languages.

  • Saima Malik-Moraleda et al.
  • bioRxiv: the preprint server for biology
  • 2023

What constitutes a language? Natural languages share some features with other domains: from math, to music, to gesture. However, the brain mechanisms that process linguistic input are highly specialized, showing little or no response to diverse non-linguistic tasks. Here, we examine constructed languages (conlangs) to ask whether they draw on the same neural mechanisms as natural languages, or whether they instead pattern with domains like math and logic. Using individual-subject fMRI analyses, we show that understanding conlangs recruits the same brain areas as natural language comprehension. This result holds for Esperanto (n=19 speakers), created to resemble natural languages, and for fictional conlangs (Klingon (n=10), Na'vi (n=9), High Valyrian (n=3), and Dothraki (n=3)), created to differ from natural languages, and suggests that conlangs and natural languages share critical features and that the notable differences between conlangs and natural languages are not consequential for the cognitive and neural mechanisms that they engage.


Composable languages for bioinformatics: the NYoSh experiment.

  • Manuele Simi et al.
  • PeerJ
  • 2014

Language WorkBenches (LWBs) are software engineering tools that help domain experts develop solutions to various classes of problems. Some of these tools focus on non-technical users and provide languages to help organize knowledge while other workbenches provide means to create new programming languages. A key advantage of language workbenches is that they support the seamless composition of independently developed languages. This capability is useful when developing programs that can benefit from different levels of abstraction. We reasoned that language workbenches could be useful to develop bioinformatics software solutions. In order to evaluate the potential of language workbenches in bioinformatics, we tested a prominent workbench by developing an alternative to shell scripting. To illustrate what LWBs and Language Composition can bring to bioinformatics, we report on our design and development of NYoSh (Not Your ordinary Shell). NYoSh was implemented as a collection of languages that can be composed to write programs as expressive and concise as shell scripts. This manuscript offers a concrete illustration of the advantages and current minor drawbacks of using the MPS LWB. For instance, we found that we could implement an environment-aware editor for NYoSh that can assist the programmers when developing scripts for specific execution environments. This editor further provides semantic error detection and can be compiled interactively with an automatic build and deployment system. In contrast to shell scripts, NYoSh scripts can be written in a modern development environment, supporting context-dependent intentions, and can be extended seamlessly by end-users with new abstractions and language constructs. We further illustrate language extension and composition with LWBs by presenting a tight integration of NYoSh scripts with the GobyWeb system. The NYoSh Workbench prototype, which implements a fully featured integrated development environment for NYoSh, is distributed at http://nyosh.campagnelab.org.


Consistency and Variability in Children's Word Learning Across Languages.

  • Mika Braginsky et al.
  • Open mind: discoveries in cognitive science
  • 2019

Why do children learn some words earlier than others? The order in which words are acquired can provide clues about the mechanisms of word learning. In a large-scale corpus analysis, we use parent-report data from over 32,000 children to estimate the acquisition trajectories of around 400 words in each of 10 languages, predicting them on the basis of independently derived properties of the words' linguistic environment (from corpora) and meaning (from adult judgments). We examine the consistency and variability of these predictors across languages, by lexical category, and over development. The patterning of predictors across languages is quite similar, suggesting similar processes in operation. In contrast, the patterning of predictors across different lexical categories is distinct, in line with theories that posit different factors at play in the acquisition of content words and function words. By leveraging data at a significantly larger scale than previous work, our analyses identify candidate generalizations about the processes underlying word learning across languages.


The layer-oriented approach to declarative languages for biological modeling.

  • Ivan Raikov et al.
  • PLoS computational biology
  • 2012

We present a new approach to modeling languages for computational biology, which we call the layer-oriented approach. The approach stems from the observation that many diverse biological phenomena are described using a small set of mathematical formalisms (e.g. differential equations), while at the same time different domains and subdomains of computational biology require that models are structured according to the accepted terminology and classification of that domain. Our approach uses distinct semantic layers to represent the domain-specific biological concepts and the underlying mathematical formalisms. Additional functionality can be transparently added to the language by adding more layers. This approach is specifically concerned with declarative languages, and throughout the paper we note some of the limitations inherent to declarative approaches. The layer-oriented approach is a way to specify explicitly how high-level biological modeling concepts are mapped to a computational representation, while abstracting away details of particular programming languages and simulation environments. To illustrate this process, we define an example language for describing models of ionic currents, and use a general mathematical notation for semantic transformations to show how to generate model simulation code for various simulation environments. We use the example language to describe a Purkinje neuron model and demonstrate how the layer-oriented approach can be used for solving several practical issues of computational neuroscience model development. We discuss the advantages and limitations of the approach in comparison with other modeling language efforts in the domain of computational biology and outline some principles for extensible, flexible modeling language design. We conclude by describing in detail the semantic transformations defined for our language.
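
The following sketch (in Python rather than the paper's example language) illustrates the layering idea: a domain layer declares an ionic current purely in biological terms, while a separate mathematical layer maps the declaration onto the formalism and emits code that a target simulation environment could consume. Names and values are illustrative.

```python
# Schematic sketch (in Python, not the paper's example language) of the
# layering idea: a domain layer declares an ionic current in biological terms,
# and a mathematical layer maps the declaration onto the formalism
# I = g_max * (V - E_rev) and emits code for a target simulation environment.
from dataclasses import dataclass

@dataclass
class IonicCurrent:            # domain layer: biological concepts only
    name: str
    g_max: float               # maximal conductance
    e_rev: float               # reversal potential

def to_rhs_source(current: IonicCurrent) -> str:
    # mathematical layer: generate simulator-ready source from the declaration
    return (f"def i_{current.name}(v):\n"
            f"    return {current.g_max} * (v - ({current.e_rev}))\n")

leak = IonicCurrent(name="leak", g_max=0.3, e_rev=-54.4)
print(to_rhs_source(leak))
```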


Forming social impressions from voices in native and foreign languages.

  • Cristina Baus et al.
  • Scientific reports
  • 2019

We form very rapid personality impressions about speakers on hearing a single word. This implies that the acoustical properties of the voice (e.g., pitch) are very powerful cues when forming social impressions. Here, we aimed to explore how personality impressions for brief social utterances transfer across languages and whether acoustical properties play a similar role in driving personality impressions. Additionally, we examined whether evaluations are similar in the native and a foreign language of the listener. In two experiments we asked Spanish listeners to evaluate personality traits from different instances of the Spanish word "Hola" (Experiment 1) and the English word "Hello" (Experiment 2), the native and foreign language, respectively. The results revealed that listeners across languages form very similar personality impressions irrespective of whether the voices belong to the native or the foreign language of the listener. A social voice space was summarized by two main personality traits, one emphasizing valence (e.g., trust) and the other strength (e.g., dominance). Conversely, the acoustical properties that listeners pay attention to when judging others' personality vary across languages. These results provide evidence that social voice perception contains certain elements invariant across cultures/languages, while others are modulated by the cultural/linguistic background of the listener.


Semantic framework for mapping object-oriented model to semantic web languages.

  • Petr Ježek et al.
  • Frontiers in neuroinformatics
  • 2015

The article discusses two main approaches to building semantic structures for electrophysiological metadata. These are the use of conventional data structures, repositories, and programming languages on the one hand, and the use of formal representations of ontologies known from knowledge representation, such as description logics or semantic web languages, on the other hand. Although knowledge engineering offers languages supporting richer semantic means of expression and technologically advanced approaches, conventional data structures and repositories are still popular among developers, administrators and users because of their simplicity, overall intelligibility, and lower demands on technical equipment. The choice of conventional data resources and repositories, however, raises the question of how and where to add semantics that cannot be naturally expressed using them. As one of the possible solutions, this semantics can be added into the structures of the programming language that accesses and processes the underlying data. To support this idea we introduced a software prototype that enables its users to add semantically richer expressions into Java object-oriented code. This approach does not burden users with additional demands on the programming environment since reflective Java annotations were used as an entry for these expressions. Moreover, additional semantics need not be written by the programmer directly in the code, but can be collected from non-programmers using a graphic user interface. The mapping that allows the transformation of the semantically enriched Java code into the Semantic Web language OWL was proposed and implemented in a library named the Semantic Framework. This approach was validated by the integration of the Semantic Framework in the EEG/ERP Portal and by the subsequent registration of the EEG/ERP Portal in the Neuroscience Information Framework.
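
As a rough Python analogue of the idea (the paper itself works with Java reflective annotations and OWL), the sketch below attaches ontology metadata to an ordinary class through a decorator and emits simple RDF/OWL-style triples; the vocabulary, URIs, and output format are invented for illustration.

```python
# Rough Python analogue of the idea (the paper itself uses Java reflective
# annotations and OWL): attach ontology metadata to an ordinary class with a
# decorator and emit simple RDF/OWL-style triples. The vocabulary, URIs, and
# output format here are invented for illustration.
SEMANTIC_REGISTRY = {}

def ontology_class(uri):
    """Class decorator recording a mapping from a plain class to an ontology term."""
    def wrap(cls):
        SEMANTIC_REGISTRY[cls.__name__] = uri
        return cls
    return wrap

@ontology_class("http://example.org/onto#EEGRecording")
class EegRecording:
    def __init__(self, subject_id, sampling_rate_hz):
        self.subject_id = subject_id
        self.sampling_rate_hz = sampling_rate_hz

def to_triples():
    for cls_name, uri in SEMANTIC_REGISTRY.items():
        yield (f"ex:{cls_name}", "rdf:type", "owl:Class")
        yield (f"ex:{cls_name}", "owl:equivalentClass", f"<{uri}>")

for triple in to_triples():
    print(triple)
```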


Human cortical encoding of pitch in tonal and non-tonal languages.

  • Yuanning Li et al.
  • Nature communications
  • 2021

Languages can use a common repertoire of vocal sounds to signify distinct meanings. In tonal languages, such as Mandarin Chinese, pitch contours of syllables distinguish one word from another, whereas in non-tonal languages, such as English, pitch is used to convey intonation. The neural computations underlying language specialization in speech perception are unknown. Here, we use a cross-linguistic approach to address this. Native Mandarin- and English-speaking participants each listened to both Mandarin and English speech, while neural activity was directly recorded from the non-primary auditory cortex. Both groups show language-general coding of speaker-invariant pitch at the single electrode level. At the electrode population level, we find language-specific distribution of cortical tuning parameters in Mandarin speakers only, with enhanced sensitivity to Mandarin tone categories. Our results show that speech perception relies upon a shared cortical auditory feature processing mechanism, which may be tuned to the statistics of a given language.


First and second languages differentially affect rationality when making decisions: An ERP study.

  • Linyan Liu et al.
  • Biological psychology
  • 2022

The present study examined how decision-making is affected by first (L1) and second languages (L2), emotion, and cognitive load. In a cross-task study, 30 Chinese-English bilinguals were asked to perform a lexical-semantic judgment task and a gambling task. The results showed that after lexical decisions under high cognitive load, P3 was more positive for negative words than for neutral words in L1. The reverse was the case in the L2, in which P3 was more positive for neutral words compared to negative words. Critically, under high cognitive load, as the P3 effect increased for negative words relative to neutral words, the rationality of the decisions after these negative words decreased in the L1 but increased in the L2. The results moreover revealed that the increased Granger causal strength predicted more rational choices in the L2 high-load negative condition. Altogether, the findings offer evidence of how L1s and L2s can differentially influence rational decision-making.


Programming biological models in Python using PySB.

  • Carlos F Lopez et al.
  • Molecular systems biology
  • 2013

Mathematical equations are fundamental to modeling biological networks, but as networks get large and revisions frequent, it becomes difficult to manage equations directly or to combine previously developed models. Multiple simultaneous efforts to create graphical standards, rule-based languages, and integrated software workbenches aim to simplify biological modeling but none fully meets the need for transparent, extensible, and reusable models. In this paper we describe PySB, an approach in which models are not only created using programs, they are programs. PySB draws on programmatic modeling concepts from little b and ProMot, the rule-based languages BioNetGen and Kappa and the growing library of Python numerical tools. Central to PySB is a library of macros encoding familiar biochemical actions such as binding, catalysis, and polymerization, making it possible to use a high-level, action-oriented vocabulary to construct detailed models. As Python programs, PySB models leverage tools and practices from the open-source software community, substantially advancing our ability to distribute and manage the work of testing biochemical hypotheses. We illustrate these ideas using new and previously published models of apoptosis.
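
The sketch below is not PySB's actual API; it is a stripped-down, plain-Python illustration of the core idea the abstract describes: a reusable macro that expands a high-level action such as "A binds B" into the forward and reverse rules of a rule-based model.

```python
# Not PySB's actual API: a stripped-down, plain-Python sketch of the core idea
# described above, i.e. a reusable "macro" that expands a high-level action
# ("A binds B") into the forward and reverse rules of a rule-based model.
def bind(a, b, kf, kr, rules):
    """Append reversible binding rules for species a and b to a rule list."""
    rules.append({"name": f"{a}_binds_{b}", "reactants": [a, b],
                  "products": [f"{a}:{b}"], "rate": kf})
    rules.append({"name": f"{a}_{b}_dissociates", "reactants": [f"{a}:{b}"],
                  "products": [a, b], "rate": kr})

model_rules = []
bind("Ligand", "Receptor", kf=1e-3, kr=1e-1, rules=model_rules)
bind("Receptor", "Adaptor", kf=1e-4, kr=1e-2, rules=model_rules)

for rule in model_rules:
    print(rule["name"], rule["reactants"], "->", rule["products"], "@", rule["rate"])
```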


Programming bacteria for multiplexed DNA detection.

  • Yu-Yu Cheng et al.
  • Nature communications
  • 2023

DNA is a universal and programmable signal of living organisms. Here we develop cell-based DNA sensors by engineering the naturally competent bacterium Bacillus subtilis (B. subtilis) to detect specific DNA sequences in the environment. The DNA sensor strains can identify diverse bacterial species including major human pathogens with high specificity. Multiplexed detection of genomic DNA from different species in complex samples can be achieved by coupling the sensing mechanism to orthogonal fluorescent reporters. We also demonstrate that the DNA sensors can detect the presence of species in the complex samples without requiring DNA extraction. The modularity of the living cell-based DNA-sensing mechanism and simple detection procedure could enable programmable DNA sensing for a wide range of applications.


Inferring signaling pathways with probabilistic programming.

  • David Merrell et al.
  • Bioinformatics (Oxford, England)
  • 2020

Cells regulate themselves via dizzyingly complex biochemical processes called signaling pathways. These are usually depicted as a network, where nodes represent proteins and edges indicate their influence on each other. In order to understand diseases and therapies at the cellular level, it is crucial to have an accurate understanding of the signaling pathways at work. Since signaling pathways can be modified by disease, the ability to infer signaling pathways from condition- or patient-specific data is highly valuable. A variety of techniques exist for inferring signaling pathways. We build on past works that formulate signaling pathway inference as a Dynamic Bayesian Network structure estimation problem on phosphoproteomic time course data. We take a Bayesian approach, using Markov Chain Monte Carlo to estimate a posterior distribution over possible Dynamic Bayesian Network structures. Our primary contributions are (i) a novel proposal distribution that efficiently samples sparse graphs and (ii) the relaxation of common restrictive modeling assumptions.
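
A generic sketch of MCMC over network structures is shown below; it is not the authors' proposal distribution or scoring function. It toggles one directed edge per step and accepts or rejects the move with the usual Metropolis ratio, with a stub score standing in for the data likelihood and sparsity prior.

```python
# Generic sketch of MCMC over network structures (not the authors' proposal
# distribution or scoring function): toggle one directed edge per step and
# accept or reject with the usual Metropolis ratio. The score is a stub that
# stands in for a data likelihood plus a sparsity prior.
import math
import random

def score(edges):
    return -0.5 * len(edges)        # stub: favors sparse graphs

def mcmc_structures(nodes, n_steps=10_000, seed=0):
    rng = random.Random(seed)
    edges, current = set(), score(set())
    samples = []
    for _ in range(n_steps):
        u, v = rng.sample(nodes, 2)
        proposal = set(edges)
        proposal.symmetric_difference_update({(u, v)})   # add or remove (u, v)
        proposed = score(proposal)
        if rng.random() < math.exp(min(0.0, proposed - current)):
            edges, current = proposal, proposed          # accept the move
        samples.append(frozenset(edges))                 # record current structure
    return samples

samples = mcmc_structures(["Raf", "Mek", "Erk"], n_steps=1000)
print(len(set(samples)), "distinct structures visited")
```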


Minimizing bugs in cognitive neuroscience programming.

  • Vadim Axelrod
  • Frontiers in psychology
  • 2014

No abstract available


Development of a System for Storing and Executing Bio-Signal Analysis Algorithms Developed in Different Languages.

  • Moon-Il Joo et al.
  • Healthcare (Basel, Switzerland)
  • 2021

With the development of mobile and wearable devices with biosensors, various healthcare services have recently been introduced into our lives. A significant issue that arises is supporting a smart interface among bio-signal analysis algorithms developed by different vendors in different languages. Despite its importance for convenient and effective development, this issue has been nearly unexplored. This paper focuses on a smart interface format for bio-signal data processing and mining algorithms implemented in different languages. We designed and implemented a software structure in which analysis algorithms implemented in different languages and tools appear to work in one common environment, overcoming the barriers between development languages. By presenting our design in this paper, we hope there will be many more opportunities for service-oriented development utilizing bio-signals in the future.
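
One common way to realize such a shared environment (this is only an illustration, not the paper's system) is a thin wrapper that launches each algorithm, whatever its implementation language, as a subprocess and exchanges data through an agreed JSON contract; the command names and registry below are hypothetical.

```python
# Illustration only (not the paper's system): one common way to let analysis
# algorithms written in different languages run in one environment is a thin
# wrapper that launches each algorithm as a subprocess and exchanges JSON over
# stdin/stdout. The commands and registry below are hypothetical.
import json
import subprocess

def run_algorithm(command, samples, params):
    """Run an external bio-signal algorithm (any language) via a JSON contract."""
    request = json.dumps({"samples": samples, "params": params})
    result = subprocess.run(command, input=request, text=True,
                            capture_output=True, check=True)
    return json.loads(result.stdout)      # e.g. {"features": {...}}

# Hypothetical registered algorithms; each only has to speak the JSON contract.
REGISTRY = {
    "hrv_java":   ["java", "-jar", "hrv-analyzer.jar"],
    "qrs_python": ["python", "qrs_detect.py"],
}

# output = run_algorithm(REGISTRY["qrs_python"], samples=[0.12, 0.43], params={"fs": 250})
```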


Differential brain-to-brain entrainment while speaking and listening in native and foreign languages.

  • Alejandro Pérez et al.
  • Cortex; a journal devoted to the study of the nervous system and behavior
  • 2019

The study explores interbrain neural coupling when interlocutors engage in a conversation, whether in their native or a nonnative language. To this end, electroencephalographic hyperscanning was used to study brain-to-brain phase synchronization during a two-person turn-taking verbal exchange with no visual contact, in either a native or a foreign language context. Results show that the coupling strength between brain signals is increased in both the native-language and the foreign-language context, specifically in the alpha frequency band. A difference in brain-to-speech entrainment between native and foreign languages is also shown. These results indicate that between-brain similarities in the timing of neural activations and their spatial distributions change depending on the language code used. We argue that factors like linguistic alignment, joint attention and brain entrainment to speech operate with a language-idiosyncratic neural configuration, modulating the alignment of neural activity between speakers and listeners. Other possible factors leading to the differential interbrain synchronization patterns, as well as the potential features of brain-to-brain entrainment as a mechanism, are briefly discussed. We conclude that linguistic context should be considered when addressing interpersonal communication. The findings open doors to quantifying linguistic interactions.


Programming ultrasensitive threshold response through chemomechanical instability.

  • Young-Joo Kim et al.
  • Nature communications
  • 2021

The ultrasensitive threshold response is ubiquitous in biochemical systems. In contrast, achieving ultrasensitivity in synthetic molecular structures in a controllable way is challenging. Here, we propose a chemomechanical approach inspired by Michell's instability to realize it. A sudden reconfiguration of topologically constrained rings results when the torsional stress inside reaches a critical value. We use DNA origami to construct molecular rings and then DNA intercalators to induce torsional stress. Michell's instability is achieved successfully when the critical concentration of intercalators is applied. Both the critical point and sensitivity of this ultrasensitive threshold reconfiguration can be controlled by rationally designing the cross-sectional shape and mechanical properties of DNA rings.


Language Familiarity and Proficiency Leads to Differential Cortical Processing During Translation Between Distantly Related Languages.

  • Katsumasa Shinozuka et al.
  • Frontiers in human neuroscience
  • 2021

In the midst of globalization, English is regarded as an international language, or lingua franca, but learning it as a second language (L2) remains difficult for speakers of other languages. This is especially true for speakers of languages distantly related to English, such as Japanese. In this sense, exploring the neural basis for translation between the first language (L1) and L2 is of great interest. Relatively many previous studies have revealed brain activation patterns during translation between an L1 and English as L2. These studies, which focused on language translation with close or moderate linguistic distance (LD), have suggested that the Broca area (BA 44/45) and the dorsolateral prefrontal cortex (DLPFC; BA 46) may play an important role in translation. However, the neural mechanism of language translation between Japanese and English, which have a large LD, has not been clarified. Thus, we used functional near-infrared spectroscopy (fNIRS) to investigate the brain activation patterns during word translation between Japanese and English. We also assessed the effects of translation direction and word familiarity. All participants' first language was Japanese and they were learning English; their English proficiency was advanced or elementary. We selected English and Japanese words as stimuli based on their familiarity for Japanese people. Our results showed that the brain activation patterns during word translation differed largely depending on English proficiency. The advanced group elicited greater activation in the left prefrontal cortex around Broca's area while translating words with low familiarity, but no activation was observed while translating words with high familiarity. On the other hand, the elementary group evoked greater activation in the left temporal area, including the superior temporal gyrus (STG), irrespective of word familiarity. These results suggest that different cognitive processes could be involved in word translation depending on English proficiency in Japanese learners of English. The differences in brain activation patterns between the advanced and elementary groups may reflect differences in cognitive load depending on the level of automatization of language processing.


Characteristics of mathematical modeling languages that facilitate model reuse in systems biology: a software engineering perspective.

  • Christopher Schölzel et al.
  • NPJ systems biology and applications
  • 2021

Reuse of mathematical models becomes increasingly important in systems biology as research moves toward large, multi-scale models composed of heterogeneous subcomponents. Currently, many models are not easily reusable due to inflexible or confusing code, inappropriate languages, or insufficient documentation. Best practice suggestions rarely cover such low-level design aspects. This gap could be filled by software engineering, which addresses those same issues for software reuse. We show that languages can facilitate reusability by being modular, human-readable, hybrid (i.e., supporting multiple formalisms), open, declarative, and by supporting the graphical representation of models. Modelers should not only use such a language, but be aware of the features that make it desirable and know how to apply them effectively. For this reason, we compare existing suitable languages in detail and demonstrate their benefits for a modular model of the human cardiac conduction system written in Modelica.


  1. SciCrunch.org Resources

    Welcome to the FDI Lab - SciCrunch.org Resources search. From here you can search through a compilation of resources used by FDI Lab - SciCrunch.org and see how data is organized within our community.

  2. Navigation

    You are currently on the Community Resources tab, looking through the categories and sources that FDI Lab - SciCrunch.org has compiled. You can navigate through those categories from here or switch to a different tab to run your search against. Each tab gives a different perspective on the data.

  3. Logging in and Registering

    If you have an account on FDI Lab - SciCrunch.org, you can log in from here to get additional features such as Collections, Saved Searches, and Resource management.

  4. Searching

    Here is the search term being executed; you can type in anything you want to search for. Some tips to help with searching (example query strings appear in the sketch after this tutorial):

    1. Use quotes around phrases you want to match exactly
    2. You can manually add AND and OR between terms to change how the words are combined in the search
    3. You can add "-" to a term to exclude results that contain it (e.g., Cerebellum -CA1)
    4. You can add "+" to a term to require that it appear in the data
    5. Using autocomplete specifies which branch of our semantics you wish to search and can help refine your search
  5. Save Your Search

    From here you can save any search you perform for quick access later.

  6. Query Expansion

    We recognized your search term and included synonyms and inferred terms alongside it to help you find the data you are looking for.

  7. Collections

    If you are logged into FDI Lab - SciCrunch.org you can add data records to your collections to create custom spreadsheets across multiple sources of data.

  8. Facets

    Here are the facets that you can filter your papers by.

  9. Options

    From here we'll present any options for the literature, such as exporting your current results.

  10. Further Questions

    If you have any further questions, please check out our FAQs Page to ask questions and see our tutorials.
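
Below are example query strings for the search tips listed in step 4 of the tutorial above. Only the query syntax (quoted phrases, AND/OR, "-", and "+") comes from the tutorial; the endpoint URL in the sketch is a hypothetical illustration.

```python
# Example query strings for the search tips in step 4 of the tutorial above.
# Only the query syntax (quotes, AND/OR, "-", "+") comes from the tutorial;
# the endpoint URL below is a hypothetical illustration.
from urllib.parse import quote_plus

queries = [
    '"programming languages"',              # exact phrase
    'Python AND neuroscience',              # explicit boolean operators
    'bioinformatics OR "systems biology"',
    'Cerebellum -CA1',                      # exclude results containing CA1
    '+RRID simulation',                     # require the term RRID
]

BASE_URL = "https://scicrunch.org/search?q="   # hypothetical endpoint
for q in queries:
    print(BASE_URL + quote_plus(q))
```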

Publications Per Year (chart of publication counts by year)