Blog

We collect the key news feed from free RSS services,  the news is updated every 3 hours, 24/7.

Psychology

This metabolic brain boost revives memory in Alzheimer’s mice

PsycPORT™: Psychology Newswire An experimental cancer drug appeared to re-energize the brains of mice that had a form of Alzheimer’s — and even restore their ability to learn and remember. Go to Source 10/09/2024 - 06:02 / Twitter: @hoffeldtcom
Read More
Business News

Malaysia central banker sees rate hold in 2024 with growth at 5%

The Straits Times Business News Inflation won’t exceed 3 per cent, according to Bank Negara Malaysia’s deputy governor. Go to Source 10/09/2024 - 06:02 / Twitter: @hoffeldtcom
Read More
Psychology

Livestream Event: Suicide Prevention in Health Care Settings

NIMH News Feed In recognition of National Suicide Prevention Month in September, the National Institute of Mental Health (NIMH) and the Substance Abuse and Mental Health Services Administration (SAMHSA) are hosting a livestream event on suicide prevention in health care settings. Go to Source 31/08/2024 - 00:50 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Psychology

Office for Disparities Research and Workforce Diversity Webinar Series: Cultural Strengths as Protection: Multimodal Findings Using a Community-Engaged Process

NIMH News Feed This webinar will present a conceptual framework for investigating the impact of cultural factors on mental health within American Indian communities. It will also present emerging findings from community-engaged research in this field. Go to Source 31/08/2024 - 00:50 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Psychology

Affirmations can seem cringe. Should you do them anyway?

PsycPORT™: Psychology Newswire Affirmations can change our behaviors and feelings because they’re a form of positive reinforcement. Go to Source 31/08/2024 - 00:50 / Twitter: @hoffeldtcom
Read More
Psychology

What is ketamine and is it effective?

PsycPORT™: Psychology Newswire How ketamine therapy is used to treat depression, what side effects it can have and more. Go to Source 31/08/2024 - 00:50 / Twitter: @hoffeldtcom
Read More
Psychology

How to cultivate the ‘erotic thread’ that helps you stay connected to your romantic partner

PsycPORT™: Psychology Newswire Moments of feeling desired are key to many people’s sexual fantasies, research suggests, and touch can be crucial for couples to maintain a connection. Go to Source 31/08/2024 - 00:48 / Twitter: @hoffeldtcom
Read More
Psychology

Cellphone bans in some states' public schools take effect as experts point out pros and cons

PsycPORT™: Psychology Newswire Arizona, California and Virginia are some states taking action on student cellphone use during the school day. Go to Source 31/08/2024 - 00:48 / Twitter: @hoffeldtcom
Read More
Psychology

What it’s like to have seasonal depression during summer

PsycPORT™: Psychology Newswire For some people longer, sunnier days cause summertime SAD and they find themselves hiding and feeling down. Go to Source 31/08/2024 - 00:48 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Accelerate Generative AI Inference with NVIDIA NIM Microservices on Amazon SageMaker

AWS Machine Learning Blog This post is co-written with Eliuth Triana, Abhishek Sawarkar, Jiahong Liu, Kshitiz Gupta, JR Morgan and Deepika Padmanabhan from NVIDIA.  At the 2024 NVIDIA GTC conference, we announced support for NVIDIA NIM Inference Microservices in Amazon SageMaker Inference. This integration allows you to deploy industry-leading large language models (LLMs) on SageMaker and optimize their performance and cost. The optimized prebuilt containers enable the deployment of state-of-the-art LLMs in minutes instead of days, facilitating their seamless integration into enterprise-grade AI applications. NIM is built on technologies like NVIDIA TensorRT, NVIDIA TensorRT-LLM, and vLLM. NIM is engineered to enable straightforward, secure, and performant AI inferencing on NVIDIA GPU-accelerated instances hosted by SageMaker. This allows developers to take advantage of the power of these advanced models using SageMaker APIs and just a few lines of code, accelerating the deployment of cutting-edge AI capabilities within their applications. NIM, part of the NVIDIA AI Enterprise software platform listed on AWS Marketplace, is a set of inference microservices that bring the power of state-of-the-art LLMs to your applications, providing natural language processing (NLP) and understanding capabilities, whether you’re developing chatbots, summarizing documents, or implementing other NLP-powered applications. You can use pre-built NVIDIA containers to host popular LLMs that are optimized for specific NVIDIA GPUs for quick deployment. Companies like Amgen, A-Alpha Bio, Agilent, and Hippocratic AI are among those using NVIDIA AI on AWS to accelerate computational biology, genomics analysis, and conversational AI. In this post, we provide a walkthrough of how customers can use generative artificial intelligence (AI) models and LLMs using NVIDIA NIM integration with SageMaker. We demonstrate how this integration works and how you can deploy these state-of-the-art models on SageMaker, optimizing their performance and cost. You can use the optimized pre-built NIM containers to deploy LLMs and integrate them...
Read More
Artificial Intelligence

Celebrating the final AWS DeepRacer League championship and road ahead

AWS Machine Learning Blog The AWS DeepRacer League is the world’s first autonomous racing league, open to everyone and powered by machine learning (ML). AWS DeepRacer brings builders together from around the world, creating a community where you learn ML hands-on through friendly autonomous racing competitions. As we celebrate the achievements of over 560,000 participants from more than 150 countries who sharpened their skills through the AWS DeepRacer League over the last 6 years, we also prepare to close this chapter with a final season that serves as both a victory lap and a launching point for what’s next in the world of AWS DeepRacer. The legacy of AWS DeepRacer The AWS DeepRacer community is the heartbeat of the league, where enthusiasts and league legends help foster learning for a global network of AWS DeepRacer participants at any stage of their ML journey. When we launched AWS DeepRacer in 2018, we set out to make ML model training concepts more accessible. By removing common hurdles associated with the preparation of training and evaluating ML models, AWS DeepRacer gives builders a fun way to focus on fundamental training, evaluation, and model performance concepts, all without any prior experience. The impact of racing in the league goes far beyond the podium and prizes, with many participants using their AWS DeepRacer experience and community support to advance their careers. “Embracing the challenges of AWS DeepRacer has not only sharpened my technical skills but has also opened doors to new roles, where innovation and agility are key. Every lap on the track is a step closer to mastering the tools that drive modern solutions, making me ready for the future of technology.” – AWS DeepRacer League veteran Daryl Jezierski, Lead Site Reliability Engineer at The Walt Disney Company. Each year, hundreds of AWS customers...
Read More
Artificial Intelligence

Provide a personalized experience for news readers using Amazon Personalize and Amazon Titan Text Embeddings on Amazon Bedrock

AWS Machine Learning Blog News publishers want to provide a personalized and informative experience to their readers, but the short shelf life of news articles can make this quite difficult. In news publishing, articles typically have peak readership within the same day of publication. Additionally, news publishers frequently publish new articles and want to show these articles to interested readers as quickly as possible. This poses challenges for interaction-based recommender system methodologies such as collaborative filtering and the deep learning-based approaches used in Amazon Personalize, a managed service that can learn user preferences from their past behavior and quickly adjust recommendations to account for changing user behavior in near real time. News publishers typically don’t have the budget or the staff to experiment with in-house algorithms, and need a fully managed solution. In this post, we demonstrate how to provide high-quality recommendations for articles with short shelf lives by using text embeddings in Amazon Bedrock. Amazon Bedrock a fully managed service that offers a choice of high-performing foundation models (FMs) from leading artificial intelligence (AI) companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. Embeddings are a mathematical representation of a piece of information such as a text or an image. Specifically, they are a vector or ordered list of numbers. This representation helps capture the meaning of the image or text in such a way that you can use it to determine how similar images or text are to each other by taking their distance from each other in the embedding space. For our post, we use the Amazon Titan Text Embeddings model. Solution overview By combining the benefits of Amazon Titan Text...
Read More
Artificial Intelligence

Secure RAG applications using prompt engineering on Amazon Bedrock

AWS Machine Learning Blog The proliferation of large language models (LLMs) in enterprise IT environments presents new challenges and opportunities in security, responsible artificial intelligence (AI), privacy, and prompt engineering. The risks associated with LLM use, such as biased outputs, privacy breaches, and security vulnerabilities, must be mitigated. To address these challenges, organizations must proactively ensure that their use of LLMs aligns with the broader principles of responsible AI and that they prioritize security and privacy. When organizations work with LLMs, they should define objectives and implement measures to enhance the security of their LLM deployments, as they do with applicable regulatory compliance. This involves deploying robust authentication mechanisms, encryption protocols, and optimized prompt designs to identify and counteract prompt injection, prompt leaking, and jailbreaking attempts, which can help increase the reliability of AI-generated outputs as it pertains to security. In this post, we discuss existing prompt-level threats and outline several security guardrails for mitigating prompt-level threats. For our example, we work with Anthropic Claude on Amazon Bedrock, implementing prompt templates that allow us to enforce guardrails against common security threats such as prompt injection. These templates are compatible with and can be modified for other LLMs. Introduction to LLMs and Retrieval Augmented Generation LLMs are trained on an unprecedented scale, with some of the largest models comprising billions of parameters and ingesting terabytes of textual data from diverse sources. This massive scale allows LLMs to develop a rich and nuanced understanding of language, capturing subtle nuances, idioms, and contextual cues that were previously challenging for AI systems. To use these models, we can turn to services such as Amazon Bedrock, which provides access to a variety of foundation models from Amazon and third-party providers including Anthropic, Cohere, Meta, and others. You can use Amazon Bedrock to experiment with state-of-the-art...
Read More
Artificial Intelligence

Get the most from Amazon Titan Text Premier

AWS Machine Learning Blog Amazon Titan Text Premier, the latest addition to the Amazon Titan family of large language models (LLMs), is now generally available in Amazon Bedrock. Amazon Titan Text Premier is an advanced, high performance, and cost-effective LLM engineered to deliver superior performance for enterprise-grade text generation applications, including optimized performance for Retrieval Augmented Generation (RAG) and agents. The model is built from the ground up following safe, secure, and trustworthy responsible AI practices and excels in delivering exceptional generative artificial intelligence (AI) text capabilities at scale. Exclusive to Amazon Bedrock, Amazon Titan Text Premier supports a wide range of text-related tasks, including summarization, text generation, classification, question-answering, and information extraction. This new model offers optimized performance for key features such as RAG on Knowledge Bases for Amazon Bedrock and function calling on Agents for Amazon Bedrock. Such integrations enable advanced applications like building interactive AI assistants that use your APIs and interact with your documents. Why choose Amazon Titan Text Premier? As of today, the Amazon Titan family of models for text generation allows for context windows from 4K to 32K and a rich set of capabilities around free text and code generation, API orchestration, RAG, and Agent based applications. An overview of these Amazon Titan models is shown in the following table. Model Availability Context window Languages Functionality Customized fine-tuning Amazon Titan Text Lite GA 4K English Code, rich text Yes Amazon Titan Text Express GA (English) 8K Multilingual (100+ languages) Code, rich text, API orchestration Yes Amazon Titan Text Premier GA 32K English Enterprise text generation applications, RAG, agents Yes (preview) Amazon Titan Text Premier is an LLM designed for enterprise-grade applications. It is optimized for performance and cost-effectiveness, with a maximum context length of 32,000 tokens. Amazon Titan Text Premier enables the development of...
Read More
Artificial Intelligence

GenASL: Generative AI-powered American Sign Language avatars

AWS Machine Learning Blog In today’s world, effective communication is essential for fostering inclusivity and breaking down barriers. However, for individuals who rely on visual communication methods like American Sign Language (ASL), traditional communication tools often fall short. That’s where GenASL comes in. GenASL is a generative artificial intelligence (AI)-powered solution that translates speech or text into expressive ASL avatar animations, bridging the gap between spoken and written language and sign language. The rise of foundation models (FMs), and the fascinating world of generative AI that we live in, is incredibly exciting and opens doors to imagine and build what wasn’t previously possible. AWS makes it possible for organizations of all sizes and developers of all skill levels to build and scale generative AI applications with security, privacy, and responsible AI. In this post, we dive into the architecture and implementation details of GenASL, which uses AWS generative AI capabilities to create human-like ASL avatar videos. Solution overview The GenASL solution comprises several AWS services working together to enable seamless translation from speech or text to ASL avatar animations. Users can input audio, video, or text into GenASL, which generates an ASL avatar video that interprets the provided data. The solution uses AWS AI and machine learning (AI/ML) services, including Amazon Transcribe, Amazon SageMaker, Amazon Bedrock, and FMs. The following diagram shows a high-level overview of the architecture. The workflow includes the following steps: An Amazon Elastic Compute Cloud (Amazon EC2) instance initiates a batch process to create ASL avatars from a video dataset consisting of over 8,000 poses using RTMPose, a real-time multi-person pose estimation toolkit based on MMPose. AWS Amplify distributes the GenASL web app consisting of HTML, JavaScript, and CSS to users’ mobile devices. An Amazon Cognito identity pool grants temporary access to the Amazon Simple Storage...
Read More
Artificial Intelligence

AWS empowers sales teams using generative AI solution built on Amazon Bedrock

AWS Machine Learning Blog At AWS, we are transforming our seller and customer journeys by using generative artificial intelligence (AI) across the sales lifecycle. We envision a future where AI seamlessly integrates into our teams’ workflows, automating repetitive tasks, providing intelligent recommendations, and freeing up time for more strategic, high-value interactions. Our field organization includes customer-facing teams (account managers, solutions architects, specialists) and internal support functions (sales operations). Prospecting, opportunity progression, and customer engagement present exciting opportunities to utilize generative AI, using historical data, to drive efficiency and effectiveness. Personalized content will be generated at every step, and collaboration within account teams will be seamless with a complete, up-to-date view of the customer. Our internal AI sales assistant, powered by Amazon Q Business, will be available across every modality and seamlessly integrate with systems such as internal knowledge bases, customer relationship management (CRM), and more. It will be able to answer questions, generate content, and facilitate bidirectional interactions, all while continuously using internal AWS and external data to deliver timely, personalized insights. Through this series of posts, we share our generative AI journey and use cases, detailing the architecture, AWS services used, lessons learned, and the impact of these solutions on our teams and customers. In this first post, we explore Account Summaries, one of our initial production use cases built on Amazon Bedrock. Account Summaries equips our teams to be better prepared for customer engagements. It combines information from various sources into comprehensive, on-demand summaries available in our CRM or proactively delivered based on upcoming meetings. From the period of September 2023 to March 2024, sellers leveraging GenAI Account Summaries saw a 4.9% increase in value of opportunities created. The business opportunity Data often resides across multiple internal systems, such as CRM and financial tools, and external sources, making...
Read More
Psychology

Office for Disparities Research and Workforce Diversity’s Disability, Equity, and Mental Health Research Webinar Series: Framework for Understanding Structural Ableism in Health Care

NIMH News Feed In this webinar, Dielle Lundberg, M.P.H., and Jessica Chen, Ph.D., will introduce a conceptual framework outlining pathways through which structural ableism in public health and health care may contribute to health inequities for “people who are disabled, neurodivergent, chronically ill, mad, and/or living with mental illness” (Lundberg & Chen, 2023). Go to Source 27/08/2024 - 06:29 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Build private and secure enterprise generative AI applications with Amazon Q Business using IAM Federation

AWS Machine Learning Blog Amazon Q Business is a conversational assistant powered by generative artificial intelligence (AI) that enhances workforce productivity by answering questions and completing tasks based on information in your enterprise systems, which each user is authorized to access. In an earlier post, we discussed how you can build private and secure enterprise generative AI applications with Amazon Q Business and AWS IAM Identity Center. If you want to use Amazon Q Business to build enterprise generative AI applications, and have yet to adopt organization-wide use of AWS IAM Identity Center, you can use Amazon Q Business IAM Federation to directly manage user access to Amazon Q Business applications from your enterprise identity provider (IdP), such as Okta or Ping Identity. Amazon Q Business IAM Federation uses Federation with IAM and doesn’t require the use of IAM Identity Center. AWS recommends using AWS Identity Center if you have a large number of users in order to achieve a seamless user access management experience for multiple Amazon Q Business applications across many AWS accounts in AWS Organizations. You can use federated groups to define access control, and a user is charged only one time for their highest tier of Amazon Q Business subscription. Although Amazon Q Business IAM Federation enables you to build private and secure generative AI applications, without requiring the use of IAM Identity Center, it is relatively constrained with no support for federated groups, and limits the ability to charge a user only one time for their highest tier of Amazon Q Business subscription to Amazon Q Business applications sharing SAML identity provider or OIDC identity provider in a single AWS account. This post shows how you can use Amazon Q Business IAM Federation for user access management of your Amazon Q Business applications. Solution overview...
Read More
Your Self-Story Is a Lie
Psychology

Your Self-Story Is a Lie

Psychology Today: The Latest The stories we tell ourselves aren't entirely true—but that doesn't make them harmful. Go to Source 20/08/2024 - 08:55 /Ross Gormley Twitter: @hoffeldtcom
Read More
Dating Apps Steer You in the Wrong Direction
Psychology

Dating Apps Steer You in the Wrong Direction

Psychology Today: The Latest Personal Perspective: Are you "swiping left" on all the best people? Here's why you might be missing out on meeting more quality partners. Go to Source 20/08/2024 - 08:55 /Lise Deguire Psy.D. Twitter: @hoffeldtcom
Read More
Recent Research Encourages Therapists to Talk About Consent
Psychology

Recent Research Encourages Therapists to Talk About Consent

Psychology Today: The Latest With choking on the rise as a sexual trend with young people, how can therapists teach clients how to verbalize what they desire and remain safe in their intimate relationships? Go to Source 20/08/2024 - 08:55 /Sari Cooper, CST, LCSW Twitter: @hoffeldtcom
Read More
How to Deal With Loneliness When You’re Single
Psychology

How to Deal With Loneliness When You’re Single

Psychology Today: The Latest Personal Perspective: Love and relationships are only one part of your life, not your entire life. Go to Source 20/08/2024 - 08:54 /John Kim LMFT Twitter: @hoffeldtcom
Read More
13 Ways to Be the Best Man You Can Be
Psychology

13 Ways to Be the Best Man You Can Be

Psychology Today: The Latest Masculinity does not have to be toxic. Here are some valuable guidelines, collected from male therapy clients over the years, about how to do it right. Go to Source 20/08/2024 - 08:54 /David B. Wexler Ph.D. Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Cohere Rerank 3 Nimble now generally available on Amazon SageMaker JumpStart

AWS Machine Learning Blog The Cohere Rerank 3 Nimble foundation model (FM) is now generally available in Amazon SageMaker JumpStart. This model is the newest FM in Cohere’s Rerank model series, built to enhance enterprise search and Retrieval Augmented Generation (RAG) systems. In this post, we discuss the benefits and capabilities of this new model with some examples. Overview of Cohere Rerank models Cohere’s Rerank family of models are designed to enhance existing enterprise search systems and RAG systems. Rerank models improve search accuracy over both keyword-based and embedding-based search systems. Cohere Rerank 3 is designed to reorder documents retrieved by initial search algorithms based on their relevance to a given query. A reranking model, also known as a cross-encoder, is a type of model that, given a query and document pair, will output a similarity score. For FMs, words, sentences, or entire documents are often encoded as dense vectors in a semantic space. By calculating the cosine of the angle between these vectors, you can quantify their semantic similarity and output as a single similarity score. You can use this score to reorder the documents by relevance to your query. Cohere Rerank 3 Nimble is the newest model from Cohere’s Rerank family of models, designed to improve speed and efficiency from its predecessor Cohere Rerank 3. According to Cohere’s benchmark tests including BEIR (Benchmarking IR) for accuracy and internal benchmarking datasets, Cohere Rerank 3 Nimble maintains high accuracy while being approximately 3–5 times faster than Cohere Rerank 3. The speed improvement is designed for enterprises looking to enhance their search capabilities without sacrificing performance. The following diagram represents the two-stage retrieval of a RAG pipeline and illustrates where Cohere Rerank 3 Nimble is incorporated into the search pipeline. In the first stage of retrieval in the RAG architecture, a...
Read More
Psychology

Information Session: NIMH Intramural Research Program Training Opportunities (August)

NIMH News Feed Undergraduates, graduate students, medical students, and postdoctoral fellows are invited to learn about training opportunities available in the NIMH Intramural Research Program. Go to Source 20/08/2024 - 08:54 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
How to Navigate Breakup Brain: 5 Tips for Getting Through a Breakup
Psychology

How to Navigate Breakup Brain: 5 Tips for Getting Through a Breakup

Psychology Today: The Latest "Breakup brain" can present challenges like mental fog, anxiety, and emotional turmoil. Here's how to best support yourself through the aftermath of a split. Go to Source 16/08/2024 - 21:32 /Britt Frank MSW, LSCSW, SEP Twitter: @hoffeldtcom
Read More
Think Before You Click: Navigating the Digital Health Maze
Psychology

Think Before You Click: Navigating the Digital Health Maze

Psychology Today: The Latest The internet offers instant access to health info, but beware of misleading data. Prioritize reputable sources and remember that social media advice isn't always reliable. Go to Source 16/08/2024 - 21:32 /Georgia Witkin Ph.D. Twitter: @hoffeldtcom
Read More
Privilege in Caregiving
Psychology

Privilege in Caregiving

Psychology Today: The Latest Personal Perspective: We like to believe that everyone has the same set of choices in life, but that is not true, as my dad's situation helped me realize. Go to Source 16/08/2024 - 21:32 /Kristi Rendahl DPA Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Perform generative AI-powered data prep and no-code ML over any size of data using Amazon SageMaker Canvas

AWS Machine Learning Blog Amazon SageMaker Canvas now empowers enterprises to harness the full potential of their data by enabling support of petabyte-scale datasets. Starting today, you can interactively prepare large datasets, create end-to-end data flows, and invoke automated machine learning (AutoML) experiments on petabytes of data—a substantial leap from the previous 5 GB limit. With over 50 connectors, an intuitive Chat for data prep interface, and petabyte support, SageMaker Canvas provides a scalable, low-code/no-code (LCNC) ML solution for handling real-world, enterprise use cases. Organizations often struggle to extract meaningful insights and value from their ever-growing volume of data. You need data engineering expertise and time to develop the proper scripts and pipelines to wrangle, clean, and transform data. Then you must experiment with numerous models and hyperparameters requiring domain expertise. Afterward, you need to manage complex clusters to process and train your ML models over these large-scale datasets. Starting today, you can prepare your petabyte-scale data and explore many ML models with AutoML by chat and with a few clicks. In this post, we show you how you can complete all these steps with the new integration in SageMaker Canvas with Amazon EMR Serverless without writing code. Solution overview For this post, we use a sample dataset of a 33 GB CSV file containing flight purchase transactions from Expedia between April 16, 2022, and October 5, 2022. We use the features to predict the base fare of a ticket based on the flight date, distance, seat type, and others. In the following sections, we demonstrate how to import and prepare the data, optionally export the data, create a model, and run inference, all in SageMaker Canvas. Prerequisites You can follow along by completing the following prerequisites: Set up SageMaker Canvas. Download the dataset from Kaggle and upload it to...
Read More
How Patience Is the Virtue of Remaining in Difficulty
Psychology

How Patience Is the Virtue of Remaining in Difficulty

Psychology Today: The Latest Patience is active, not passive, and without it, we struggle to endure. Go to Source 16/08/2024 - 21:31 /Sabrina B. Little, Ph.D. Twitter: @hoffeldtcom
Read More
Covid deaths in US lower than earlier peaks amid summer surge
Covid-19

Covid deaths in US lower than earlier peaks amid summer surge

Coronavirus | The Guardian Covid not as deadly in 2023 as it was in prior years, falling from the fourth to 10th leading cause of deathCovid continues surging across the US, but deaths are lower than their peaks earlier in the pandemic due in large part to vaccinations and immunity. Yet the country is still struggling to find its footing on vaccination as the virus settles into a pattern of twice-annual surges.Covid was not as deadly in 2023 as it was in prior years, falling from the fourth to the 10th leading cause of death, according to a study by the US Centers for Disease Control and Prevention (CDC). Deaths overall fell by 6% from 2022 to 2023. Continue reading... Go to Source 16/08/2024 - 21:31 /Melody Schreiber Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Delight your customers with great conversational experiences via QnABot, a generative AI chatbot

AWS Machine Learning Blog QnABot on AWS (an AWS Solution) now provides access to Amazon Bedrock foundational models (FMs) and Knowledge Bases for Amazon Bedrock, a fully managed end-to-end Retrieval Augmented Generation (RAG) workflow. You can now provide contextual information from your private data sources that can be used to create rich, contextual, conversational experiences. The advent of generative artificial intelligence (AI) provides organizations unique opportunities to digitally transform customer experiences. Enterprises with contact center operations are looking to improve customer satisfaction by providing self-service, conversational, interactive chat bots that have natural language understanding (NLU). Enterprises want to automate frequently asked transactional questions, provide a friendly conversational interface, and improve operational efficiency. In turn, customers can ask a variety of questions and receive accurate answers powered by generative AI. In this post, we discuss how to use QnABot on AWS to deploy a fully functional chatbot integrated with other AWS services, and delight your customers with human agent like conversational experiences. Solution overview QnABot on AWS is an AWS Solution that enterprises can use to enable a multi-channel, multi-language chatbot with NLU to improve end customer experiences. QnABot provides a flexible, tiered conversational interface empowering enterprises to meet customers where they are and provide accurate responses. Some responses need to be exact (for example, regulated industries like healthcare or capital markets), some responses need to be searched from large, indexed data sources and cited, and some answers need to be generated on the fly, conversationally, based on semantic context. With QnABot on AWS, you can achieve all of the above by deploying the solution using an AWS CloudFormation template, with no coding required. The solution is extensible, uses AWS AI and machine learning (ML) services, and integrates with multiple channels such as voice, web, and text (SMS). QnABot on AWS...
Read More
Expectations: Are We Asking Too Much of Others?
Psychology

Expectations: Are We Asking Too Much of Others?

Psychology Today: The Latest Aligning expectations with individual abilities fosters healthier relationships and reduces stress. Emphasizing strengths, setting realistic goals, and open communication are key. Go to Source 16/08/2024 - 21:31 /Cara Gardenswartz Ph.D. Twitter: @hoffeldtcom
Read More
UK’s National Crime Agency says it is ‘not scared’ of PPE Medpro’s lawyers
Covid-19

UK’s National Crime Agency says it is ‘not scared’ of PPE Medpro’s lawyers

Coronavirus | The Guardian Agency says long-running investigation into company run by Tory peer Michelle Mone’s husband will be concluded as quickly as possibleThe National Crime Agency has said it is “not scared” of lawyers acting for PPE Medpro, the company led by the Conservative peer Michelle Mone’s husband, Doug Barrowman, and is progressing an investigation into it “as fast as we can”.The NCA is conducting a long-running investigation into suspected criminal offences committed in the procurement by PPE Medpro of £203m of government contracts to supply personal protective equipment during the Covid pandemic. Continue reading... Go to Source 16/08/2024 - 21:31 /Emily Dugan Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Introducing document-level sync reports: Enhanced data sync visibility in Amazon Q Business

AWS Machine Learning Blog Amazon Q Business is a fully managed, generative artificial intelligence (AI)-powered assistant that helps enterprises unlock the value of their data and knowledge. With Amazon Q, you can quickly find answers to questions, generate summaries and content, and complete tasks by using the information and expertise stored across your company’s various data sources and enterprise systems. At the core of this capability are native data source connectors that seamlessly integrate and index content from multiple repositories into a unified index. This enables the Amazon Q large language model (LLM) to provide accurate, well-written answers by drawing from the consolidated data and information. The data source connectors act as a bridge, synchronizing content from disparate systems like Salesforce, Jira, and SharePoint into a centralized index that powers the natural language understanding and generative abilities of Amazon Q. Customers appreciate that Amazon Q Business securely connects to over 40 data sources. While using their data source, they want better visibility into the document processing lifecycle during data source sync jobs. They want to know the status of each document they attempted to crawl and index, as well as the ability to troubleshoot why certain documents were not returned with the expected answers. Additionally, they want access to metadata, timestamps, and access control lists (ACLs) for the indexed documents. We are pleased to announce a new feature now available in Amazon Q Business that significantly improves visibility into data source sync operations. The latest release introduces a comprehensive document-level report incorporated into the sync history, providing administrators with granular indexing status, metadata, and ACL details for every document processed during a data source sync job. This enhancement to sync job observability enables administrators to quickly investigate and resolve ingestion or access issues encountered while setting up an Amazon Q...
Read More
Artificial Intelligence

Derive generative AI-powered insights from ServiceNow with Amazon Q Business

AWS Machine Learning Blog Effective customer support, project management, and knowledge management are critical aspects of providing efficient customer relationship management. ServiceNow is a platform for incident tracking, knowledge management, and project management functions for software projects and has become an indispensable part of many organizations’ workflows to ensure success of the customer and the product. However, extracting valuable insights from the vast amount of data stored in ServiceNow often requires manual effort and building specialized tooling. Users such as support engineers, project managers, and product managers need to be able to ask questions about an incident or a customer, or get answers from knowledge articles in order to provide excellent customer support. Organizations use ServiceNow to manage workflows, such as IT services, ticketing systems, configuration management, and infrastructure changes across IT systems. Generative artificial intelligence (AI) provides the ability to take relevant information from a data source such as ServiceNow and provide well-constructed answers back to the user. Building a generative AI-based conversational application integrated with relevant data sources requires an enterprise to invest time, money, and people. First, you need to build connectors to the data sources. Next, you need to index this data to make it available for a Retrieval Augmented Generation (RAG) approach, where relevant passages are delivered with high accuracy to a large language model (LLM). To do this, you need to select an index that provides the capabilities to index the content for semantic and vector search, build the infrastructure to retrieve and rank the answers, and build a feature-rich web application. Additionally, you need to hire and staff a large team to build, maintain, and manage such a system. Amazon Q Business is a fully managed generative AI-powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on...
Read More
Hypervigilance Around Other People’s Emotions and Needs
Psychology

Hypervigilance Around Other People’s Emotions and Needs

Psychology Today: The Latest Those with a history of people-pleasing behavior often have shaky boundaries where they ignore or downplay their own needs in order to put others’ needs ahead of their own. Go to Source 14/08/2024 - 07:34 /Annie Tanasugarn Ph.D., CCTSA Twitter: @hoffeldtcom
Read More
Guiding Your Teen Through the First Year of High School
Psychology

Guiding Your Teen Through the First Year of High School

Psychology Today: The Latest Are you ready for the rollercoaster that is your teen's first year of high school? Learn the tools you'll need to make this journey smoother for everyone involved. Go to Source 14/08/2024 - 07:34 /Hannah Leib LCSW Twitter: @hoffeldtcom
Read More
6 Practices for Our Rootless Lives
Psychology

6 Practices for Our Rootless Lives

Psychology Today: The Latest Many of us feel rootless and disconnected. Imaginative and playful spiritual-ish practices can help change that. Go to Source 14/08/2024 - 07:34 /Keith S. Cox Ph.D. Twitter: @hoffeldtcom
Read More
Can Financial Psychology Help Me?
Psychology

Can Financial Psychology Help Me?

Psychology Today: The Latest Many of us are currently facing financial stress. What can we do about it? Can therapy help? Go to Source 14/08/2024 - 07:34 /Courtney Crisp Psy.D. Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Intelligent healthcare forms analysis with Amazon Bedrock

AWS Machine Learning Blog Generative artificial intelligence (AI) provides an opportunity for improvements in healthcare by combining and analyzing structured and unstructured data across previously disconnected silos. Generative AI can help raise the bar on efficiency and effectiveness across the full scope of healthcare delivery. The healthcare industry generates and collects a significant amount of unstructured textual data, including clinical documentation such as patient information, medical history, and test results, as well as non-clinical documentation like administrative records. This unstructured data can impact the efficiency and productivity of clinical services, because it’s often found in various paper-based forms that can be difficult to manage and process. Streamlining the handling of this information is crucial for healthcare providers to improve patient care and optimize their operations. Handling large volumes of data, extracting unstructured data from multiple paper forms or images, and comparing it with the standard or reference forms can be a long and arduous process, prone to errors and inefficiencies. However, advancements in generative AI solutions have introduced automated approaches that offer a more efficient and reliable solution for comparing multiple documents. Amazon Bedrock is a fully managed service that makes foundation models (FMs) from leading AI startups and Amazon available through an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case. Amazon Bedrock offers a serverless experience, so you can get started quickly, privately customize FMs with your own data, and quickly integrate and deploy them into your applications using the AWS tools without having to manage the infrastructure. In this post, we explore using the Anthropic Claude 3 on Amazon Bedrock large language model (LLM). Amazon Bedrock provides access to several LLMs, such as Anthropic Claude 3, which can be used to generate semi-structured...
Read More
The Mental Health Benefits of Sports for All Teens
Psychology

The Mental Health Benefits of Sports for All Teens

Psychology Today: The Latest Engaging in too many sports and activities can lead to anxiety and depression in teens while choosing the ideal number can boost self-worth and reduce depression. Go to Source 14/08/2024 - 07:34 /Kimberly Key Ph.D. Twitter: @hoffeldtcom
Read More
Psychology

Office for Disparities Research and Workforce Diversity Webinar Series: Understanding Stigma and Discrimination as Drivers of Mental Health Disparities for Diverse, Rural, LGBTQ+ Communities

NIMH News Feed This webinar will present the goals and procedures of the Rural Engagement and Approaches For LGBTQ+ Mental Health (REALM) study, which is developing a longitudinal cohort of diverse LGBTQ+ adults residing in rural and small metropolitan communities across the United States. Go to Source 14/08/2024 - 07:34 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Harness the power of AI and ML using Splunk and Amazon SageMaker Canvas

AWS Machine Learning Blog As the scale and complexity of data handled by organizations increase, traditional rules-based approaches to analyzing the data alone are no longer viable. Instead, organizations are increasingly looking to take advantage of transformative technologies like machine learning (ML) and artificial intelligence (AI) to deliver innovative products, improve outcomes, and gain operational efficiencies at scale. Furthermore, the democratization of AI and ML through AWS and AWS Partner solutions is accelerating its adoption across all industries. For example, a health-tech company may be looking to improve patient care by predicting the probability that an elderly patient may become hospitalized by analyzing both clinical and non-clinical data. This will allow them to intervene early, personalize the delivery of care, and make the most efficient use of existing resources, such as hospital bed capacity and nursing staff. AWS offers the broadest and deepest set of AI and ML services and supporting infrastructure, such as Amazon SageMaker and Amazon Bedrock, to help you at every stage of your AI/ML adoption journey, including adoption of generative AI. Splunk, an AWS Partner, offers a unified security and observability platform built for speed and scale. As the diversity and volume of data increases, it is vital to understand how they can be harnessed at scale by using complementary capabilities of the two platforms. For organizations looking beyond the use of out-of-the-box Splunk AI/ML features, this post explores how Amazon SageMaker Canvas, a no-code ML development service, can be used in conjunction with data collected in Splunk to drive actionable insights. We also demonstrate how to use the generative AI capabilities of SageMaker Canvas to speed up your data exploration and help you build better ML models. Use case overview In this example, a health-tech company offering remote patient monitoring is collecting operational data from...
Read More
Artificial Intelligence

How Deltek uses Amazon Bedrock for question and answering on government solicitation documents

AWS Machine Learning Blog This post is co-written by Kevin Plexico and Shakun Vohra from Deltek. Question and answering (Q&A) using documents is a commonly used application in various use cases like customer support chatbots, legal research assistants, and healthcare advisors. Retrieval Augmented Generation (RAG) has emerged as a leading method for using the power of large language models (LLMs) to interact with documents in natural language. This post provides an overview of a custom solution developed by the AWS Generative AI Innovation Center (GenAIIC) for Deltek, a globally recognized standard for project-based businesses in both government contracting and professional services. Deltek serves over 30,000 clients with industry-specific software and information solutions. In this collaboration, the AWS GenAIIC team created a RAG-based solution for Deltek to enable Q&A on single and multiple government solicitation documents. The solution uses AWS services including Amazon Textract, Amazon OpenSearch Service, and Amazon Bedrock. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) and LLMs from leading artificial intelligence (AI) companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. Deltek is continuously working on enhancing this solution to better align it with their specific requirements, such as supporting file formats beyond PDF and implementing more cost-effective approaches for their data ingestion pipeline. What is RAG? RAG is a process that optimizes the output of LLMs by allowing them to reference authoritative knowledge bases outside of their training data sources before generating a response. This approach addresses some of the challenges associated with LLMs, such as presenting false, outdated, or generic information, or creating inaccurate responses due to terminology confusion. RAG enables LLMs to generate...
Read More
Psychology

Women who spend time on TikTok feel less satisfied with their bodies

PsycPORT™: Psychology Newswire Study says participants who were exposed to pro-anorexia content felt worse about themselves. Go to Source 14/08/2024 - 07:33 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Cisco achieves 50% latency improvement using Amazon SageMaker Inference faster autoscaling feature

AWS Machine Learning Blog This post is co-authored with Travis Mehlinger and Karthik Raghunathan from Cisco. Webex by Cisco is a leading provider of cloud-based collaboration solutions which includes video meetings, calling, messaging, events, polling, asynchronous video and customer experience solutions like contact center and purpose-built collaboration devices. Webex’s focus on delivering inclusive collaboration experiences fuels our innovation, which leverages AI and Machine Learning, to remove the barriers of geography, language, personality, and familiarity with technology. Its solutions are underpinned with security and privacy by design. Webex works with the world’s leading business and productivity apps – including AWS. Cisco’s Webex AI (WxAI) team plays a crucial role in enhancing these products with AI-driven features and functionalities, leveraging LLMs to improve user productivity and experiences. In the past year, the team has increasingly focused on building artificial intelligence (AI) capabilities powered by large language models (LLMs) to improve productivity and experience for users. Notably, the team’s work extends to Webex Contact Center, a cloud-based omni-channel contact center solution that empowers organizations to deliver exceptional customer experiences. By integrating LLMs, WxAI team enables advanced capabilities such as intelligent virtual assistants, natural language processing, and sentiment analysis, allowing Webex Contact Center to provide more personalized and efficient customer support. However, as these LLM models grew to contain hundreds of gigabytes of data, WxAI team faced challenges in efficiently allocating resources and starting applications with the embedded models. To optimize its AI/ML infrastructure, Cisco migrated its LLMs to Amazon SageMaker Inference, improving speed, scalability, and price-performance. This blog post highlights how Cisco implemented faster autoscaling release reference. For more details on Cisco’s Use Cases, Solution & Benefits see How Cisco accelerated the use of generative AI with Amazon SageMaker Inference. In this post, we will discuss the following: Overview of Cisco’s use-case and architecture Introduce new faster...
Read More
Psychology

Combat brain fatigue

PsycPORT™: Psychology Newswire A new study finds that people see a downside to "putting on their thinking cap". Go to Source 14/08/2024 - 07:33 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

How Cisco accelerated the use of generative AI with Amazon SageMaker Inference

AWS Machine Learning Blog This post is co-authored with Travis Mehlinger and Karthik Raghunathan from Cisco. Webex by Cisco is a leading provider of cloud-based collaboration solutions, including video meetings, calling, messaging, events, polling, asynchronous video, and customer experience solutions like contact center and purpose-built collaboration devices. Webex’s focus on delivering inclusive collaboration experiences fuels their innovation, which uses artificial intelligence (AI) and machine learning (ML), to remove the barriers of geography, language, personality, and familiarity with technology. Its solutions are underpinned with security and privacy by design. Webex works with the world’s leading business and productivity apps—including AWS. Cisco’s Webex AI (WxAI) team plays a crucial role in enhancing these products with AI-driven features and functionalities, using large language models (LLMs) to improve user productivity and experiences. In the past year, the team has increasingly focused on building AI capabilities powered by LLMs to improve productivity and experience for users. Notably, the team’s work extends to Webex Contact Center, a cloud-based omni-channel contact center solution that empowers organizations to deliver exceptional customer experiences. By integrating LLMs, the WxAI team enables advanced capabilities such as intelligent virtual assistants, natural language processing (NLP), and sentiment analysis, allowing Webex Contact Center to provide more personalized and efficient customer support. However, as these LLM models grew to contain hundreds of gigabytes of data, the WxAI team faced challenges in efficiently allocating resources and starting applications with the embedded models. To optimize its AI/ML infrastructure, Cisco migrated its LLMs to Amazon SageMaker Inference, improving speed, scalability, and price-performance. This post highlights how Cisco implemented new functionalities and migrated existing workloads to Amazon SageMaker inference components for their industry-specific contact center use cases. By integrating generative AI, they can now analyze call transcripts to better understand customer pain points and improve agent productivity. Cisco has...
Read More
Psychology

People experiencing colorism say health system fails them

PsycPORT™: Psychology Newswire Clinicians from various ethnic groups have recently begun to draw a direct line between colorism and poor health. Go to Source 14/08/2024 - 07:33 / Twitter: @hoffeldtcom
Read More
Psychology

People are surprisingly reluctant to reach out to old friends

PsycPORT™: Psychology Newswire Researchers found that the biggest barrier to reaching out was fear the friend wouldn't want to hear from them. Go to Source 14/08/2024 - 07:32 / Twitter: @hoffeldtcom
Read More
Business News

Keppel unit enters JV to manage cooling sector in Thailand

The Straits Times Business News Since launching energy-as-a-service business in late 2021, Keppel has signed several deals. Go to Source 14/08/2024 - 07:32 / Twitter: @hoffeldtcom
Read More
Long Covid health issues persist in those hospitalised early in pandemic, study finds
Covid-19

Long Covid health issues persist in those hospitalised early in pandemic, study finds

Coronavirus | The Guardian Substantial proportion have cognitive and mental health problems years after infection, with some symptoms worseningHealth problems and brain fog can persist for years in people hospitalised by Covid early in the pandemic, with some patients developing more severe and even new symptoms after 12 months, researchers say.They found that while many people with long Covid improved over time, a substantial proportion still had cognitive problems two to three years later and saw symptoms of depression, anxiety and fatigue worsen rather than subside. Continue reading... Go to Source 14/08/2024 - 07:32 /Ian Sample Science editor Twitter: @hoffeldtcom
Read More
Why Your Therapist Won’t Just Tell You What to Do
Psychology

Why Your Therapist Won’t Just Tell You What to Do

Psychology Today: The Latest Therapists and coaches purposefully avoid giving advice as no one can fully know your lived experience. Therapy is about your identified needs, not your therapist's opinions. Go to Source 26/07/2024 - 11:37 /Julie Radico Psy.D. ABPP Twitter: @hoffeldtcom
Read More
Lessons From the Astronauts
Psychology

Lessons From the Astronauts

Psychology Today: The Latest The Overview Effect, experienced by astronauts viewing Earth from space, can transform mental health by fostering a sense of interconnectedness and compassion. Go to Source 26/07/2024 - 11:37 /Sarah Abedi M.D. Twitter: @hoffeldtcom
Read More
A New Questionnaire Measures Gaslighting Victimization
Psychology

A New Questionnaire Measures Gaslighting Victimization

Psychology Today: The Latest Researchers have developed a new questionnaire to measure gaslighting in a relationship. Go to Source 26/07/2024 - 11:37 /Gwendolyn Seidman Ph.D. Twitter: @hoffeldtcom
Read More
Lying While Telling the Truth
Psychology

Lying While Telling the Truth

Psychology Today: The Latest Do you sometimes feel that people are misleading you with facts? Learn how to optimize your relationships by raising the standard for honesty and trust. Go to Source 26/07/2024 - 11:37 /Daniel S. Lobel Ph.D. Twitter: @hoffeldtcom
Read More
Crippling Realities for Today’s Kids: How Caring Adults Can Help
Psychology

Crippling Realities for Today’s Kids: How Caring Adults Can Help

Psychology Today: The Latest If we’re not careful, 21st-century culture can damage our kids. Consider these intentional action steps. Go to Source 26/07/2024 - 11:37 /Tim Elmore Twitter: @hoffeldtcom
Read More
Psychology

Office for Disparities Research and Workforce Diversity’s Disability, Equity, and Mental Health Research Webinar Series: Transforming Mental Health Disability Research Through Lived Experience Leadership and Co-Production

NIMH News Feed This webinar will introduce a range of approaches to meaningfully integrate individuals with lived experiences of psychiatric disabilities into mental health research. Go to Source 26/07/2024 - 11:37 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Amazon SageMaker inference launches faster auto scaling for generative AI models

AWS Machine Learning Blog Today, we are excited to announce a new capability in Amazon SageMaker inference that can help you reduce the time it takes for your generative artificial intelligence (AI) models to scale automatically. You can now use sub-minute metrics and significantly reduce overall scaling latency for generative AI models. With this enhancement, you can improve the responsiveness of your generative AI applications as demand fluctuates. The rise of foundation models (FMs) and large language models (LLMs) has brought new challenges to generative AI inference deployment. These advanced models often take seconds to process, while sometimes handling only a limited number of concurrent requests. This creates a critical need for rapid detection and auto scaling to maintain business continuity. Organizations implementing generative AI seek comprehensive solutions that address multiple concerns: reducing infrastructure costs, minimizing latency, and maximizing throughput to meet the demands of these sophisticated models. However, they prefer to focus on solving business problems rather than doing the undifferentiated heavy lifting to build complex inference platforms from the ground up. SageMaker provides industry-leading capabilities to address these inference challenges. It offers endpoints for generative AI inference that reduce FM deployment costs by 50% on average and latency by 20% on average by optimizing the use of accelerators. The SageMaker inference optimization toolkit, a fully managed model optimization feature in SageMaker, can deliver up to two times higher throughput while reducing costs by approximately 50% for generative AI performance on SageMaker. Besides optimization, SageMaker inference also provides streaming support for LLMs, enabling you to stream tokens in real time rather than waiting for the entire response. This allows for lower perceived latency and more responsive generative AI experiences, which are crucial for use cases like conversational AI assistants. Lastly, SageMaker inference provides the ability to deploy a single...
Read More
Artificial Intelligence

Find answers accurately and quickly using Amazon Q Business with the SharePoint Online connector

AWS Machine Learning Blog Amazon Q Business is a fully managed, generative artificial intelligence (AI)-powered assistant that helps enterprises unlock the value of their data and knowledge. With Amazon Q, you can quickly find answers to questions, generate summaries and content, and complete tasks by using the information and expertise stored across your company’s various data sources and enterprise systems. At the core of this capability are native data source connectors that seamlessly integrate and index content from multiple repositories into a unified index. This enables the Amazon Q large language model (LLM) to provide accurate, well-written answers by drawing from the consolidated data and information. The data source connectors act as a bridge, synchronizing content from disparate systems like Salesforce, Jira, and SharePoint into a centralized index that powers the natural language understanding and generative abilities of Amazon Q. To make this integration process as seamless as possible, Amazon Q Business offers multiple pre-built connectors to a wide range of data sources, including Atlassian Jira, Atlassian Confluence, Amazon Simple Storage Service (Amazon S3), Microsoft SharePoint, Salesforce, and many more. This allows you to create your generative AI solution with minimal configuration. For a full list of Amazon Q supported data source connectors, see Supported connectors. One of the key integrations for Amazon Q is with Microsoft SharePoint Online. SharePoint is a widely used collaborative platform that allows organizations to manage and share content, knowledge, and applications to improve productivity and decision-making. By integrating Amazon Q with SharePoint, businesses can empower their employees to access information and insights from SharePoint more efficiently and effectively. With the Amazon Q and SharePoint Online integration, business users can do the following: Get instant answers – Users can ask natural language questions and Amazon Q will provide accurate, up-to-date answers by searching and synthesizing...
Read More
Psychology

Episode 3: Jane the Brain and the Upset Reset

NIMH News Feed Hello kids, meet Jane the Brain! In this fun and colorful video series from the National Institute of Mental Health (NIMH), Jane, our super-smart and friendly animated character, helps kids understand big feelings like stress, frustration, and sadness. Join Jane as she explores ways to handle these emotions with relatable situations and helpful tips and coping skills. Go to Source 26/07/2024 - 11:36 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Evaluate conversational AI agents with Amazon Bedrock

AWS Machine Learning Blog As conversational artificial intelligence (AI) agents gain traction across industries, providing reliability and consistency is crucial for delivering seamless and trustworthy user experiences. However, the dynamic and conversational nature of these interactions makes traditional testing and evaluation methods challenging. Conversational AI agents also encompass multiple layers, from Retrieval Augmented Generation (RAG) to function-calling mechanisms that interact with external knowledge sources and tools. Although existing large language model (LLM) benchmarks like MT-bench evaluate model capabilities, they lack the ability to validate the application layers. The following are some common pain points in developing conversational AI agents: Testing an agent is often tedious and repetitive, requiring a human in the loop to validate the semantics meaning of the responses from the agent, as shown in the following figure. Setting up proper test cases and automating the evaluation process can be difficult due to the conversational and dynamic nature of agent interactions. Debugging and tracing how conversational AI agents route to the appropriate action or retrieve the desired results can be complex, especially when integrating with external knowledge sources and tools. Agent Evaluation, an open source solution using LLMs on Amazon Bedrock, addresses this gap by enabling comprehensive evaluation and validation of conversational AI agents at scale. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. Agent Evaluation provides the following: Built-in support for popular services, including Agents for Amazon Bedrock, Knowledge Bases for Amazon Bedrock, Amazon Q Business, and Amazon SageMaker endpoints Orchestration of concurrent, multi-turn conversations with your agent while evaluating its responses Configurable...
Read More
Psychology

Episode 2: Jane the Brain and the Frustration Sensation

NIMH News Feed Hello kids, meet Jane the Brain! In this fun and colorful video series from the National Institute of Mental Health (NIMH), Jane, our super-smart and friendly animated character, helps kids understand big feelings like stress, frustration, and sadness. Join Jane as she explores ways to handle these emotions with relatable situations and helpful tips and coping skills. Go to Source 26/07/2024 - 11:36 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Node problem detection and recovery for AWS Neuron nodes within Amazon EKS clusters

AWS Machine Learning Blog Implementing hardware resiliency in your training infrastructure is crucial to mitigating risks and enabling uninterrupted model training. By implementing features such as proactive health monitoring and automated recovery mechanisms, organizations can create a fault-tolerant environment capable of handling hardware failures or other issues without compromising the integrity of the training process. In the post, we introduce the AWS Neuron node problem detector and recovery DaemonSet for AWS Trainium and AWS Inferentia on Amazon Elastic Kubernetes Service (Amazon EKS). This component can quickly detect rare occurrences of issues when Neuron devices fail by tailing monitoring logs. It marks the worker nodes in a defective Neuron device as unhealthy, and promptly replaces them with new worker nodes. By accelerating the speed of issue detection and remediation, it increases the reliability of your ML training and reduces the wasted time and cost due to hardware failure. This solution is applicable if you’re using managed nodes or self-managed node groups (which use Amazon EC2 Auto Scaling groups) on Amazon EKS. At the time of writing this post, automatic recovery of nodes provisioned by Karpenter is not yet supported. Solution overview The solution is based on the node problem detector and recovery DaemonSet, a powerful tool designed to automatically detect and report various node-level problems in a Kubernetes cluster. The node problem detector component will continuously monitor the kernel message (kmsg) logs on the worker nodes. If it detects error messages specifically related to the Neuron device (which is the Trainium or AWS Inferentia chip), it will change NodeCondition to NeuronHasError on the Kubernetes API server. The node recovery agent is a separate component that periodically checks the Prometheus metrics exposed by the node problem detector. When it finds a node condition indicating an issue with the Neuron device, it will...
Read More
Psychology

Episode 1: Jane the Brain and the Stress Mess

NIMH News Feed Jane has a big test coming up, and did we mention a science fair project too?? Learn more about how stress affects the brain and join Jane as she learns important skills like box breathing to help her manage stress. Go to Source 26/07/2024 - 11:36 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Mistral Large 2 is now available in Amazon Bedrock

AWS Machine Learning Blog Mistral AI’s Mistral Large 2 (24.07) foundation model (FM) is now generally available in Amazon Bedrock. Mistral Large 2 is the newest version of Mistral Large, and according to Mistral AI offers significant improvements across multilingual capabilities, math, reasoning, coding, and much more. In this post, we discuss the benefits and capabilities of this new model with some examples. Overview of Mistral Large 2 Mistral Large 2 is an advanced large language model (LLM) with state-of-the-art reasoning, knowledge, and coding capabilities according to Mistral AI. It is multi-lingual by design, supporting dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, Polish, Arabic, and Hindi. Per Mistral AI, a significant effort was also devoted to enhancing the model’s reasoning capabilities. One of the key focuses during training was to minimize the model’s tendency to hallucinate, or generate plausible-sounding but factually incorrect or irrelevant information. This was achieved by fine-tuning the model to be more cautious and discerning in its responses, making sure it provides reliable and accurate outputs. Additionally, the new Mistral Large 2 is trained to acknowledge when it can’t find solutions or doesn’t have sufficient information to provide a confident answer. According to Mistral AI, the model is also proficient in coding, trained on over 80 programming languages such as Python, Java, C, C++, JavaScript, Bash, Swift, and Fortran. With its best-in-class agentic capabilities, it can natively call functions and output JSON, enabling seamless interaction with external systems, APIs, and tools. Additionally, Mistral Large 2 (24.07) boasts advanced reasoning and mathematical capabilities, making it a powerful asset for tackling complex logical and computational challenges. Mistral Large 2 also offers an increased context window of 128,000 tokens. At the time of writing, the model (mistral.mistral-large-2407-v1:0) is available in the us-west-2 AWS...
Read More
Psychology

Youth With Conduct Disorder Show Widespread Differences in Brain Structure

NIMH News Feed The largest neuroimaging study of conduct disorder to date, with funding from NIH, has revealed extensive changes in brain structure among young people with the disorder. The largest difference was a smaller area of the brain’s outer layer, known as the cerebral cortex, which is critical for many aspects of behavior, cognition and emotion. Go to Source 26/07/2024 - 11:35 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Psychology

Score for predicting dementia risk also may predict depression

PsycPORT™: Psychology Newswire The Brain Care Score is a tool for assessing dementia or stroke risk without medical procedures. Go to Source 26/07/2024 - 11:35 / Twitter: @hoffeldtcom
Read More
Vulnerable people with Covid struggling to access treatments in England, experts warn
Covid-19

Vulnerable people with Covid struggling to access treatments in England, experts warn

Coronavirus | The Guardian Responsibility for prescriptions moving to 42 integrated care boards has led to patients having to work out how to get treatment, often when illPeople at higher risk from Covid are struggling to get timely access to treatments such as antiviral drugs, charities, patients and doctors have warned amid a summer wave of the virus.People with certain health conditions or who meet other specific criteria are eligible for medications that can help the body fight the virus that causes Covid. They include those 85 years or older or who have Down’s syndrome, an organ transplant, a weakened immune system, lung cancer or sickle cell disease. Continue reading... Go to Source 26/07/2024 - 11:35 /Nicola Davis Science correspondent Twitter: @hoffeldtcom
Read More
Psychology

The pandemic caused an increase in teen eating disorders

PsycPORT™: Psychology Newswire Findings show increases in adolescents and young adults seeking both inpatient and outpatient care for an eating disorder in the aftermath of COVID-19. Go to Source 26/07/2024 - 11:35 / Twitter: @hoffeldtcom
Read More
Management

Nine Greater Cincinnati money management firms experienced acquisitions this year

Human Resources News - Human Resources News Headlines | Bizjournals.com In total, nine Greater Cincinnati money management firms either acquired – or were acquired by – another company from January to April this year. Go to Source 22/07/2024 - 13:43 /Isabella Ferrentino Twitter: @hoffeldtcom
Read More
Management

Atlanta private aviation company Volato Group hires president of key unit

Human Resources News - Human Resources News Headlines | Bizjournals.com Mark Ozenick will lead Volato Aircraft Management Services. Go to Source 19/07/2024 - 12:19 /Chris Fuhrmeister Twitter: @hoffeldtcom
Read More
Management

Lawsuit accusing ex-Centene execs of fraud dismissed

Human Resources News - Human Resources News Headlines | Bizjournals.com A Delaware judge dismissed a lawsuit alleging that a majority of Centene Corp.’s former Board of Directors breached their fiduciary duties by failing to oversee pharmacy benefit management (PBM) practices. Go to Source 17/07/2024 - 23:19 /James Drew Twitter: @hoffeldtcom
Read More
Management

Explore St. Louis fills new executive post ahead of president’s exit

Human Resources News - Human Resources News Headlines | Bizjournals.com The tourism and convention agency has hired a new C-suite executive to guide its sales, marketing and revenue management as it launches a search for its next president. Go to Source 17/07/2024 - 23:19 /Diana Barr Twitter: @hoffeldtcom
Read More
Management

Coleman Team to take over as president of Front Street Capital, Robin Team to become chairman

Human Resources News - Human Resources News Headlines | Bizjournals.com Robin Team, who founded the Winston-Salem commercial real estate development, investment and management firm in 1984, will become chairman of the board of directors and remain involved in the company's future. See who will take over daily leadership as the firm's managing partner. Go to Source 15/07/2024 - 18:53 /Elizabeth 'Lilly' Egan Twitter: @hoffeldtcom
Read More
Management

Big general contractor names new president

Human Resources News - Human Resources News Headlines | Bizjournals.com One of the St. Louis region's largest general contractors has named a new president who brings the company over 35 years of operations and management experience in the architecture, engineering and construction industry. Go to Source 13/07/2024 - 13:18 /Diana Barr Twitter: @hoffeldtcom
Read More
Management

Fenton-based marketing giant makes acquisition

Human Resources News - Human Resources News Headlines | Bizjournals.com The Fenton-based provider of events management and incentive programs has expanded with an acquisition. Go to Source 09/07/2024 - 00:13 /Diana Barr Twitter: @hoffeldtcom
Read More
Covid-19

Downtown Eastside could lose 2 public toilets as funding dries up

The city first installed the facilities, both operated by the Overdose Prevention Society, as a temporary response to the growing homelessness crisis and the COVID-19 pandemic.  Go to Source 02/07/2024 - 03:30 /Simon Little Twitter: @hoffeldtcom
Read More
Business News

Apple, Tesla lifts stocks to higher close in light pre-holiday trading

The Straits Times Business News NEW YORK - Megacap growth stocks led by Apple and Tesla lifted the tech-heavy Nasdaq to a higher close on July 1, while the Dow and the S&P 500 also eked out slight gains in light pre-holiday trading. Go to Source 02/07/2024 - 00:13 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Build a self-service digital assistant using Amazon Lex and Knowledge Bases for Amazon Bedrock

AWS Machine Learning Blog Organizations strive to implement efficient, scalable, cost-effective, and automated customer support solutions without compromising the customer experience. Generative artificial intelligence (AI)-powered chatbots play a crucial role in delivering human-like interactions by providing responses from a knowledge base without the involvement of live agents. These chatbots can be efficiently utilized for handling generic inquiries, freeing up live agents to focus on more complex tasks. Amazon Lex provides advanced conversational interfaces using voice and text channels. It features natural language understanding capabilities to recognize more accurate identification of user intent and fulfills the user intent faster. Amazon Bedrock simplifies the process of developing and scaling generative AI applications powered by large language models (LLMs) and other foundation models (FMs). It offers access to a diverse range of FMs from leading providers such as Anthropic Claude, AI21 Labs, Cohere, and Stability AI, as well as Amazon’s proprietary Amazon Titan models. Additionally, Knowledge Bases for Amazon Bedrock empowers you to develop applications that harness the power of Retrieval Augmented Generation (RAG), an approach where retrieving relevant information from data sources enhances the model’s ability to generate contextually appropriate and informed responses. The generative AI capability of QnAIntent in Amazon Lex lets you securely connect FMs to company data for RAG. QnAIntent provides an interface to use enterprise data and FMs on Amazon Bedrock to generate relevant, accurate, and contextual responses. You can use QnAIntent with new or existing Amazon Lex bots to automate FAQs through text and voice channels, such as Amazon Connect. With this capability, you no longer need to create variations of intents, sample utterances, slots, and prompts to predict and handle a wide range of FAQs. You can simply connect QnAIntent to company knowledge sources and the bot can immediately handle questions using the allowed content. In...
Read More
Artificial Intelligence

Identify idle endpoints in Amazon SageMaker

AWS Machine Learning Blog Amazon SageMaker is a machine learning (ML) platform designed to simplify the process of building, training, deploying, and managing ML models at scale. With a comprehensive suite of tools and services, SageMaker offers developers and data scientists the resources they need to accelerate the development and deployment of ML solutions. In today’s fast-paced technological landscape, efficiency and agility are essential for businesses and developers striving to innovate. AWS plays a critical role in enabling this innovation by providing a range of services that abstract away the complexities of infrastructure management. By handling tasks such as provisioning, scaling, and managing resources, AWS allows developers to focus more on their core business logic and iterate quickly on new ideas. As developers deploy and scale applications, unused resources such as idle SageMaker endpoints can accumulate unnoticed, leading to higher operational costs. This post addresses the issue of identifying and managing idle endpoints in SageMaker. We explore methods to monitor SageMaker endpoints effectively and distinguish between active and idle ones. Additionally, we walk through a Python script that automates the identification of idle endpoints using Amazon CloudWatch metrics. Identify idle endpoints with a Python script To effectively manage SageMaker endpoints and optimize resource utilization, we use a Python script that uses the AWS SDK for Python (Boto3) to interact with SageMaker and CloudWatch. This script automates the process of querying CloudWatch metrics to determine endpoint activity and identifies idle endpoints based on the number of invocations over a specified time period. Let’s break down the key components of the Python script and explain how each part contributes to the identification of idle endpoints: Global variables and AWS client initialization – The script begins by importing necessary modules and initializing global variables such as NAMESPACE, METRIC, LOOKBACK, and PERIOD. These variables...
Read More
Artificial Intelligence

Indian language RAG with Cohere multilingual embeddings and Anthropic Claude 3 on Amazon Bedrock

AWS Machine Learning Blog Media and entertainment companies serve multilingual audiences with a wide range of content catering to diverse audience segments. These enterprises have access to massive amounts of data collected over their many years of operations. Much of this data is unstructured text and images. Conventional approaches to analyzing unstructured data for generating new content rely on the use of keyword or synonym matching. These approaches don’t capture the full semantic context of a document, making them less effective for users’ search, content creation, and several other downstream tasks. Text embeddings use machine learning (ML) capabilities to capture the essence of unstructured data. These embeddings are generated by language models that map natural language text into their numerical representations and, in the process, encode contextual information in the natural language document. Generating text embeddings is the first step to many natural language processing (NLP) applications powered by large language models (LLMs) such as Retrieval Augmented Generation (RAG), text generation, entity extraction, and several other downstream business processes. Converting text to embeddings using cohere multilingual embedding model Despite the rising popularity and capabilities of LLMs, the language most often used to converse with the LLM, often through a chat-like interface, is English. And although progress has been made in adapting open source models to comprehend and respond in Indian languages, such efforts fall short of the English language capabilities displayed among larger, state-of-the-art LLMs. This makes it difficult to adopt such models for RAG applications based on Indian languages. In this post, we showcase a RAG application that can search and query across multiple Indian languages using the Cohere Embed – Multilingual model and Anthropic Claude 3 on Amazon Bedrock. This post focuses on Indian languages, but you can use the approach with other languages that are supported by...
Read More
Management

Using AI in Performance Management: The Keys for HR Success

15Five AI isn’t coming for your job, but it might help tons of people at your organization do better at theirs. How? As more and more HR tools adopt this technology, AI-powered performance management will soon become the norm. In a nutshell, performance management covers the practices and processes you use to track each employee’s performance, build realistic goals for them, and help them along their journey as they grow with your organization. It’s an essential part of managing your workforce, but it’s also labor-intensive. Here’s where AI comes in. What is AI-powered performance management? AI-powered performance management uses AI to enhance the way you empower employees to hit their goals. In some cases, that’ll mean using specialized tools that add AI functionality to processes you’re already using, like 15Five’s Spark AI assistant for HR leaders. In others, it’ll mean using AI tools that aren’t necessarily linked to HR processes (e.g. a chatbot like ChatGPT) in ways that support your performance management efforts. If some part of your performance management process is automated, you’re likely using AI, even if you don’t realize it. Let’s explore that further. Performance management using AI: key applications While there are some obvious applications of HR tools in performance management—like asking ChatGPT to write parts of your performance reviews—there are a lot of other ways they can be used. Finding top performers and struggling employees: As they work, your employees create tons of data. Data about how they meet their deadlines, who they’re collaborating with, and how much time they put towards specific tasks. AI tools can crawl through all of that data, turning it into insights managers can use to identify their top performers and help those who need it most. Analyzing employee strengths and weaknesses: Beyond just finding your top performers, AI can...
Read More
Business News

Chevron doctrine overturned: Republicans, big business praise Supreme Court decision

US Top News and Analysis The Supreme Court ruling overturned the case known as Chevron v. Natural Resources Defense Council, reducing the authority of federal regulatory agencies. Go to Source 28/06/2024 - 18:49 / Twitter: @hoffeldtcom
Read More
Management

How To Create a Workplace Culture That Truly Embraces Diversity

15Five Diversity is strength. Period. The numbers show that diverse teams perform better than their more homogeneous counterparts. But just like any other objective, there are right and wrong ways to build a workplace culture that promotes diversity. I’m Andrew Adeniyi, CEO of AAA Solutions and author of The Circle of Leadership: A Framework for Creating & Leveraging Culture. My mission is to empower leaders to build a culture that works for everyone, and I recently talked about this on the HR Superstars podcast. Company culture starts with leadership While a leader’s first instinct is often to empower their team to build a culture of diversity from the ground up in their daily actions, that’s not how you build culture. Author John Maxwell said it best in The 360 Degree Leader: “Everything rises and falls on leadership.” I’ve seen this as well in my work. In organizations with poor culture, you can typically point at leadership—or a lack thereof—as the main cause Employee engagement is essential to building and maintaining a company culture rooted in diversity and belonging, and companies with lackluster leadership generally have lower engagement. As a leader looking to build that culture, that starts with a statement about how committed you are to promoting diversity, equity, and inclusion while acknowledging that you don’t have it all figured out yet—but you’re going to keep trying. Feedback is a big part of this too, and leaders need to be ready to receive it. When Stefan Larson, former CEO of Ralph Lauren Polo, led Old Navy decades ago, he turned things around by going back to its core values. The biggest one? Innovation, a big part of why Old Navy was founded, wasn’t the norm when he became CEO. To make innovation the norm again, Larson incentivized open sharing through...
Read More
Management

Diverse perspectives: A business imperative — Table of Experts

Human Resources News - Human Resources News Headlines | Bizjournals.com In this panel discussion, Achieve Vice President of DEI and Learning and Development Henri’ Dawes discusses DEI in the workplace with Achieve professionals Andy Wadhwa, vice president of Product Management; Linda Luman, executive vice president of Human Resources; Heather Marcom, vice president of Talent Acquisition; and Jamal Williams, senior manager of New Client Enrollment for Sales. Henri’ Dawes: Hello everyone, and welcome to our panel discussion hosted by Achieve. My name is Henri’ Dawes,… Go to Source 28/06/2024 - 09:23 /Achieve Twitter: @hoffeldtcom
Read More
Psychology

Placebo Workshop: Translational Research Domains and Key Questions

NIMH News Feed The National Institute of Mental Health (NIMH) will host a virtual workshop on the placebo effect. The purpose of this workshop is to bring together experts in neurobiology, clinical trials, and regulatory science to examine placebo effects in drug, device, and psychosocial interventions for mental health conditions. Go to Source 28/06/2024 - 06:10 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Artificial Intelligence

The future of productivity agents with NinjaTech AI and AWS Trainium

AWS Machine Learning Blog This is a guest post by Arash Sadrieh, Tahir Azim, and Tengfui Xue from NinjaTech AI. NinjaTech AI’s mission is to make everyone more productive by taking care of time-consuming complex tasks with fast and affordable artificial intelligence (AI) agents. We recently launched MyNinja.ai, one of the world’s first multi-agent personal AI assistants, to drive towards our mission. MyNinja.ai is built from the ground up using specialized agents that are capable of completing tasks on your behalf, including scheduling meetings, conducting deep research from the web, generating code, and helping with writing. These agents can break down complicated, multi-step tasks into branched solutions, and are capable of evaluating the generated solutions dynamically while continually learning from past experiences. All of these tasks are accomplished in a fully autonomous and asynchronous manner, freeing you up to continue your day while Ninja works on these tasks in the background, and engaging when your input is required. Because no single large language model (LLM) is perfect for every task, we knew that building a personal AI assistant would require multiple LLMs optimized specifically for a variety of tasks. In order to deliver the accuracy and capabilities to delight our users, we also knew that we would require these multiple models to work together in tandem. Finally, we needed scalable and cost-effective methods for training these various models—an undertaking that has historically been costly to pursue for most startups. In this post, we describe how we built our cutting-edge productivity agent NinjaLLM, the backbone of MyNinja.ai, using AWS Trainium chips. Building a dataset We recognized early that to deliver on the mission of tackling tasks on a user’s behalf, we needed multiple models that were optimized for specific tasks. Examples include our Deep Researcher, Deep Coder, and Advisor models. After...
Read More
Artificial Intelligence

Build generative AI applications on Amazon Bedrock — the secure, compliant, and responsible foundation

AWS Machine Learning Blog Generative AI has revolutionized industries by creating content, from text and images to audio and code. Although it can unlock numerous possibilities, integrating generative AI into applications demands meticulous planning. Amazon Bedrock is a fully managed service that provides access to large language models (LLMs) and other foundation models (FMs) from leading AI companies through a single API. It provides a broad set of tools and capabilities to help build generative AI applications. Starting today, I’ll be writing a blog series to highlight some of the key factors driving customers to choose Amazon Bedrock. One of the most important reason is that Bedrock enables customers to build a secure, compliant, and responsible foundation for generative AI applications. In this post, I explore how Amazon Bedrock helps address security and privacy concerns, enables secure model customization, accelerates auditability and incident response, and fosters trust through transparency and responsible AI. Plus, I’ll showcase real-world examples of companies building secure generative AI applications on Amazon Bedrock—demonstrating its practical applications across different industries. Listening to what our customers are saying During the past year, my colleague Jeff Barr, VP & Chief Evangelist at AWS, and I have had the opportunity to speak with numerous customers about generative AI. They mention compelling reasons for choosing Amazon Bedrock to build and scale their transformative generative AI applications. Jeff’s video highlights some of the key factors driving customers to choose Amazon Bedrock today. As you build and operationalize generative AI, it’s important not to lose sight of critically important elements—security, compliance, and responsible AI—particularly for use cases involving sensitive data. The OWASP Top 10 For LLMs outlines the most common vulnerabilities, but addressing these may require additional efforts including stringent access controls, data encryption, preventing prompt injection attacks, and compliance with policies. You want...
Read More
Business News

Ascott announces 6 key appointments in global expansion move

The Straits Times Business News Company charting new growth trajectory towards achieving target of over $500 million in fee revenue by 2028. Go to Source 28/06/2024 - 00:55 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Build a conversational chatbot using different LLMs within single interface – Part 1

AWS Machine Learning Blog With the advent of generative artificial intelligence (AI), foundation models (FMs) can generate content such as answering questions, summarizing text, and providing highlights from the sourced document. However, for model selection, there is a wide choice from model providers, like Amazon, Anthropic, AI21 Labs, Cohere, and Meta, coupled with discrete real-world data formats in PDF, Word, text, CSV, image, audio, or video. Amazon Bedrock is a fully managed service that makes it straightforward to build and scale generative AI applications. Amazon Bedrock offers a choice of high-performing FMs from leading AI companies, including AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon, through a single API. It enables you to privately customize FMs with your data using techniques such as fine-tuning, prompt engineering, and Retrieval Augmented Generation (RAG), and build agents that run tasks using your enterprise systems and data sources while complying with security and privacy requirements. In this post, we show you a solution for building a single interface conversational chatbot that allows end-users to choose between different large language models (LLMs) and inference parameters for varied input data formats. The solution uses Amazon Bedrock to create choice and flexibility to improve the user experience and compare the model outputs from different options. The entire code base is available in GitHub, along with an AWS CloudFormation template. What is RAG Retrieval Augmented Generation (RAG) can enhance the generation process by using the benefits of retrieval, enabling a natural language generation model to produce more informed and contextually appropriate responses. By incorporating relevant information from retrieval into the generation process, RAG aims to improve the accuracy, coherence, and informativeness of the generated content. Implementing an effective RAG system requires several key components working in harmony: Foundation models – The foundation of a RAG architecture is...
Read More
Covid-19

CRA says legal action coming to recover COVID benefit overpayments

Starting in July, Canada Revenue Agency said it will begin issuing legal warnings and could start to take steps to recover overpayments of all COVID-19 programs Go to Source 27/06/2024 - 19:23 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Melissa Choi named director of MIT Lincoln Laboratory

MIT News - Artificial intelligence Melissa Choi has been named the next director of MIT Lincoln Laboratory, effective July 1. Currently assistant director of the laboratory, Choi succeeds Eric Evans, who will step down on June 30 after 18 years as director.Sharing the news in a letter to MIT faculty and staff today, Vice President for Research Ian Waitz noted Choi’s 25-year career of “outstanding technical and advisory leadership,” both at MIT and in service to the defense community.“Melissa has a marvelous technical breadth as well as excellent leadership and management skills, and she has presented a compelling strategic vision for the Laboratory,” Waitz wrote. “She is a thoughtful, intuitive leader who prioritizes communication, collaboration, mentoring, and professional development as foundations for an organizational culture that advances her vision for Lab-wide excellence in service to the nation.”Choi’s appointment marks a new chapter in Lincoln Laboratory’s storied history working to keep the nation safe and secure. As a federally funded research and development center operated by MIT for the Department of Defense, the laboratory has provided the government an independent perspective on critical science and technology issues of national interest for more than 70 years. Distinctive among national R&D labs, the laboratory specializes in both long-term system development and rapid demonstration of operational prototypes, to protect and defend the nation against advanced threats. In tandem with its role in developing technology for national security, the laboratory’s integral relationship with the MIT campus community enables impactful partnerships on fundamental research, teaching, and workforce development in critical science and technology areas.“In a time of great global instability and fast-evolving threats, the mission of Lincoln Laboratory has never been more important to the nation,” says MIT President Sally Kornbluth. “It is also vital that the laboratory apply government-funded, cutting-edge technologies to solve critical problems in fields...
Read More
Business News

Insurer Prudential to get Citi banker as new Singapore chief

The Straits Times Business News Ms Chan San San has been appointed to head Prudential’s Singapore business in about two months’ time. Go to Source 27/06/2024 - 15:30 / Twitter: @hoffeldtcom
Read More
Covid-19

LHSC reports $78-million deficit in 2023-24 fiscal year

LHSC is facing major budget pressures due to continued pandemic recovery among other factors, which resulted in $2 million more than projected in the deficit. Go to Source 26/06/2024 - 21:54 /Emily Passfield Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Automate derivative confirms processing using AWS AI services for the capital markets industry

AWS Machine Learning Blog Capital markets operation teams face numerous challenges throughout the post-trade lifecycle, including delays in trade settlements, booking errors, and inaccurate regulatory reporting. For derivative trades, it’s even more challenging. The timely settlement of derivative trades is an onerous task. This is because trades involve different counterparties and there is a high degree of variation among documents containing commercial terms (such as trade date, value date, and counterparties). We commonly see the application of screen scrapping solutions with OCR in capital market organizations. These applications come with the drawback of being inflexible and high-maintenance. Artificial intelligence and machine learning (AI/ML) technologies can assist capital market organizations overcome these challenges. Intelligent document processing (IDP) applies AI/ML techniques to automate data extraction from documents. Using IDP can reduce or eliminate the requirement for time-consuming human reviews. IDP has the power to transform the way capital market back-office operations work. It has the potential to boost employee efficiency, enhance cash flow by speeding up trade settlements, and minimize operational and regulatory risks. In this post, we show how you can automate and intelligently process derivative confirms at scale using AWS AI services. The solution combines Amazon Textract, a fully managed ML service to effortlessly extract text, handwriting, and data from scanned documents, and AWS Serverless technologies, a suite of fully managed event-driven services for running code, managing data, and integrating applications, all without managing servers. Solution overview The lifecycle of a derivative trade involves multiple phases, from trade research to execution, to clearing and settlement. The solution showcased in this post focuses on the trade clearing and settlement phase of the derivative trade lifecycle. During this phase, counterparties to the trade and their agents determine and verify the exact commercial terms of the transaction and prepare for settlement. The following...
Read More
Artificial Intelligence

AI-powered assistants for investment research with multi-modal data: An application of Agents for Amazon Bedrock

AWS Machine Learning Blog This post is a follow-up to Generative AI and multi-modal agents in AWS: The key to unlocking new value in financial markets. This blog is part of the series, Generative AI and AI/ML in Capital Markets and Financial Services. Financial analysts and research analysts in capital markets distill business insights from financial and non-financial data, such as public filings, earnings call recordings, market research publications, and economic reports, using a variety of tools for data mining. They face many challenges because of the increasing variety of tools and amount of data. They must synthesize massive amounts of data from multiple sources, qualitative and quantitative, to provide insights and recommendations. Analysts need to learn new tools and even some programming languages such as SQL (with different variations). To add to these challenges, they must think critically under time pressure and perform their tasks quickly to keep up with the pace of the market. Investment research is the cornerstone of successful investing, and involves gathering and analyzing relevant information about potential investment opportunities. Through thorough research, analysts come up with a hypothesis, test the hypothesis with data, and understand the effect before portfolio managers make decisions on investments as well as mitigate risks associated with their investments. Artificial intelligence (AI)-powered assistants can boost the productivity of a financial analysts, research analysts, and quantitative trading in capital markets by automating many of the tasks, freeing them to focus on high-value creative work. AI-powered assistants can amplify an analyst’s productivity by searching for relevant information in the customer’s own database as well as online, conducting qualitative and quantitative analysis on structured and unstructured data, enabling analysts to work faster and with greater accuracy. In this post, we introduce a solution using Agents for Amazon Bedrock and Knowledge Bases for Amazon...
Read More
Management

How to Implement Artificial Intelligence in Human Resources

15Five The AI furor might be long past, but the tools are here to stay. Tons of tools HR teams are already using have AI features built right in, so you might be using them without realizing it. But whether your team hasn’t implemented AI in their workflows yet or you want to get more out of what you’re already doing, here are a few things to consider. For one, there are four main types of artificial intelligence HR teams can work with: Generative AI tools use massive banks of data to create human-equivalent text, images, and other types of content. Machine learning platforms analyze data to pull out insights that can teach machines to perform tasks the way a human might. Natural language processing tools can review the text people write and draw conclusions like the sentiment or ideas it expresses. Predictive analytics is exactly what it sounds like. These tools use historical data to pick up on trends and help their users predict how these might change in the future. These AI tools can be used independently (like ChatGPT for generative AI) or be bundled into existing HR tools (like 15Five’s Spark for natural language processing). Now let’s cover everything else you need to know about using AI in HR processes. How is artificial intelligence used in human resources? AI can help streamline all sorts of processes, saving you time, improving employee experience, and freeing up HR resources for more important tasks. Here’s a quick list of ways HR teams can use AI tools in their everyday work. Candidate pre-screening When you’re getting dozens of applications across multiple job postings each week, you need a bit of a helping hand. Automated candidate pre-screening has already been in use by HR teams for years, and it’s exactly what it...
Read More
Artificial Intelligence

AI21 Labs Jamba-Instruct model is now available in Amazon Bedrock

AWS Machine Learning Blog We are excited to announce the availability of the Jamba-Instruct large language model (LLM) in Amazon Bedrock. Jamba-Instruct is built by AI21 Labs, and most notably supports a 256,000-token context window, making it especially useful for processing large documents and complex Retrieval Augmented Generation (RAG) applications. What is Jamba-Instruct Jamba-Instruct is an instruction-tuned version of the Jamba base model, previously open sourced by AI21 Labs, which combines a production grade-model, Structured State Space (SSM) technology, and Transformer architecture. With the SSM approach, Jamba-Instruct is able to achieve the largest context window length in its model size class while also delivering the performance traditional transformer-based models provide. These models yield a performance boost over AI21’s previous generation of models, the Jurassic-2 family of models. For more information about the hybrid SSM/Transformer architecture, refer to the Jamba: A Hybrid Transformer-Mamba Language Model whitepaper. Get started with Jamba-Instruct To get started with Jamba-Instruct models in Amazon Bedrock, first you need to get access to the model. On the Amazon Bedrock console, choose Model access in the navigation pane. Choose Modify model access. Select the AI21 Labs models you want to use and choose Next. Choose Submit to request model access. For more information, refer to Model access. Next, you can test the model either in the Amazon Bedrock Text or Chat playground. Example use cases for Jamba-Instruct Jamba-Instruct’s long context length is particularly well-suited for complex Retrieval Augmented Generation (RAG) workloads, or potentially complex document analysis. For example, it would be suitable for detecting contradictions between different documents or analyzing one document in the context of another. The following is an example prompt suitable for this use case: You are an expert research assistant; you are to note any contradictions between the first document and second document provided: Document...
Read More
1 4 5 6 7 8 45

The messages, the text and the photo is belonging to the one who sends out the RSS feed or related to the sender.

error: Content is protected !!