Blog

We collect key news from free RSS feeds; the news is updated every 3 hours, 24/7.

Business News

Apple’s services unit is now a $100 billion a year juggernaut after ‘phenomenal’ growth

US Top News and Analysis Apple's services business has become a critical part of the company's appeal to Wall Street over the past decade. Go to Source 01/11/2024 - 07:36 / Twitter: @hoffeldtcom
Read More
Business News

Amazon CEO pledges AI investments will pay off as capital expenditures surge 81%

US Top News and Analysis Amazon CEO Andy Jassy reassured shareholders that the company expects to make money on its generative AI investments. Go to Source 01/11/2024 - 07:36 / Twitter: @hoffeldtcom
Read More
Business News

Intel shares jump 7% on earnings beat, uplifting guidance

US Top News and Analysis Intel reported better-than-expected earnings following a quarter filled with challenges. Go to Source 01/11/2024 - 07:36 / Twitter: @hoffeldtcom
Read More
Business News

Olam exploring sale of remaining agribusiness stake to Saudis; shares jump 10%

The Straits Times Business News Deal said to value Olam Agri at about US$4 billion. Go to Source 01/11/2024 - 07:36 / Twitter: @hoffeldtcom
Read More
Business News

Ex-dealer who made $255k from illegally manipulating prices of SGX stocks gets 9 months’ jail

The Straits Times Business News It is the biggest reported case of stock market price spoofing detected in SGX. Go to Source 01/11/2024 - 07:36 / Twitter: @hoffeldtcom
Read More
Business News

China home sales see first monthly rise in 2024 on stimulus blitz

The Straits Times Business News Value of new home sales rose 7.1 per cent from a year earlier in October. Go to Source 01/11/2024 - 07:36 / Twitter: @hoffeldtcom
Read More
Psychology

Director’s Innovation Speaker Series: Youth-Centered Approaches to Media Research

NIMH News Feed During this lecture, Jenny Radesky, M.D., and Megan Moreno, M.D., M.S.Ed., M.P.H., will discuss youth-centered approaches to social media research and their impact on frameworks, methods, and products. Go to Source 01/11/2024 - 07:29 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Empower your generative AI application with a comprehensive custom observability solution

AWS Machine Learning Blog Recently, we’ve been witnessing the rapid development and evolution of generative AI applications, with observability and evaluation emerging as critical aspects for developers, data scientists, and stakeholders. Observability refers to the ability to understand the internal state and behavior of a system by analyzing its outputs, logs, and metrics. Evaluation, on the other hand, involves assessing the quality and relevance of the generated outputs, enabling continual improvement. Comprehensive observability and evaluation are essential for troubleshooting, identifying bottlenecks, optimizing applications, and providing relevant, high-quality responses. Observability empowers you to proactively monitor and analyze your generative AI applications, and evaluation helps you collect feedback, refine models, and enhance output quality. In the context of Amazon Bedrock, observability and evaluation become even more crucial. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. As the complexity and scale of these applications grow, providing comprehensive observability and robust evaluation mechanisms is essential for maintaining high performance, quality, and user satisfaction. We have built a custom observability solution that Amazon Bedrock users can quickly implement using just a few key building blocks and existing logs from FMs, Amazon Bedrock Knowledge Bases, Amazon Bedrock Guardrails, and Amazon Bedrock Agents. 
This solution uses decorators in your application code to capture and log metadata such as input prompts, output results, run time, and custom metadata, offering enhanced security, ease of use, flexibility, and integration with native AWS services. Notably, the solution supports comprehensive Retrieval Augmented Generation (RAG) evaluation so you can assess the quality and relevance of generated responses, identify areas for...
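The decorator approach described above can be sketched in a few lines of Python. Everything here is illustrative rather than the solution's actual code: `log_invocation` and `generate_answer` are hypothetical names, and the real solution would ship each record to native AWS services instead of keeping it in memory.

```python
import functools
import time

def log_invocation(**custom_metadata):
    """Capture input prompt, output result, and run time for a model call.

    A minimal sketch of the decorator pattern; a production version would
    forward each record to a log sink (e.g. CloudWatch) rather than append
    it to an in-memory list.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(prompt, *args, **kwargs):
            start = time.perf_counter()
            result = fn(prompt, *args, **kwargs)
            record = {
                "function": fn.__name__,
                "input_prompt": prompt,
                "output": result,
                "run_time_s": round(time.perf_counter() - start, 4),
                **custom_metadata,  # caller-supplied custom metadata
            }
            wrapper.records.append(record)  # stand-in for a log sink
            return result
        wrapper.records = []
        return wrapper
    return decorator

@log_invocation(app="demo", model="example-model")
def generate_answer(prompt):
    # Stand-in for an Amazon Bedrock InvokeModel call.
    return f"echo: {prompt}"

print(generate_answer("What is observability?"))
print(generate_answer.records[0]["run_time_s"])
```

The decorated function behaves exactly as before; the metadata capture is transparent to callers.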
Read More
Artificial Intelligence

Automate Amazon Bedrock batch inference: Building a scalable and efficient pipeline

AWS Machine Learning Blog Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. Batch inference in Amazon Bedrock efficiently processes large volumes of data using foundation models (FMs) when real-time results aren’t necessary. It’s ideal for workloads that aren’t latency sensitive, such as obtaining embeddings, entity extraction, FM-as-judge evaluations, and text categorization and summarization for business reporting tasks. A key advantage is its cost-effectiveness, with batch inference workloads charged at a 50% discount compared to On-Demand pricing. Refer to Supported Regions and models for batch inference for the AWS Regions and models that currently support it. Although batch inference offers numerous benefits, it’s limited to 10 batch inference jobs submitted per model per Region. To address this consideration and enhance your use of batch inference, we’ve developed a scalable solution using AWS Lambda and Amazon DynamoDB. This post guides you through implementing a queue management system that automatically monitors available job slots and submits new jobs as slots become available. We walk you through our solution, detailing the core logic of the Lambda functions. By the end, you’ll understand how to implement this solution so you can maximize the efficiency of your batch inference workflows on Amazon Bedrock. For instructions on how to start your Amazon Bedrock batch inference job, refer to Enhance call center efficiency using batch inference for transcript summarization with Amazon Bedrock. 
The power of batch inference Organizations can use batch inference to process large volumes of data asynchronously, making it ideal for scenarios where real-time results are not critical. This capability...
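The core queue-management logic the Lambda functions implement can be sketched as a small slot-filling routine. The names here are hypothetical; in the real solution the pending and active jobs would come from DynamoDB, and `submit_fn` would call the Bedrock batch inference (CreateModelInvocationJob) API.

```python
MAX_CONCURRENT_JOBS = 10  # Bedrock limit: batch jobs submitted per model per Region

def submit_available(pending_jobs, active_count, submit_fn):
    """Submit queued jobs while slots remain under the concurrency limit.

    pending_jobs: job descriptors waiting in the queue (e.g. DynamoDB items)
    active_count: number of jobs currently submitted or in progress
    submit_fn:    callable that starts one job (the Lambda would call the
                  Bedrock batch inference API here)
    Returns the jobs submitted on this pass.
    """
    free_slots = max(0, MAX_CONCURRENT_JOBS - active_count)
    to_submit = pending_jobs[:free_slots]
    for job in to_submit:
        submit_fn(job)
    return to_submit
```

Invoked on a schedule (or on job-state-change events), this keeps the 10 available slots filled without ever exceeding the quota.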
Read More
Artificial Intelligence

Build a video insights and summarization engine using generative AI with Amazon Bedrock

AWS Machine Learning Blog Professionals in a wide variety of industries have adopted digital video conferencing tools as part of their regular meetings with suppliers, colleagues, and customers. These meetings often involve exchanging information and discussing actions that one or more parties must take after the session. The traditional way to make sure information and actions aren’t forgotten is to take notes during the session; a manual and tedious process that can be error-prone, particularly in a high-activity or high-pressure scenario. Furthermore, these notes are usually personal and not stored in a central location, which is a lost opportunity for businesses to learn what does and doesn’t work, as well as how to improve their sales, purchasing, and communication processes. This post presents a solution where you can upload a recording of your meeting (a feature available in most modern digital communication services such as Amazon Chime) to a centralized video insights and summarization engine. This engine uses artificial intelligence (AI) and machine learning (ML) services and generative AI on AWS to extract transcripts, produce a summary, and provide a sentiment for the call. The solution notes the logged actions per individual and provides suggested actions for the uploader. All of this data is centralized and can be used to improve metrics in scenarios such as sales or call centers. Many commercial generative AI solutions available are expensive and require user-based licenses. In contrast, our solution is an open-source project powered by Amazon Bedrock, offering a cost-effective alternative without those limitations. This solution can help your organizations’ sales, sales engineering, and support functions become more efficient and customer-focused by reducing the need to take notes during customer calls. Use case overview The organization in this scenario has noticed that during customer calls, some actions often get skipped due to the...
Read More
Artificial Intelligence

Automate document processing with Amazon Bedrock Prompt Flows (preview)

AWS Machine Learning Blog Enterprises in industries like manufacturing, finance, and healthcare are inundated with a constant flow of documents—from financial reports and contracts to patient records and supply chain documents. Historically, processing and extracting insights from these unstructured data sources has been a manual, time-consuming, and error-prone task. However, the rise of intelligent document processing (IDP), which uses the power of artificial intelligence and machine learning (AI/ML) to automate the extraction, classification, and analysis of data from various document types is transforming the game. For manufacturers, this means streamlining processes like purchase order management, invoice processing, and supply chain documentation. Financial services firms can accelerate workflows around loan applications, account openings, and regulatory reporting. And in healthcare, IDP revolutionizes patient onboarding, claims processing, and medical record keeping. By integrating IDP into their operations, organizations across these key industries experience transformative benefits: increased efficiency and productivity through the reduction of manual data entry, improved accuracy and compliance by reducing human errors, enhanced customer experiences due to faster document processing, greater scalability to handle growing volumes of documents, and lower operational costs associated with document management. This post demonstrates how to build an IDP pipeline for automatically extracting and processing data from documents using Amazon Bedrock Prompt Flows, a fully managed service that enables you to build generative AI workflow using Amazon Bedrock and other services in an intuitive visual builder. Amazon Bedrock Prompt Flows allows you to quickly update your pipelines as your business changes, scaling your document processing workflows to help meet evolving demands. 
Solution overview To be scalable and cost-effective, this solution uses serverless technologies and managed services. In addition to Amazon Bedrock Prompt Flows, the solution uses the following services: Amazon Textract – Automatically extracts printed text, handwriting, and data from Amazon Simple Storage Service (Amazon S3)...
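As a sketch of the first pipeline stage, the following assumes the Blocks/BlockType/Text shape of an Amazon Textract DetectDocumentText response and flattens it into prompt-ready text for the downstream prompt flow; the helper name is illustrative.

```python
def textract_to_text(response):
    """Flatten an Amazon Textract DetectDocumentText response into plain text.

    Keeps only LINE blocks, in order, so the result can be dropped straight
    into a classification or extraction prompt in the flow.
    """
    return "\n".join(
        block["Text"]
        for block in response.get("Blocks", [])
        if block["BlockType"] == "LINE"
    )

# Trimmed-down example of a Textract response
sample = {"Blocks": [
    {"BlockType": "PAGE"},
    {"BlockType": "LINE", "Text": "INVOICE #1042"},
    {"BlockType": "LINE", "Text": "Total: $314.15"},
    {"BlockType": "WORD", "Text": "INVOICE"},  # WORD blocks duplicate LINE content
]}
print(textract_to_text(sample))
```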
Read More
Artificial Intelligence

Governing the ML lifecycle at scale: Centralized observability with Amazon SageMaker and Amazon CloudWatch

AWS Machine Learning Blog This post is part of an ongoing series on governing the machine learning (ML) lifecycle at scale. To start from the beginning, refer to Governing the ML lifecycle at scale, Part 1: A framework for architecting ML workloads using Amazon SageMaker. A multi-account strategy is essential not only for improving governance but also for enhancing security and control over the resources that support your organization’s business. This approach enables various teams within your organization to experiment, innovate, and integrate more rapidly while keeping the production environment secure and available for your customers. However, because multiple teams might use your ML platform in the cloud, monitoring large ML workloads across a scaling multi-account environment presents challenges in setting up and monitoring telemetry data that is scattered across multiple accounts. In this post, we dive into setting up observability in a multi-account environment with Amazon SageMaker. Amazon SageMaker Model Monitor allows you to automatically monitor ML models in production, and alerts you when data and model quality issues appear. SageMaker Model Monitor emits per-feature metrics to Amazon CloudWatch, which you can use to set up dashboards and alerts. You can use cross-account observability in CloudWatch to search, analyze, and correlate cross-account telemetry data stored in CloudWatch such as metrics, logs, and traces from one centralized account. You can now set up a central observability AWS account and connect your other accounts as sources. Then you can search, audit, and analyze logs across your applications to drill down into operational issues in a matter of seconds. You can discover and visualize operational and model metrics from many accounts in a single place and create alarms that evaluate metrics belonging to other accounts. AWS CloudTrail is also essential for maintaining security and compliance in your AWS environment by providing a...
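To make the cross-account alarm idea concrete, here is a hedged sketch that builds a PutMetricAlarm request for a Model Monitor drift metric. The namespace and metric-name patterns are assumptions modeled on Model Monitor's per-feature metrics, so verify them against your account before use; a real deployment would pass the dict to `cloudwatch.put_metric_alarm(**params)` in the central observability account.

```python
def drift_alarm_params(endpoint_name, feature, threshold):
    """Build a CloudWatch PutMetricAlarm request for a per-feature drift metric.

    Namespace and metric name are assumptions illustrating SageMaker Model
    Monitor's per-feature metrics. Once cross-account observability links the
    source accounts, the same alarm can evaluate metrics they emit.
    """
    return {
        "AlarmName": f"{endpoint_name}-{feature}-drift",
        "Namespace": "aws/sagemaker/Endpoints/data-metrics",  # assumption
        "MetricName": f"feature_baseline_drift_{feature}",     # assumption
        "Dimensions": [{"Name": "Endpoint", "Value": endpoint_name}],
        "Statistic": "Average",
        "Period": 3600,
        "EvaluationPeriods": 1,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }
```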
Read More
Psychology

New Hope for Rapid-Acting Depression Treatment

NIMH News Feed A new study, funded in part by the National Institute of Mental Health, showed that a new medication derived from ketamine is safe and acceptable for use in humans, setting the stage for clinical trials testing it for hard-to-treat mental disorders like severe depression. Go to Source 30/10/2024 - 16:55 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Amazon Bedrock Custom Model Import now generally available

AWS Machine Learning Blog Today, we’re pleased to announce the general availability (GA) of Amazon Bedrock Custom Model Import. This feature empowers customers to import and use their customized models alongside existing foundation models (FMs) through a single, unified API. Whether leveraging fine-tuned models like Meta Llama, Mistral Mixtral, and IBM Granite, or developing proprietary models based on popular open-source architectures, customers can now bring their custom models into Amazon Bedrock without the overhead of managing infrastructure or model lifecycle tasks. Amazon Bedrock is a fully managed service that offers a choice of high-performing FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. Amazon Bedrock offers a serverless experience, so you can get started quickly, privately customize FMs with your own data, and integrate and deploy them into your applications using AWS tools without having to manage infrastructure. With Amazon Bedrock Custom Model Import, customers can access their imported custom models on demand in a serverless manner, freeing them from the complexities of deploying and scaling models themselves. They’re able to accelerate generative AI application development by using native Amazon Bedrock tools and features such as Knowledge Bases, Guardrails, Agents, and more—all through a unified and consistent developer experience. Benefits of Amazon Bedrock Custom Model Import include: Flexibility to use existing fine-tuned models: Customers can use their prior investments in model customization by importing existing customized models into Amazon Bedrock without the need to recreate or retrain them. This flexibility maximizes the value of previous efforts and accelerates application development. 
Integration with Amazon Bedrock Features: Imported custom models can be seamlessly integrated with the native tools and features of Amazon Bedrock, such...
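A minimal sketch of what the "single, unified API" means in practice: an imported model is invoked through the same InvokeModel call as any Bedrock FM, addressed by its imported-model ARN. The request-body fields below are illustrative (imported models expect the schema of their base architecture), and the client is injected so the sketch can run without AWS credentials.

```python
import json

def invoke_imported_model(client, model_arn, prompt, max_tokens=256):
    """Call an imported custom model through the unified InvokeModel API.

    `client` is a bedrock-runtime client (boto3.client("bedrock-runtime")),
    injected here so the sketch is testable offline. The body schema is an
    assumption matching Llama-style imported models; adjust it to your
    model's base architecture.
    """
    body = json.dumps({"prompt": prompt, "max_gen_len": max_tokens})
    response = client.invoke_model(modelId=model_arn, body=body)
    return json.loads(response["body"].read())
```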
Read More
Artificial Intelligence

Deploy a serverless web application to edit images using Amazon Bedrock

AWS Machine Learning Blog Generative AI adoption among various industries is revolutionizing different types of applications, including image editing. Image editing is used in various sectors, such as graphic designing, marketing, and social media. Users rely on specialized tools for editing images. Building a custom solution for this task can be complex. However, by using various AWS services, you can quickly deploy a serverless solution to edit images. This approach can give your teams access to image editing foundation models (FMs) using Amazon Bedrock. Amazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available through an API, so you can choose from a wide range of FMs to find the model that’s best suited for your use case. Amazon Bedrock is serverless, so you can get started quickly, privately customize FMs with your own data, and integrate and deploy them into your applications using AWS tools without having to manage infrastructure. Amazon Titan Image Generator G1 is an AI FM available with Amazon Bedrock that allows you to generate an image from text, or upload and edit your own image. Some of the key features we focus on include inpainting and outpainting. This post introduces a solution that simplifies the deployment of a web application for image editing using AWS serverless services. We use AWS Amplify, Amazon Cognito, Amazon API Gateway, AWS Lambda, and Amazon Bedrock with the Amazon Titan Image Generator G1 model to build an application to edit images using prompts. We cover the inner workings of the solution to help you understand the function of each service and how they are connected to give you a complete solution. At the time of writing this post, Amazon Titan Image Generator G1 comes in two versions; for this post, we use version...
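As a sketch of the inpainting request the application's Lambda function would send to the model, the following builds an Amazon Titan Image Generator G1 request body. The field names follow the model's documented schema at the time of writing, but treat them as assumptions and check the current Titan Image Generator parameters before relying on them.

```python
def titan_inpainting_body(image_b64, mask_prompt, text):
    """Build a Titan Image Generator G1 inpainting request body (sketch).

    Field names are assumptions based on the Titan Image Generator request
    schema; the resulting dict would be JSON-serialized and sent via the
    bedrock-runtime InvokeModel API.
    """
    return {
        "taskType": "INPAINTING",
        "inPaintingParams": {
            "image": image_b64,         # base64-encoded source image
            "maskPrompt": mask_prompt,  # region to edit, described in words
            "text": text,               # what to paint into the masked region
        },
        "imageGenerationConfig": {"numberOfImages": 1, "cfgScale": 8.0},
    }
```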
Read More
Artificial Intelligence

Brilliant words, brilliant writing: Using AWS AI chips to quickly deploy Meta Llama 3-powered applications

AWS Machine Learning Blog Many organizations are building generative AI applications powered by large language models (LLMs) to boost productivity and build differentiated experiences. These LLMs are large and complex, and deploying them requires powerful computing resources and results in high inference costs. For businesses and researchers with limited resources, the high inference costs of generative AI models can be a barrier to entering the market, so more efficient and cost-effective solutions are needed. Most generative AI use cases involve human interaction, which requires AI accelerators that can deliver real time response rates with low latency. At the same time, the pace of innovation in generative AI is increasing, and it’s becoming more challenging for developers and researchers to quickly evaluate and adopt new models to keep pace with the market. One way to get started with LLMs such as Llama and Mistral is by using Amazon Bedrock. However, customers who want to deploy LLMs in their own self-managed workflows for greater control and flexibility of underlying resources can use these LLMs optimized on top of AWS Inferentia2-powered Amazon Elastic Compute Cloud (Amazon EC2) Inf2 instances. In this blog post, we will introduce how to use an Amazon EC2 Inf2 instance to cost-effectively deploy multiple industry-leading LLMs on AWS Inferentia2, a purpose-built AWS AI chip, helping customers to quickly test models and open up an API interface to facilitate performance benchmarking and downstream application calls at the same time. Model introduction There are many popular open source LLMs to choose from, and for this blog post, we will review three different use cases based on model expertise using Meta-Llama-3-8B-Instruct, Mistral-7B-instruct-v0.2, and CodeLlama-7b-instruct-hf. 
Model name | Release company | Number of parameters | Release time | Model capabilities
Meta-Llama-3-8B-Instruct | Meta | 8 billion | April 2024 | Language understanding, translation, code generation, inference, chat
Mistral-7B-Instruct-v0.2 | Mistral AI | 7.3...
Read More
Artificial Intelligence

Best practices for building robust generative AI applications with Amazon Bedrock Agents – Part 2

AWS Machine Learning Blog In Part 1 of this series, we explored best practices for creating accurate and reliable agents using Amazon Bedrock Agents. Amazon Bedrock Agents help you accelerate generative AI application development by orchestrating multistep tasks. Agents use the reasoning capability of foundation models (FMs) to create a plan that decomposes the problem into multiple steps. The model is augmented with the developer-provided instruction to create an orchestration plan and then carry out the plan. The agent can use company APIs and external knowledge through Retrieval Augmented Generation (RAG). In this second part, we dive into the architectural considerations and development lifecycle practices that can help you build robust, scalable, and secure intelligent agents. Whether you are just starting to explore the world of conversational AI or looking to optimize your existing agent deployments, this comprehensive guide can provide valuable long-term insights and practical tips to help you achieve your goals. Enable comprehensive logging and observability From the outset of your agent development journey, you should implement thorough logging and observability practices. This is crucial for debugging, auditing, and troubleshooting your agents. The first step to achieve comprehensive logging is to enable Amazon Bedrock model invocation logging to capture prompts and responses securely in your account. Amazon Bedrock Agents also provides you with traces, a detailed overview of the steps being orchestrated by the agents, the underlying prompts invoking the FM, the references being returned from the knowledge bases, and code being generated by the agent. Trace events are streamed in real time, which allows you to customize UX cues to keep the end-user informed about the progress of their request. You can log your agent’s traces and use them to track and troubleshoot your agents. When moving agent applications to production, it’s a best practice to set...
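Consuming those streamed trace events can be sketched as below. The event shapes mirror the bedrock-agent-runtime InvokeAgent response, where the completion stream yields chunk and trace events; the function is shown over a plain iterable so it runs offline.

```python
def split_agent_events(event_stream):
    """Separate an InvokeAgent completion stream into answer text and traces.

    Chunk events carry the generated answer; trace events carry the
    orchestration details (plan steps, underlying prompts, knowledge base
    references) that the post recommends logging. Drive UX progress cues
    from the traces as they arrive; join the chunks for the final answer.
    """
    answer_parts, traces = [], []
    for event in event_stream:
        if "chunk" in event:
            answer_parts.append(event["chunk"]["bytes"].decode())
        elif "trace" in event:
            traces.append(event["trace"])  # log this for auditing/troubleshooting
    return "".join(answer_parts), traces
```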
Read More
Artificial Intelligence

Train, optimize, and deploy models on edge devices using Amazon SageMaker and Qualcomm AI Hub

AWS Machine Learning Blog This post is co-written by Rodrigo Amaral, Ashwin Murthy, and Meghan Stronach from Qualcomm. In this post, we introduce an innovative solution for end-to-end model customization and deployment at the edge using Amazon SageMaker and Qualcomm AI Hub. This seamless cloud-to-edge AI development experience will enable developers to create optimized, highly performant, and custom managed machine learning solutions where you can bring your own model (BYOM) and bring your own data (BYOD) to meet varied business requirements across industries. From real-time analytics and predictive maintenance to personalized customer experiences and autonomous systems, this approach caters to diverse needs. We demonstrate this solution by walking you through a comprehensive step-by-step guide on how to fine-tune YOLOv8, a real-time object detection model, on Amazon Web Services (AWS) using a custom dataset. The process uses a single ml.g5.2xlarge instance (providing one NVIDIA A10G Tensor Core GPU) with SageMaker for fine-tuning. After fine-tuning, we show you how to optimize the model with Qualcomm AI Hub so that it’s ready for deployment across edge devices powered by Snapdragon and Qualcomm platforms. Business challenge Today, many developers use AI and machine learning (ML) models to tackle a variety of business cases, from smart identification and natural language processing (NLP) to AI assistants. While open source models offer a good starting point, they often don’t meet the specific needs of the applications being developed. This is where model customization becomes essential, allowing developers to tailor models to their unique requirements and ensure optimal performance for specific use cases. In addition, on-device AI deployment is a game-changer for developers crafting use cases that demand immediacy, privacy, and reliability. 
By processing data locally, edge AI minimizes latency, ensures sensitive information stays on-device, and guarantees functionality even in poor connectivity. Developers are therefore looking for an end-to-end...
Read More
Psychology

Community Conversation Webinar Series: Is Your Kid Often Angry, Cranky, Irritable?

NIMH News Feed This webinar is designed for parents, caregivers, and educators who want to understand better and address the needs of children with disruptive mood dysregulation disorder. Go to Source 19/10/2024 - 07:44 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Psychology

Office for Disparities Research and Workforce Diversity Webinar Series: Mission-Driven & Equity-Minded Approaches to Graduate Admissions

NIMH News Feed During this webinar, experts in graduate education and systemic-change management will discuss evidence-based practices and case studies of successful holistic admissions programs. The webinar will provide faculty, admission officers, and other higher education professionals with a roadmap for implementing mission-driven systemic change in graduate admissions. Go to Source 19/10/2024 - 07:43 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Psychology

Around 1 in 4 U.S. adults suspect they have ADHD

PsycPORT™: Psychology Newswire Adults with undiagnosed ADHD should avoid self-diagnosis and ask their doctor about their symptoms. Go to Source 19/10/2024 - 07:43 / Twitter: @hoffeldtcom
Read More
Psychology

Adolescents treated for obesity with GLP-1 drugs had lower risk of suicidal thoughts

PsycPORT™: Psychology Newswire Kids who received the GLP-1 medications semaglutide or liraglutide were less likely to have suicidal thoughts or attempts than those treated with behavioral interventions. Go to Source 19/10/2024 - 07:43 / Twitter: @hoffeldtcom
Read More
Psychology

Why disasters like hurricanes unleash so much misinformation

PsycPORT™: Psychology Newswire Falsehoods spread when uncertainties—and emotions—are high after hurricanes. Go to Source 19/10/2024 - 07:43 / Twitter: @hoffeldtcom
Read More
Psychology

1 in 3 U.S. students experience racism at school: It's affecting mental health

PsycPORT™: Psychology Newswire Students who experienced racism said their mental health also deteriorated, a new study showed. Go to Source 19/10/2024 - 07:43 / Twitter: @hoffeldtcom
Read More
Psychology

Stigma keeps many men from seeking mental health support. These 3 shifts in thinking can help

PsycPORT™: Psychology Newswire In the U.S., only 40% of men with a reported mental illness received mental health care services in the past year, as compared to 52% of women with a reported mental illness. Go to Source 19/10/2024 - 07:43 / Twitter: @hoffeldtcom
Read More
Human Resources Management

Webinar: How to lead and manage narcissistic employees

Improve Your HR Tomorrow is the day! Join HRLearns as we host Brenda Neckvatal as she helps people understand the traits of narcissism, recognize manipulative behavior, develop strategies for dealing with a narcissistic employee, provide clear direction and feedback, and avoid enabling narcissistic behavior. This will be well worth your time to attend! Register at HRLearns. About our speaker: Brenda Neckvatal is an international award-winning HR professional and two-time best-selling author. Not only does she help business leaders get the people side of their business right, she is a specialist in crisis management and government contracting HR compliance, and a mentor to rising entrepreneurs, business leaders, HR champions, and professionals. Brenda has been featured in Forbes, Fast Company, and Inc., as well as U.S. News & World Report. She started as an HR sprout after a solid fourteen-year career in retail management. She really enjoys helping people solve their unique problems, and human resources offered her the ability to support her co-workers more. Having the benefit of working for six Fortune 500 companies, she converted her experience into advising her audience to use tried and trusted best practices that help small businesses achieve their workforce goals. In her combined 30-year career in human resources and business, she has consulted with over 500 small businesses and C-suite leaders. She has optimized employee effectiveness and helped mitigate the high costs associated with making hasty employment-related decisions. She has been involved with employee situations including workplace violence, a near stabbing, the deliberate incitement of fear in coworkers, the stalking of women, breaches of protocol around national security, assault, suicide, death, homicide, and a potential active shooter. Brenda is a devoted volunteer in the Navy SEAL community and is constantly finding new ways of supporting veterans of Naval Special...
Read More
Psychology

Office for Disparities Research and Workforce Diversity Webinar Series: 2024 James S. Jackson Memorial Award Ceremony and Lecture

NIMH News Feed The National Institute of Mental Health (NIMH) is pleased to announce that Anna Lau, Ph.D., has been selected as the 2024 James S. Jackson Memorial Award winner. Join us on October 18 to attend her award ceremony and lecture. Go to Source 05/10/2024 - 15:24 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Psychology

Workers are getting anxiety from companies monitoring their work

PsycPORT™: Psychology Newswire Employees are becoming increasingly anxious as they note their companies are monitoring their work from behind the scenes. Go to Source 05/10/2024 - 15:24 / Twitter: @hoffeldtcom
Read More
Psychology

Could traveling keep you young?

PsycPORT™: Psychology Newswire Making new social connections, getting better sleep, and having new experiences could help lower your risk of premature aging. Go to Source 05/10/2024 - 15:24 / Twitter: @hoffeldtcom
Read More
Psychology

LGBTQ+ adults may have greater risk of poor brain health, likely due to ‘minority stress’

PsycPORT™: Psychology Newswire The findings could help inform future studies that investigate increased risk of negative outcomes in LGBTQ+ subgroups to understand the specific challenges for each. Go to Source 05/10/2024 - 15:24 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Implement model-independent safety measures with Amazon Bedrock Guardrails

AWS Machine Learning Blog Generative AI models can produce information on a wide range of topics, but their application brings new challenges. These include maintaining relevance, avoiding toxic content, protecting sensitive information like personally identifiable information (PII), and mitigating hallucinations. Although foundation models (FMs) on Amazon Bedrock offer built-in protections, these are often model-specific and might not fully align with an organization’s use cases or responsible AI principles. As a result, developers frequently need to implement additional customized safety and privacy controls. This need becomes more pronounced when organizations use multiple FMs across different use cases, because maintaining consistent safeguards is crucial for accelerating development cycles and implementing a uniform approach to responsible AI. In April 2024, we announced the general availability of Amazon Bedrock Guardrails to help you introduce safeguards, prevent harmful content, and evaluate models against key safety criteria. With Amazon Bedrock Guardrails, you can implement safeguards in your generative AI applications that are customized to your use cases and responsible AI policies. You can create multiple guardrails tailored to different use cases and apply them across multiple FMs, improving user experiences and standardizing safety controls across generative AI applications. In addition, to enable safeguarding applications using different FMs, Amazon Bedrock Guardrails now supports the ApplyGuardrail API to evaluate user inputs and model responses for custom and third-party FMs available outside of Amazon Bedrock. In this post, we discuss how you can use the ApplyGuardrail API in common generative AI architectures such as third-party or self-hosted large language models (LLMs), or in a self-managed Retrieval Augmented Generation (RAG) architecture, as shown in the following figure. 
Solution overview
For this post, we create a guardrail that stops our FM from providing fiduciary advice. The full list of configurations for the guardrail is available in the GitHub repo. You can...
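The ApplyGuardrail flow described in the excerpt can be sketched as follows. This is a minimal sketch, not the post's exact code: the guardrail ID/version values are placeholders, and the response fields (`action`, `outputs`) are assumptions about the API's documented shape. The demo at the bottom runs on a hardcoded response rather than a live AWS call.

```python
# Hedged sketch: wrapping a self-hosted or third-party LLM with the
# ApplyGuardrail API. Field names below are assumptions to be checked
# against the Amazon Bedrock Runtime documentation.

def check_with_guardrail(bedrock_runtime, text, guardrail_id, guardrail_version, source="INPUT"):
    """Evaluate text against a guardrail; return (allowed, replacement_text)."""
    response = bedrock_runtime.apply_guardrail(
        guardrailIdentifier=guardrail_id,      # placeholder ID
        guardrailVersion=guardrail_version,    # placeholder version
        source=source,  # "INPUT" for user prompts, "OUTPUT" for model responses
        content=[{"text": {"text": text}}],
    )
    return interpret_guardrail_response(response)

def interpret_guardrail_response(response):
    """Pure helper: decide whether to pass text through or substitute it."""
    if response.get("action") == "GUARDRAIL_INTERVENED":
        outputs = response.get("outputs", [])
        replacement = outputs[0]["text"] if outputs else "Request blocked by policy."
        return False, replacement
    return True, None

# Demo on a hardcoded, assumed-shape response (no AWS call is made here):
sample = {"action": "GUARDRAIL_INTERVENED",
          "outputs": [{"text": "Sorry, I can't provide fiduciary advice."}]}
allowed, msg = interpret_guardrail_response(sample)
print(allowed, msg)
```

Because the same helper interprets guardrail results regardless of which model produced the text, one guardrail configuration can sit in front of Bedrock FMs, self-hosted LLMs, and RAG pipelines alike.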
Read More
Artificial Intelligence

How Schneider Electric uses Amazon Bedrock to identify high-potential business opportunities

AWS Machine Learning Blog This post was co-written with Anthony Medeiros, Manager of Solutions Engineering and Architecture for North America Artificial Intelligence, and Adrian Boeh, Senior Data Scientist – NAM AI, from Schneider Electric. Schneider Electric is a global leader in the digital transformation of energy management and automation. The company specializes in providing integrated solutions that make energy safe, reliable, efficient, and sustainable. Schneider Electric serves a wide range of industries, including smart manufacturing, resilient infrastructure, future-proof data centers, intelligent buildings, and intuitive homes. They offer products and services that encompass electrical distribution, industrial automation, and energy management. Their innovative technologies, extensive range of products, and commitment to sustainability position Schneider Electric as a key player in advancing smart and green solutions for the modern world. As demand for renewable energy continues to rise, Schneider Electric faces high demand for sustainable microgrid infrastructure. This demand comes in the form of requests for proposals (RFPs), each of which needs to be manually reviewed by a microgrid subject matter expert (SME) at Schneider. Manual review of each RFP was proving too costly and couldn’t be scaled to meet the industry needs. To solve the problem, Schneider turned to Amazon Bedrock and generative artificial intelligence (AI). Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. In this post, we show how the team at Schneider collaborated with the AWS Generative AI Innovation Center (GenAIIC) to build a generative AI solution on Amazon Bedrock to solve this problem. 
The solution processes and evaluates each RFP and then routes high-value RFPs...
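The evaluate-and-route idea can be sketched as follows. This is an illustrative stand-in, not Schneider's actual pipeline: the prompt wording, the 0-10 scale, and the routing threshold are all assumptions, and `score_fn` is a stub where a call to an FM on Amazon Bedrock would go.

```python
# Hedged sketch of the routing idea only: score each RFP with an LLM prompt
# and forward high scorers to a microgrid subject matter expert (SME).

ROUTING_THRESHOLD = 7  # assumed cut-off on a 0-10 relevance scale

def build_scoring_prompt(rfp_text):
    return (
        "On a scale of 0-10, how relevant is the following RFP to "
        "sustainable microgrid infrastructure? Reply with a number only.\n\n"
        + rfp_text
    )

def route_rfp(rfp_text, score_fn):
    """score_fn stands in for an FM invocation on Amazon Bedrock."""
    score = int(score_fn(build_scoring_prompt(rfp_text)))
    return "SME_REVIEW" if score >= ROUTING_THRESHOLD else "ARCHIVE"

# Demo with a stubbed model response:
print(route_rfp("Design a solar-plus-storage microgrid...", lambda p: "9"))
print(route_rfp("Office furniture procurement...", lambda p: "1"))
```

The design choice is to keep the SME in the loop only for high scorers, which is what makes the manual-review bottleneck scale.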
Read More
Psychology

Self-compassion can reduce mental ill health by 80 percent

PsycPORT™: Psychology Newswire Refugees who practiced self-compassion were found to experience significantly less depression and anxiety than those who did not. Go to Source 05/10/2024 - 15:24 / Twitter: @hoffeldtcom
Read More
Covid-19

Everything you need to know about Covid this autumn – podcast

Coronavirus | The Guardian Madeleine Finlay is joined by Ian Sample, the Guardian’s science editor and Science Weekly co-host, to answer the questions we are all asking about Covid this autumn, from what is going on with the new variant XEC to how to get a vaccine and what scientists think the government should be doing differently. Covid on the rise as experts say England has ‘capitulated’ to the virus. Continue reading... Go to Source 05/10/2024 - 15:23 /Presented by Madeleine Finlay with Ian Sample, produced by Ellie Sans, sound design by Joel Cox, the executive producer is Ellie Bury Twitter: @hoffeldtcom
Read More
Psychology

Depression doesn’t have to ruin your sleep

PsycPORT™: Psychology Newswire Depressive symptoms can manifest in different ways. Often, they can interrupt your sleep quality. Go to Source 05/10/2024 - 15:23 / Twitter: @hoffeldtcom
Read More
Psychology

Disability, Equity, and Mental Health Research Webinar Series: Improving Mental Health Equity for Individuals with Neurodevelopmental Conditions: An Examination of Risk and Protective Factors and Potential Interventions

NIMH News Feed This webinar will discuss the latest research on factors that impact depression and suicidality in autistic people and how to use community-based methods to develop effective interventions. Go to Source 30/09/2024 - 15:40 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Psychology

Kentucky governor signs executive order banning conversion therapy

PsycPORT™: Psychology Newswire The scientifically discredited treatment, which aims to change an LGBTQ person's sexual orientation or gender identity, is now banned for minors in 24 states. Go to Source 23/09/2024 - 22:10 / Twitter: @hoffeldtcom
Read More
Psychology

Pregnancy changes the brain more than previously known

PsycPORT™: Psychology Newswire Certain brain regions may shrink in size during pregnancy yet improve in connectivity. Go to Source 23/09/2024 - 22:10 / Twitter: @hoffeldtcom
Read More
Psychology

Making arts and crafts improves your mental health as much as having a job

PsycPORT™: Psychology Newswire Engaging in creative activities can significantly boost well-being by providing meaningful spaces for expression and achievement. Go to Source 23/09/2024 - 22:10 / Twitter: @hoffeldtcom
Read More
Psychology

When parents are on their phones a lot, here’s what happens to their kids

PsycPORT™: Psychology Newswire Parents who stare at their screens instead of talking to their kids aren’t just modeling poor behavior — they could be hindering their children’s language development. Go to Source 23/09/2024 - 22:09 / Twitter: @hoffeldtcom
Read More
Psychology

The core ‘friend group’ is a myth—and it’s making us feel bad about ourselves

PsycPORT™: Psychology Newswire A tight crew that does everything together isn’t that common, and it’s unrealistic for many of us. Go to Source 23/09/2024 - 22:09 / Twitter: @hoffeldtcom
Read More
Business News

Japan’s export growth slows as external demand moderates, result driven by auto decline

The Straits Times Business News Semiconductor manufacturing equipment was one of the products supporting its export performance. Go to Source 18/09/2024 - 06:06 / Twitter: @hoffeldtcom
Read More
Covid-19

UK facing ‘tsunami of missed cancers’ in wake of pandemic, experts say

Coronavirus | The Guardian UK nations saw largest falls in diagnosis of lung, breast, colorectal and skin cancers in 2020, figures show. The UK can expect a “tsunami of missed cancers”, leading experts have said, after an international study found that diagnoses fell sharply during the pandemic. Preliminary figures from the International Cancer Benchmarking Partnership, presented to delegates at the World Cancer Congress in Geneva, compared data on the incidence and stage of cancer diagnosis in Australia, Canada, Denmark, Ireland, New Zealand, Norway and the UK, before and during the pandemic. Continue reading... Go to Source 18/09/2024 - 06:00 /Anna Bawden in Geneva Twitter: @hoffeldtcom
Read More
Covid-19

Only half of Americans plan to get Covid or flu vaccinations this year – study

Coronavirus | The Guardian Study also found that 37% who have gotten vaccines in the past do not plan on getting them this year. Less than half of Americans plan to get their Covid-19 vaccine this year, according to a new survey, and slightly more than half plan to get a flu shot. In a new report released on Thursday, the Ohio State University’s Wexner Medical Center found that 37% of Americans have gotten vaccines in the past but do not plan to this year. The same percentage of respondents said they do not need any of the vaccines surveyed in the poll, including those against the flu, Covid-19, pneumococcal and respiratory syncytial virus (RSV), the report stated. Continue reading... Go to Source 17/09/2024 - 20:03 /Maya Yang Twitter: @hoffeldtcom
Read More
Artificial Intelligence

CRISPR-Cas9 guide RNA efficiency prediction with efficiently tuned models in Amazon SageMaker

AWS Machine Learning Blog The clustered regularly interspaced short palindromic repeat (CRISPR) technology holds the promise to revolutionize gene editing technologies, which is transformative to the way we understand and treat diseases. This technique is based on a natural mechanism found in bacteria that allows a protein coupled to a single guide RNA (gRNA) strand to locate and make cuts in specific sites in the targeted genome. Being able to computationally predict the efficiency and specificity of gRNA is central to the success of gene editing. Transcribed from DNA sequences, RNA is an important type of biological sequence of ribonucleotides (A, U, G, C), which folds into a 3D structure. Benefiting from recent advances in large language models (LLMs), a variety of computational biology tasks can be solved by fine-tuning biological LLMs pre-trained on billions of known biological sequences. Downstream tasks on RNAs, however, remain relatively understudied. In this post, we adopt a pre-trained genomic LLM for gRNA efficiency prediction. The idea is to treat a computer-designed gRNA as a sentence, and fine-tune the LLM to perform sentence-level regression tasks analogous to sentiment analysis. We used Parameter-Efficient Fine-Tuning methods to reduce the number of parameters and GPU usage for this task.
Solution overview
Large language models (LLMs) have gained a lot of interest for their ability to encode the syntax and semantics of natural languages. The neural architecture behind LLMs is the transformer, which is composed of attention-based encoder-decoder blocks that generate an internal representation of the data they are trained on (encoder) and are able to generate sequences in the same latent space that resemble the original data (decoder). Due to their success in natural language, recent works have explored the use of LLMs for molecular biology information, which is sequential in nature. DNABERT is a pre-trained transformer model with non-overlapping...
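The parameter-efficient idea behind methods like LoRA can be sketched with plain NumPy. This is a toy illustration, not the post's training code: a pretrained weight matrix is frozen and only a low-rank update is learned, which is why a sentence-level regression head (gRNA efficiency, in the post) trains a small fraction of the parameters. The hidden size and rank below are arbitrary choices for the demo.

```python
import numpy as np

# Hedged sketch of LoRA-style parameter-efficient fine-tuning:
# freeze W, learn only the low-rank factors A and B.
rng = np.random.default_rng(0)
d, r = 64, 4                      # hidden size and adapter rank (illustrative)
W = rng.standard_normal((d, d))   # frozen pretrained weights
A = rng.standard_normal((d, r)) * 0.01
B = np.zeros((r, d))              # zero-init so the adapter starts as a no-op

def adapted_forward(x):
    # Only A and B would receive gradients during fine-tuning.
    return x @ (W + A @ B)

full_params = W.size
adapter_params = A.size + B.size
print(f"trainable fraction: {adapter_params / full_params:.3f}")
```

With rank 4 on a 64x64 layer, only 512 of 4,096 weights are trainable, and because B starts at zero the adapted model initially reproduces the pretrained model exactly, which is what makes fine-tuning stable.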
Read More
Artificial Intelligence

Improve RAG performance using Cohere Rerank

AWS Machine Learning Blog This post is co-written with Pradeep Prabhakaran from Cohere. Retrieval Augmented Generation (RAG) is a powerful technique that can help enterprises develop generative artificial intelligence (AI) apps that integrate real-time data and enable rich, interactive conversations using proprietary data. RAG allows these AI applications to tap into external, reliable sources of domain-specific knowledge, enriching the context for the language model as it answers user queries. However, the reliability and accuracy of the responses hinge on finding the right source materials. Therefore, honing the search process in RAG is crucial to boosting the trustworthiness of the generated responses. RAG systems are important tools for building search and retrieval systems, but they often fall short of expectations due to suboptimal retrieval steps. This can be enhanced using a rerank step to improve search quality. RAG is an approach that combines information retrieval techniques with natural language processing (NLP) to enhance the performance of text generation or language modeling tasks. This method involves retrieving relevant information from a large corpus of text data and using it to augment the generation process. The key idea is to incorporate external knowledge or context into the model to improve the accuracy, diversity, and relevance of the generated responses.
Workflow of RAG orchestration
The RAG orchestration generally consists of two steps:
Retrieval – RAG fetches relevant documents from an external data source using the generated search queries. When presented with the search queries, the RAG-based application searches the data source for relevant documents or passages.
Grounded generation – Using the retrieved documents or passages, the generation model creates educated answers with inline citations using the fetched documents.
The following diagram shows the RAG workflow.
Document retrieval in RAG orchestration One technique for retrieving documents in a RAG orchestration is dense retrieval, which is...
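The retrieve-then-rerank pattern the post describes can be sketched as follows. Both stages here are toy stand-ins: the first-pass retriever just truncates the corpus, and a term-overlap score substitutes for the actual Cohere Rerank call, so `toy_rerank_score` and friends are illustrative assumptions, not Cohere's API.

```python
# Hedged sketch: a fast first-pass retrieval returns candidates, then a
# reranker reorders them by relevance before grounded generation.

def first_pass_retrieve(query, corpus, k=10):
    # Stand-in for dense or keyword retrieval against a vector store.
    return corpus[:k]

def toy_rerank_score(query, doc):
    # Placeholder for a real reranker (e.g. Cohere Rerank); counts shared terms.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def rerank(query, docs, top_n=3):
    return sorted(docs, key=lambda d: toy_rerank_score(query, d), reverse=True)[:top_n]

corpus = ["reranking improves retrieval quality",
          "weather forecast for tomorrow",
          "retrieval augmented generation uses retrieval"]
print(rerank("retrieval quality", first_pass_retrieve("retrieval quality", corpus)))
```

The split matters because the first pass must be cheap enough to scan a large corpus, while the reranker can afford a more expensive relevance judgment over a handful of candidates.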
Read More
Covid-19

Mother forced to wear PPE while newborn son died in her arms, Covid inquiry hears

Coronavirus | The Guardian Catherine Todd tells inquiry of losing son in hospital hours after his birth during pandemic after she contracted Covid. A bereaved mother was forced to wear full PPE as her baby son died in her arms hours after his birth, the UK Covid-19 inquiry has heard. Catherine Todd’s son Ziggy was born during the pandemic on 21 July 2021 at the Ulster hospital in Northern Ireland. Chaired by Heather Hallett, the inquiry is now investigating the impact of the pandemic on healthcare systems across the UK. Continue reading... Go to Source 17/09/2024 - 20:03 /Tom Ambrose and agency Twitter: @hoffeldtcom
Read More
Psychology

Office for Disparities Research and Workforce Diversity Webinar Series: Mechanisms of Risk and Resilience for Mental Health in Individuals of Mexican Origin

NIMH News Feed In this webinar, presenters will discuss risk factors that Mexican-origin individuals may face, including discrimination and acculturation stress. They will also discuss research that examines how factors such as familism, ethnic pride, and temperament can help promote resilience among people of Mexican origin. Go to Source 17/09/2024 - 20:02 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Human Resources Management

Why I’m Retiring My ‘Evil HR Lady’ Brand After 18 Years–and What’s Next

Improve Your HR I’ve worked under the “Evil HR Lady” brand since 2006. On Tuesday, I changed it to “Improve Your HR.” Evil HR Lady was a solid brand name with good recognition. I still think it’s funny, and I’m very glad I chose it when I started blogging about HR back in the days of Blogspot. But it was time for a change. Here are five reasons I decided to change my brand name.
New logo designed by Charlie Lythgoe, C5 Designs
My purpose (and audience) changed
“Evil HR Lady” and “Improve Your HR” are both about HR, but my approach to HR has changed over the years. When I launched my blog, my goal was to write an advice column that would explain human resources to non-HR people. I wanted to give regular employees (who often see HR as “evil”) an inside view. And I continue to do that! But about 90 percent of my work is now focused on two groups: To keep reading, click here: Why I’m Retiring My ‘Evil HR Lady’ Brand After 18 Years–and What’s Next The post Why I’m Retiring My ‘Evil HR Lady’ Brand After 18 Years–and What’s Next appeared first on Improve Your HR. Go to Source 11/09/2024 - 16:19 /Evil HR Lady Twitter: @hoffeldtcom
Read More
Psychology

75th Anniversary Symposium: Inspiration and Aspiration: Future Perspectives in Mental Health Research

NIMH News Feed NIMH's final 75th Anniversary symposium brings together trailblazers in the scientific community to discuss diverse and visionary perspectives on the future of mental health research. Go to Source 10/09/2024 - 06:04 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Psychology

Webinar: Timely and Adaptive Strategies to Optimize Suicide Prevention Interventions

NIMH News Feed This webinar brings together experts in areas that include passive and active data collection methods, the measurement of social contexts related to suicide risk, and the methodological aspects of Just-in-Time Adaptive Interventions (JITAI). Go to Source 10/09/2024 - 06:03 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Psychology

NIH Women’s Health Roundtable: Maternal Mental Health Research

NIMH News Feed This is the third roundtable of NIH's Women's Health Roundtable Series, which focuses on important women's health topics, such as maternal mental health, as part of the White House Women's Health Research Initiative. The roundtable is also featured in NIMH's Office of Disparities Research and Workforce Diversity Webinar Series, which focuses on mental health equity research topics. Go to Source 10/09/2024 - 06:03 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Psychology

Webinar: Neuroimmune Mechanisms Linking Inflammatory Processes with Cognitive, Social, and Affective Functions

NIMH News Feed This webinar focuses on the complex communication between the nervous and immune systems and how this interaction impacts brain function during development, adulthood, and disease. Go to Source 10/09/2024 - 06:03 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Psychology

White House announces rule that would cut insurance red tape over mental health and substance use disorder care

PsycPORT™: Psychology Newswire A new rule says mental health and substance use disorder care on private insurance plans should be covered at the same level as physical health benefits. Go to Source 10/09/2024 - 06:03 / Twitter: @hoffeldtcom
Read More
Psychology

Parents nationwide wrestle with fear of school shootings

PsycPORT™: Psychology Newswire How to cope with feelings of anxiety, fear, and helplessness, is something every parent approaches differently. Go to Source 10/09/2024 - 06:03 / Twitter: @hoffeldtcom
Read More
Covid-19

Covid lockdowns prematurely aged girls’ brains more than boys’, study finds

Coronavirus | The Guardian MRI scans found girls’ brains appeared 4.2 years older than expected after lockdowns, compared with 1.4 years for boys. Adolescent girls who lived through Covid lockdowns experienced more rapid brain ageing than boys, according to data that suggests the social restrictions had a disproportionate impact on them. MRI scans found evidence of premature brain ageing in both boys and girls, but girls’ brains appeared on average 4.2 years older than expected after lockdowns, compared with 1.4 years older for boys. Continue reading... Go to Source 10/09/2024 - 06:03 /Ian Sample Science editor Twitter: @hoffeldtcom
Read More
Psychology

Feeling hot and sweaty can disrupt your sleep

PsycPORT™: Psychology Newswire There are benefits to sleeping with a cooling blanket. Go to Source 10/09/2024 - 06:02 / Twitter: @hoffeldtcom
Read More
Psychology

'Next-level helicopter parents' are tracking college students, stunting their development, say experts

PsycPORT™: Psychology Newswire Parents who are 'helicoptering' over their college-age kids are doing them a 'disservice.' Go to Source 10/09/2024 - 06:02 / Twitter: @hoffeldtcom
Read More
Covid-19

Impact of Covid lockdowns to disrupt England’s schools into the 2030s, report says

Coronavirus | The Guardian Analysis from the Association for School and College Leaders warns extensive problems with learning, behaviour and absence to come. Repairing the damage to children’s education caused by the pandemic lockdowns and closures will disrupt England’s schools until the mid-2030s, according to a new report. The analysis, published by the Association for School and College Leaders (ASCL), forecasted that the after-effects of the pandemic will hit schools in a series of waves, with different age groups requiring varying solutions for their problems with learning, behaviour and absence. Continue reading... Go to Source 10/09/2024 - 06:02 /Richard Adams Education editor Twitter: @hoffeldtcom
Read More
Psychology

This metabolic brain boost revives memory in Alzheimer’s mice

PsycPORT™: Psychology Newswire An experimental cancer drug appeared to re-energize the brains of mice that had a form of Alzheimer’s — and even restore their ability to learn and remember. Go to Source 10/09/2024 - 06:02 / Twitter: @hoffeldtcom
Read More
Business News

Malaysia central banker sees rate hold in 2024 with growth at 5%

The Straits Times Business News Inflation won’t exceed 3 per cent, according to Bank Negara Malaysia’s deputy governor. Go to Source 10/09/2024 - 06:02 / Twitter: @hoffeldtcom
Read More
Psychology

Livestream Event: Suicide Prevention in Health Care Settings

NIMH News Feed In recognition of National Suicide Prevention Month in September, the National Institute of Mental Health (NIMH) and the Substance Abuse and Mental Health Services Administration (SAMHSA) are hosting a livestream event on suicide prevention in health care settings. Go to Source 31/08/2024 - 00:50 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Psychology

Office for Disparities Research and Workforce Diversity Webinar Series: Cultural Strengths as Protection: Multimodal Findings Using a Community-Engaged Process

NIMH News Feed This webinar will present a conceptual framework for investigating the impact of cultural factors on mental health within American Indian communities. It will also present emerging findings from community-engaged research in this field. Go to Source 31/08/2024 - 00:50 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Psychology

Affirmations can seem cringe. Should you do them anyway?

PsycPORT™: Psychology Newswire Affirmations can change our behaviors and feelings because they’re a form of positive reinforcement. Go to Source 31/08/2024 - 00:50 / Twitter: @hoffeldtcom
Read More
Psychology

What is ketamine and is it effective?

PsycPORT™: Psychology Newswire How ketamine therapy is used to treat depression, what side effects it can have and more. Go to Source 31/08/2024 - 00:50 / Twitter: @hoffeldtcom
Read More
Psychology

How to cultivate the ‘erotic thread’ that helps you stay connected to your romantic partner

PsycPORT™: Psychology Newswire Moments of feeling desired are key to many people’s sexual fantasies, research suggests, and touch can be crucial for couples to maintain a connection. Go to Source 31/08/2024 - 00:48 / Twitter: @hoffeldtcom
Read More
Psychology

Cellphone bans in some states' public schools take effect as experts point out pros and cons

PsycPORT™: Psychology Newswire Arizona, California and Virginia are some states taking action on student cellphone use during the school day. Go to Source 31/08/2024 - 00:48 / Twitter: @hoffeldtcom
Read More
Psychology

What it’s like to have seasonal depression during summer

PsycPORT™: Psychology Newswire For some people longer, sunnier days cause summertime SAD and they find themselves hiding and feeling down. Go to Source 31/08/2024 - 00:48 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Accelerate Generative AI Inference with NVIDIA NIM Microservices on Amazon SageMaker

AWS Machine Learning Blog This post is co-written with Eliuth Triana, Abhishek Sawarkar, Jiahong Liu, Kshitiz Gupta, JR Morgan and Deepika Padmanabhan from NVIDIA.  At the 2024 NVIDIA GTC conference, we announced support for NVIDIA NIM Inference Microservices in Amazon SageMaker Inference. This integration allows you to deploy industry-leading large language models (LLMs) on SageMaker and optimize their performance and cost. The optimized prebuilt containers enable the deployment of state-of-the-art LLMs in minutes instead of days, facilitating their seamless integration into enterprise-grade AI applications. NIM is built on technologies like NVIDIA TensorRT, NVIDIA TensorRT-LLM, and vLLM. NIM is engineered to enable straightforward, secure, and performant AI inferencing on NVIDIA GPU-accelerated instances hosted by SageMaker. This allows developers to take advantage of the power of these advanced models using SageMaker APIs and just a few lines of code, accelerating the deployment of cutting-edge AI capabilities within their applications. NIM, part of the NVIDIA AI Enterprise software platform listed on AWS Marketplace, is a set of inference microservices that bring the power of state-of-the-art LLMs to your applications, providing natural language processing (NLP) and understanding capabilities, whether you’re developing chatbots, summarizing documents, or implementing other NLP-powered applications. You can use pre-built NVIDIA containers to host popular LLMs that are optimized for specific NVIDIA GPUs for quick deployment. Companies like Amgen, A-Alpha Bio, Agilent, and Hippocratic AI are among those using NVIDIA AI on AWS to accelerate computational biology, genomics analysis, and conversational AI. In this post, we provide a walkthrough of how customers can use generative artificial intelligence (AI) models and LLMs using NVIDIA NIM integration with SageMaker. 
We demonstrate how this integration works and how you can deploy these state-of-the-art models on SageMaker, optimizing their performance and cost. You can use the optimized pre-built NIM containers to deploy LLMs and integrate them...
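The deployment flow can be outlined as follows. This is a minimal sketch under stated assumptions: the ECR image URI is a placeholder (NIM images come from the NVIDIA AI Enterprise listing on AWS Marketplace), the instance type is illustrative, and the helper only assembles the values that would back SageMaker's CreateModel/CreateEndpointConfig calls; no AWS call is made.

```python
# Hedged sketch: the pieces needed to point a SageMaker endpoint at a
# prebuilt NIM container. Values marked as placeholders are assumptions.

def build_nim_model_spec(image_uri, model_name, instance_type="ml.g5.12xlarge"):
    """Assemble values for SageMaker's CreateModel/CreateEndpointConfig."""
    return {
        "ModelName": model_name,
        "PrimaryContainer": {"Image": image_uri},
        "InstanceType": instance_type,  # a GPU-accelerated instance for NIM
    }

spec = build_nim_model_spec(
    image_uri="<account>.dkr.ecr.<region>.amazonaws.com/nim-llm:latest",  # placeholder
    model_name="nim-llama-demo",
)
print(spec["InstanceType"])
# With the SageMaker Python SDK, a spec like this would back roughly:
#   sagemaker.Model(image_uri=..., role=...).deploy(
#       initial_instance_count=1, instance_type=spec["InstanceType"])
```

The point of the prebuilt containers is that this spec is essentially all the integration work: the TensorRT-LLM/vLLM optimization lives inside the NIM image, not in your deployment code.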
Read More
Artificial Intelligence

Celebrating the final AWS DeepRacer League championship and road ahead

AWS Machine Learning Blog The AWS DeepRacer League is the world’s first autonomous racing league, open to everyone and powered by machine learning (ML). AWS DeepRacer brings builders together from around the world, creating a community where you learn ML hands-on through friendly autonomous racing competitions. As we celebrate the achievements of over 560,000 participants from more than 150 countries who sharpened their skills through the AWS DeepRacer League over the last 6 years, we also prepare to close this chapter with a final season that serves as both a victory lap and a launching point for what’s next in the world of AWS DeepRacer.
The legacy of AWS DeepRacer
The AWS DeepRacer community is the heartbeat of the league, where enthusiasts and league legends help foster learning for a global network of AWS DeepRacer participants at any stage of their ML journey. When we launched AWS DeepRacer in 2018, we set out to make ML model training concepts more accessible. By removing common hurdles associated with the preparation of training and evaluating ML models, AWS DeepRacer gives builders a fun way to focus on fundamental training, evaluation, and model performance concepts, all without any prior experience. The impact of racing in the league goes far beyond the podium and prizes, with many participants using their AWS DeepRacer experience and community support to advance their careers. “Embracing the challenges of AWS DeepRacer has not only sharpened my technical skills but has also opened doors to new roles, where innovation and agility are key. Every lap on the track is a step closer to mastering the tools that drive modern solutions, making me ready for the future of technology.” – AWS DeepRacer League veteran Daryl Jezierski, Lead Site Reliability Engineer at The Walt Disney Company. Each year, hundreds of AWS customers...
Read More
Artificial Intelligence

Provide a personalized experience for news readers using Amazon Personalize and Amazon Titan Text Embeddings on Amazon Bedrock

AWS Machine Learning Blog News publishers want to provide a personalized and informative experience to their readers, but the short shelf life of news articles can make this quite difficult. In news publishing, articles typically have peak readership within the same day of publication. Additionally, news publishers frequently publish new articles and want to show these articles to interested readers as quickly as possible. This poses challenges for interaction-based recommender system methodologies such as collaborative filtering and the deep learning-based approaches used in Amazon Personalize, a managed service that can learn user preferences from their past behavior and quickly adjust recommendations to account for changing user behavior in near real time. News publishers typically don’t have the budget or the staff to experiment with in-house algorithms, and need a fully managed solution. In this post, we demonstrate how to provide high-quality recommendations for articles with short shelf lives by using text embeddings in Amazon Bedrock. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading artificial intelligence (AI) companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. Embeddings are a mathematical representation of a piece of information such as a text or an image. Specifically, they are a vector or ordered list of numbers. This representation helps capture the meaning of the image or text in such a way that you can use it to determine how similar images or text are to each other by taking their distance from each other in the embedding space. For our post, we use the Amazon Titan Text Embeddings model.
Solution overview
By combining the benefits of Amazon Titan Text...
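The embedding-distance idea the excerpt describes can be sketched as follows. The vectors here are hardcoded stand-ins for what Amazon Titan Text Embeddings would return (real embeddings have hundreds or thousands of dimensions), and the article titles are invented for the demo.

```python
import numpy as np

# Hedged sketch: recommend articles by cosine similarity in embedding space.
# The 3-dimensional vectors below are toy stand-ins for real embeddings.

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(read_vec, candidates, top_n=1):
    ranked = sorted(candidates.items(),
                    key=lambda kv: cosine(read_vec, kv[1]), reverse=True)
    return [title for title, _ in ranked[:top_n]]

articles = {
    "Election results analysis": np.array([0.9, 0.1, 0.0]),
    "New stadium opens":         np.array([0.0, 0.2, 0.9]),
}
just_read = np.array([0.8, 0.2, 0.1])  # embedding of the article the user just read
print(recommend(just_read, articles))
```

This is why the approach suits short-shelf-life articles: a brand-new article gets an embedding the moment it is published, so it can be recommended immediately without waiting for interaction history to accumulate.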
Read More
Artificial Intelligence

Secure RAG applications using prompt engineering on Amazon Bedrock

AWS Machine Learning Blog The proliferation of large language models (LLMs) in enterprise IT environments presents new challenges and opportunities in security, responsible artificial intelligence (AI), privacy, and prompt engineering. The risks associated with LLM use, such as biased outputs, privacy breaches, and security vulnerabilities, must be mitigated. To address these challenges, organizations must proactively ensure that their use of LLMs aligns with the broader principles of responsible AI and that they prioritize security and privacy. When organizations work with LLMs, they should define objectives and implement measures to enhance the security of their LLM deployments, as they do with applicable regulatory compliance. This involves deploying robust authentication mechanisms, encryption protocols, and optimized prompt designs to identify and counteract prompt injection, prompt leaking, and jailbreaking attempts, which can help increase the reliability of AI-generated outputs as it pertains to security. In this post, we discuss existing prompt-level threats and outline several security guardrails for mitigating prompt-level threats. For our example, we work with Anthropic Claude on Amazon Bedrock, implementing prompt templates that allow us to enforce guardrails against common security threats such as prompt injection. These templates are compatible with and can be modified for other LLMs.
Introduction to LLMs and Retrieval Augmented Generation
LLMs are trained on an unprecedented scale, with some of the largest models comprising billions of parameters and ingesting terabytes of textual data from diverse sources. This massive scale allows LLMs to develop a rich and nuanced understanding of language, capturing subtle nuances, idioms, and contextual cues that were previously challenging for AI systems.
To use these models, we can turn to services such as Amazon Bedrock, which provides access to a variety of foundation models from Amazon and third-party providers including Anthropic, Cohere, Meta, and others. You can use Amazon Bedrock to experiment with state-of-the-art...
Read More
Artificial Intelligence

Get the most from Amazon Titan Text Premier

AWS Machine Learning Blog Amazon Titan Text Premier, the latest addition to the Amazon Titan family of large language models (LLMs), is now generally available in Amazon Bedrock. Amazon Titan Text Premier is an advanced, high performance, and cost-effective LLM engineered to deliver superior performance for enterprise-grade text generation applications, including optimized performance for Retrieval Augmented Generation (RAG) and agents. The model is built from the ground up following safe, secure, and trustworthy responsible AI practices and excels in delivering exceptional generative artificial intelligence (AI) text capabilities at scale. Exclusive to Amazon Bedrock, Amazon Titan Text Premier supports a wide range of text-related tasks, including summarization, text generation, classification, question-answering, and information extraction. This new model offers optimized performance for key features such as RAG on Knowledge Bases for Amazon Bedrock and function calling on Agents for Amazon Bedrock. Such integrations enable advanced applications like building interactive AI assistants that use your APIs and interact with your documents.
Why choose Amazon Titan Text Premier?
As of today, the Amazon Titan family of models for text generation allows for context windows from 4K to 32K and a rich set of capabilities around free text and code generation, API orchestration, RAG, and agent-based applications. An overview of these Amazon Titan models is shown in the following table.
Model | Availability | Context window | Languages | Functionality | Customized fine-tuning
Amazon Titan Text Lite | GA | 4K | English | Code, rich text | Yes
Amazon Titan Text Express | GA (English) | 8K | Multilingual (100+ languages) | Code, rich text, API orchestration | Yes
Amazon Titan Text Premier | GA | 32K | English | Enterprise text generation applications, RAG, agents | Yes (preview)
Amazon Titan Text Premier is an LLM designed for enterprise-grade applications.
It is optimized for performance and cost-effectiveness, with a maximum context length of 32,000 tokens. Amazon Titan Text Premier enables the development of...
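As a minimal sketch, an application might call Titan Text Premier through the Bedrock runtime API along these lines. The request body follows the documented Titan text format, but treat the model ID and parameter names as assumptions to verify against the current Amazon Bedrock documentation:

```python
import json

# Assumed Bedrock model ID for Titan Text Premier; confirm in your account.
MODEL_ID = "amazon.titan-text-premier-v1:0"

def build_titan_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build the JSON body for a Titan text-generation request."""
    return {
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": 0.2,
            "topP": 0.9,
        },
    }

def invoke(prompt: str) -> str:
    """Invoke the model via the Bedrock runtime (requires AWS credentials)."""
    import boto3  # imported here so the payload helper stays dependency-free
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=MODEL_ID,
        body=json.dumps(build_titan_request(prompt)),
    )
    result = json.loads(response["body"].read())
    return result["results"][0]["outputText"]

# Example (needs AWS credentials and Bedrock model access):
# invoke("Summarize the benefits of Retrieval Augmented Generation.")
```

Separating the payload builder from the network call keeps the request shape easy to inspect and test without AWS access.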
Read More
Artificial Intelligence

GenASL: Generative AI-powered American Sign Language avatars

AWS Machine Learning Blog In today’s world, effective communication is essential for fostering inclusivity and breaking down barriers. However, for individuals who rely on visual communication methods like American Sign Language (ASL), traditional communication tools often fall short. That’s where GenASL comes in. GenASL is a generative artificial intelligence (AI)-powered solution that translates speech or text into expressive ASL avatar animations, bridging the gap between spoken and written language and sign language. The rise of foundation models (FMs), and the fascinating world of generative AI that we live in, is incredibly exciting and opens doors to imagine and build what wasn’t previously possible. AWS makes it possible for organizations of all sizes and developers of all skill levels to build and scale generative AI applications with security, privacy, and responsible AI. In this post, we dive into the architecture and implementation details of GenASL, which uses AWS generative AI capabilities to create human-like ASL avatar videos. Solution overview The GenASL solution comprises several AWS services working together to enable seamless translation from speech or text to ASL avatar animations. Users can input audio, video, or text into GenASL, which generates an ASL avatar video that interprets the provided data. The solution uses AWS AI and machine learning (AI/ML) services, including Amazon Transcribe, Amazon SageMaker, Amazon Bedrock, and FMs. The following diagram shows a high-level overview of the architecture. The workflow includes the following steps: An Amazon Elastic Compute Cloud (Amazon EC2) instance initiates a batch process to create ASL avatars from a video dataset consisting of over 8,000 poses using RTMPose, a real-time multi-person pose estimation toolkit based on MMPose. AWS Amplify distributes the GenASL web app consisting of HTML, JavaScript, and CSS to users’ mobile devices. 
An Amazon Cognito identity pool grants temporary access to the Amazon Simple Storage...
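The speech-to-text step of a pipeline like this can be sketched with Amazon Transcribe. The helper below is an illustrative assumption, not code from the GenASL solution, and the job name and S3 URI are placeholders:

```python
def build_transcribe_job(job_name: str, media_uri: str, media_format: str = "mp3") -> dict:
    """Request parameters for transcribe.start_transcription_job (speech -> text)."""
    return {
        "TranscriptionJobName": job_name,
        "Media": {"MediaFileUri": media_uri},
        "MediaFormat": media_format,
        "LanguageCode": "en-US",
    }

def transcribe_audio(job_name: str, media_uri: str) -> None:
    """Start the transcription job (requires AWS credentials)."""
    import boto3
    client = boto3.client("transcribe")
    client.start_transcription_job(**build_transcribe_job(job_name, media_uri))

# Example with a hypothetical S3 location:
# transcribe_audio("genasl-demo", "s3://my-bucket/input.mp3")
```

The resulting transcript would then feed the downstream text-to-ASL-avatar stages described above.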
Read More
Artificial Intelligence

AWS empowers sales teams using generative AI solution built on Amazon Bedrock

AWS Machine Learning Blog At AWS, we are transforming our seller and customer journeys by using generative artificial intelligence (AI) across the sales lifecycle. We envision a future where AI seamlessly integrates into our teams’ workflows, automating repetitive tasks, providing intelligent recommendations, and freeing up time for more strategic, high-value interactions. Our field organization includes customer-facing teams (account managers, solutions architects, specialists) and internal support functions (sales operations). Prospecting, opportunity progression, and customer engagement present exciting opportunities to utilize generative AI, using historical data, to drive efficiency and effectiveness. Personalized content will be generated at every step, and collaboration within account teams will be seamless with a complete, up-to-date view of the customer. Our internal AI sales assistant, powered by Amazon Q Business, will be available across every modality and seamlessly integrate with systems such as internal knowledge bases, customer relationship management (CRM), and more. It will be able to answer questions, generate content, and facilitate bidirectional interactions, all while continuously using internal AWS and external data to deliver timely, personalized insights. Through this series of posts, we share our generative AI journey and use cases, detailing the architecture, AWS services used, lessons learned, and the impact of these solutions on our teams and customers. In this first post, we explore Account Summaries, one of our initial production use cases built on Amazon Bedrock. Account Summaries equips our teams to be better prepared for customer engagements. It combines information from various sources into comprehensive, on-demand summaries available in our CRM or proactively delivered based on upcoming meetings. 
From September 2023 to March 2024, sellers using GenAI Account Summaries saw a 4.9% increase in the value of opportunities created. The business opportunity: Data often resides across multiple internal systems, such as CRM and financial tools, and external sources, making...
Read More
Psychology

Office for Disparities Research and Workforce Diversity’s Disability, Equity, and Mental Health Research Webinar Series: Framework for Understanding Structural Ableism in Health Care

NIMH News Feed In this webinar, Dielle Lundberg, M.P.H., and Jessica Chen, Ph.D., will introduce a conceptual framework outlining pathways through which structural ableism in public health and health care may contribute to health inequities for “people who are disabled, neurodivergent, chronically ill, mad, and/or living with mental illness” (Lundberg & Chen, 2023). Go to Source 27/08/2024 - 06:29 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Build private and secure enterprise generative AI applications with Amazon Q Business using IAM Federation

AWS Machine Learning Blog Amazon Q Business is a conversational assistant powered by generative artificial intelligence (AI) that enhances workforce productivity by answering questions and completing tasks based on information in your enterprise systems, which each user is authorized to access. In an earlier post, we discussed how you can build private and secure enterprise generative AI applications with Amazon Q Business and AWS IAM Identity Center. If you want to use Amazon Q Business to build enterprise generative AI applications, and have yet to adopt organization-wide use of AWS IAM Identity Center, you can use Amazon Q Business IAM Federation to directly manage user access to Amazon Q Business applications from your enterprise identity provider (IdP), such as Okta or Ping Identity. Amazon Q Business IAM Federation uses Federation with IAM and doesn’t require the use of IAM Identity Center. AWS recommends using AWS Identity Center if you have a large number of users in order to achieve a seamless user access management experience for multiple Amazon Q Business applications across many AWS accounts in AWS Organizations. You can use federated groups to define access control, and a user is charged only one time for their highest tier of Amazon Q Business subscription. Although Amazon Q Business IAM Federation enables you to build private and secure generative AI applications, without requiring the use of IAM Identity Center, it is relatively constrained with no support for federated groups, and limits the ability to charge a user only one time for their highest tier of Amazon Q Business subscription to Amazon Q Business applications sharing SAML identity provider or OIDC identity provider in a single AWS account. This post shows how you can use Amazon Q Business IAM Federation for user access management of your Amazon Q Business applications. Solution overview...
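IAM federation of this kind rests on an IAM role that trusts your SAML identity provider. The following is a generic sketch of such a trust policy; the account ID, provider name, and audience value are placeholders, and the exact policy Amazon Q Business IAM Federation requires should be taken from its documentation:

```python
import json

def saml_trust_policy(account_id: str, provider_name: str) -> str:
    """Generic IAM trust policy allowing federated sign-in via a SAML IdP."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    # ARN of the SAML provider registered in IAM.
                    "Federated": f"arn:aws:iam::{account_id}:saml-provider/{provider_name}"
                },
                "Action": "sts:AssumeRoleWithSAML",
                "Condition": {
                    # Restrict assertions to the expected audience.
                    "StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}
                },
            }
        ],
    }
    return json.dumps(policy, indent=2)
```

A role carrying a policy like this is what lets users signed in through Okta or Ping Identity obtain temporary AWS credentials without IAM Identity Center.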
Read More
Psychology

Your Self-Story Is a Lie

Psychology Today: The Latest The stories we tell ourselves aren't entirely true—but that doesn't make them harmful. Go to Source 20/08/2024 - 08:55 /Ross Gormley Twitter: @hoffeldtcom
Read More
Psychology

Dating Apps Steer You in the Wrong Direction

Psychology Today: The Latest Personal Perspective: Are you "swiping left" on all the best people? Here's why you might be missing out on meeting more quality partners. Go to Source 20/08/2024 - 08:55 /Lise Deguire Psy.D. Twitter: @hoffeldtcom
Read More
Psychology

Recent Research Encourages Therapists to Talk About Consent

Psychology Today: The Latest With choking on the rise as a sexual trend with young people, how can therapists teach clients how to verbalize what they desire and remain safe in their intimate relationships? Go to Source 20/08/2024 - 08:55 /Sari Cooper, CST, LCSW Twitter: @hoffeldtcom
Read More
Psychology

How to Deal With Loneliness When You’re Single

Psychology Today: The Latest Personal Perspective: Love and relationships are only one part of your life, not your entire life. Go to Source 20/08/2024 - 08:54 /John Kim LMFT Twitter: @hoffeldtcom
Read More
Psychology

13 Ways to Be the Best Man You Can Be

Psychology Today: The Latest Masculinity does not have to be toxic. Here are some valuable guidelines, collected from male therapy clients over the years, about how to do it right. Go to Source 20/08/2024 - 08:54 /David B. Wexler Ph.D. Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Cohere Rerank 3 Nimble now generally available on Amazon SageMaker JumpStart

AWS Machine Learning Blog The Cohere Rerank 3 Nimble foundation model (FM) is now generally available in Amazon SageMaker JumpStart. This model is the newest FM in Cohere’s Rerank model series, built to enhance enterprise search and Retrieval Augmented Generation (RAG) systems. In this post, we discuss the benefits and capabilities of this new model with some examples. Overview of Cohere Rerank models Cohere’s Rerank family of models are designed to enhance existing enterprise search systems and RAG systems. Rerank models improve search accuracy over both keyword-based and embedding-based search systems. Cohere Rerank 3 is designed to reorder documents retrieved by initial search algorithms based on their relevance to a given query. A reranking model, also known as a cross-encoder, is a type of model that, given a query and document pair, will output a similarity score. For FMs, words, sentences, or entire documents are often encoded as dense vectors in a semantic space. By calculating the cosine of the angle between these vectors, you can quantify their semantic similarity and output as a single similarity score. You can use this score to reorder the documents by relevance to your query. Cohere Rerank 3 Nimble is the newest model from Cohere’s Rerank family of models, designed to improve speed and efficiency from its predecessor Cohere Rerank 3. According to Cohere’s benchmark tests including BEIR (Benchmarking IR) for accuracy and internal benchmarking datasets, Cohere Rerank 3 Nimble maintains high accuracy while being approximately 3–5 times faster than Cohere Rerank 3. The speed improvement is designed for enterprises looking to enhance their search capabilities without sacrificing performance. The following diagram represents the two-stage retrieval of a RAG pipeline and illustrates where Cohere Rerank 3 Nimble is incorporated into the search pipeline. In the first stage of retrieval in the RAG architecture, a...
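To make the scoring idea concrete, here is a toy reranker that orders documents by cosine similarity of dense vectors. This is a simplification for illustration: a cross-encoder like Cohere Rerank 3 Nimble scores each query-document pair jointly inside the model rather than comparing precomputed embeddings.

```python
import math

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine of the angle between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def rerank(query_vec: list[float], docs: dict[str, list[float]]) -> list[str]:
    """Return document IDs ordered by similarity to the query, best first."""
    return sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)

# Toy embeddings: "a" points the same way as the query, "b" is orthogonal.
order = rerank([1.0, 0.0], {"a": [2.0, 0.0], "b": [0.0, 1.0]})
# order == ["a", "b"]
```

In a two-stage RAG pipeline, a first-stage retriever returns candidate documents and a reranker like this reorders them before the top results are passed to the LLM.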
Read More
Psychology

Information Session: NIMH Intramural Research Program Training Opportunities (August)

NIMH News Feed Undergraduates, graduate students, medical students, and postdoctoral fellows are invited to learn about training opportunities available in the NIMH Intramural Research Program. Go to Source 20/08/2024 - 08:54 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Psychology

How to Navigate Breakup Brain: 5 Tips for Getting Through a Breakup

Psychology Today: The Latest "Breakup brain" can present challenges like mental fog, anxiety, and emotional turmoil. Here's how to best support yourself through the aftermath of a split. Go to Source 16/08/2024 - 21:32 /Britt Frank MSW, LSCSW, SEP Twitter: @hoffeldtcom
Read More
Psychology

Think Before You Click: Navigating the Digital Health Maze

Psychology Today: The Latest The internet offers instant access to health info, but beware of misleading data. Prioritize reputable sources and remember that social media advice isn't always reliable. Go to Source 16/08/2024 - 21:32 /Georgia Witkin Ph.D. Twitter: @hoffeldtcom
Read More
Psychology

Privilege in Caregiving

Psychology Today: The Latest Personal Perspective: We like to believe that everyone has the same set of choices in life, but that is not true, as my dad's situation helped me realize. Go to Source 16/08/2024 - 21:32 /Kristi Rendahl DPA Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Perform generative AI-powered data prep and no-code ML over any size of data using Amazon SageMaker Canvas

AWS Machine Learning Blog Amazon SageMaker Canvas now empowers enterprises to harness the full potential of their data by enabling support of petabyte-scale datasets. Starting today, you can interactively prepare large datasets, create end-to-end data flows, and invoke automated machine learning (AutoML) experiments on petabytes of data—a substantial leap from the previous 5 GB limit. With over 50 connectors, an intuitive Chat for data prep interface, and petabyte support, SageMaker Canvas provides a scalable, low-code/no-code (LCNC) ML solution for handling real-world, enterprise use cases. Organizations often struggle to extract meaningful insights and value from their ever-growing volume of data. You need data engineering expertise and time to develop the proper scripts and pipelines to wrangle, clean, and transform data. Then you must experiment with numerous models and hyperparameters requiring domain expertise. Afterward, you need to manage complex clusters to process and train your ML models over these large-scale datasets. Starting today, you can prepare your petabyte-scale data and explore many ML models with AutoML by chat and with a few clicks. In this post, we show you how you can complete all these steps with the new integration in SageMaker Canvas with Amazon EMR Serverless without writing code. Solution overview For this post, we use a sample dataset of a 33 GB CSV file containing flight purchase transactions from Expedia between April 16, 2022, and October 5, 2022. We use the features to predict the base fare of a ticket based on the flight date, distance, seat type, and others. In the following sections, we demonstrate how to import and prepare the data, optionally export the data, create a model, and run inference, all in SageMaker Canvas. Prerequisites You can follow along by completing the following prerequisites: Set up SageMaker Canvas. Download the dataset from Kaggle and upload it to...
Read More
Psychology

How Patience Is the Virtue of Remaining in Difficulty

Psychology Today: The Latest Patience is active, not passive, and without it, we struggle to endure. Go to Source 16/08/2024 - 21:31 /Sabrina B. Little, Ph.D. Twitter: @hoffeldtcom
Read More
Covid-19

Covid deaths in US lower than earlier peaks amid summer surge

Coronavirus | The Guardian Covid not as deadly in 2023 as it was in prior years, falling from the fourth to 10th leading cause of death. Covid continues surging across the US, but deaths are lower than their peaks earlier in the pandemic, due in large part to vaccinations and immunity. Yet the country is still struggling to find its footing on vaccination as the virus settles into a pattern of twice-annual surges. Covid was not as deadly in 2023 as it was in prior years, falling from the fourth to the 10th leading cause of death, according to a study by the US Centers for Disease Control and Prevention (CDC). Deaths overall fell by 6% from 2022 to 2023. Continue reading... Go to Source 16/08/2024 - 21:31 /Melody Schreiber Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Delight your customers with great conversational experiences via QnABot, a generative AI chatbot

AWS Machine Learning Blog QnABot on AWS (an AWS Solution) now provides access to Amazon Bedrock foundation models (FMs) and Knowledge Bases for Amazon Bedrock, a fully managed end-to-end Retrieval Augmented Generation (RAG) workflow. You can now provide contextual information from your private data sources that can be used to create rich, contextual, conversational experiences. The advent of generative artificial intelligence (AI) provides organizations with unique opportunities to digitally transform customer experiences. Enterprises with contact center operations are looking to improve customer satisfaction by providing self-service, conversational, interactive chatbots that have natural language understanding (NLU). Enterprises want to automate frequently asked transactional questions, provide a friendly conversational interface, and improve operational efficiency. In turn, customers can ask a variety of questions and receive accurate answers powered by generative AI. In this post, we discuss how to use QnABot on AWS to deploy a fully functional chatbot integrated with other AWS services, and delight your customers with human-agent-like conversational experiences. Solution overview: QnABot on AWS is an AWS Solution that enterprises can use to enable a multi-channel, multi-language chatbot with NLU to improve end-customer experiences. QnABot provides a flexible, tiered conversational interface empowering enterprises to meet customers where they are and provide accurate responses. Some responses need to be exact (for example, in regulated industries like healthcare or capital markets), some need to be searched from large, indexed data sources and cited, and some need to be generated on the fly, conversationally, based on semantic context. With QnABot on AWS, you can achieve all of the above by deploying the solution using an AWS CloudFormation template, with no coding required.
The solution is extensible, uses AWS AI and machine learning (ML) services, and integrates with multiple channels such as voice, web, and text (SMS). QnABot on AWS...
Read More
Psychology

Expectations: Are We Asking Too Much of Others?

Psychology Today: The Latest Aligning expectations with individual abilities fosters healthier relationships and reduces stress. Emphasizing strengths, setting realistic goals, and open communication are key. Go to Source 16/08/2024 - 21:31 /Cara Gardenswartz Ph.D. Twitter: @hoffeldtcom
Read More
Covid-19

UK’s National Crime Agency says it is ‘not scared’ of PPE Medpro’s lawyers

Coronavirus | The Guardian Agency says long-running investigation into company run by Tory peer Michelle Mone’s husband will be concluded as quickly as possible. The National Crime Agency has said it is “not scared” of lawyers acting for PPE Medpro, the company led by the Conservative peer Michelle Mone’s husband, Doug Barrowman, and is progressing an investigation into it “as fast as we can”. The NCA is conducting a long-running investigation into suspected criminal offences committed in the procurement by PPE Medpro of £203m of government contracts to supply personal protective equipment during the Covid pandemic. Continue reading... Go to Source 16/08/2024 - 21:31 /Emily Dugan Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Introducing document-level sync reports: Enhanced data sync visibility in Amazon Q Business

AWS Machine Learning Blog Amazon Q Business is a fully managed, generative artificial intelligence (AI)-powered assistant that helps enterprises unlock the value of their data and knowledge. With Amazon Q, you can quickly find answers to questions, generate summaries and content, and complete tasks by using the information and expertise stored across your company’s various data sources and enterprise systems. At the core of this capability are native data source connectors that seamlessly integrate and index content from multiple repositories into a unified index. This enables the Amazon Q large language model (LLM) to provide accurate, well-written answers by drawing from the consolidated data and information. The data source connectors act as a bridge, synchronizing content from disparate systems like Salesforce, Jira, and SharePoint into a centralized index that powers the natural language understanding and generative abilities of Amazon Q. Customers appreciate that Amazon Q Business securely connects to over 40 data sources. While using their data source, they want better visibility into the document processing lifecycle during data source sync jobs. They want to know the status of each document they attempted to crawl and index, as well as the ability to troubleshoot why certain documents were not returned with the expected answers. Additionally, they want access to metadata, timestamps, and access control lists (ACLs) for the indexed documents. We are pleased to announce a new feature now available in Amazon Q Business that significantly improves visibility into data source sync operations. The latest release introduces a comprehensive document-level report incorporated into the sync history, providing administrators with granular indexing status, metadata, and ACL details for every document processed during a data source sync job. 
This enhancement to sync job observability enables administrators to quickly investigate and resolve ingestion or access issues encountered while setting up an Amazon Q...
Read More
Artificial Intelligence

Derive generative AI-powered insights from ServiceNow with Amazon Q Business

AWS Machine Learning Blog Effective customer support, project management, and knowledge management are critical aspects of providing efficient customer relationship management. ServiceNow is a platform for incident tracking, knowledge management, and project management functions for software projects and has become an indispensable part of many organizations’ workflows to ensure success of the customer and the product. However, extracting valuable insights from the vast amount of data stored in ServiceNow often requires manual effort and building specialized tooling. Users such as support engineers, project managers, and product managers need to be able to ask questions about an incident or a customer, or get answers from knowledge articles in order to provide excellent customer support. Organizations use ServiceNow to manage workflows, such as IT services, ticketing systems, configuration management, and infrastructure changes across IT systems. Generative artificial intelligence (AI) provides the ability to take relevant information from a data source such as ServiceNow and provide well-constructed answers back to the user. Building a generative AI-based conversational application integrated with relevant data sources requires an enterprise to invest time, money, and people. First, you need to build connectors to the data sources. Next, you need to index this data to make it available for a Retrieval Augmented Generation (RAG) approach, where relevant passages are delivered with high accuracy to a large language model (LLM). To do this, you need to select an index that provides the capabilities to index the content for semantic and vector search, build the infrastructure to retrieve and rank the answers, and build a feature-rich web application. Additionally, you need to hire and staff a large team to build, maintain, and manage such a system. 
Amazon Q Business is a fully managed generative AI-powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on...
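The retrieve-then-generate flow described above can be sketched with a toy lexical retriever. This is a stand-in for the semantic and vector search that a managed service performs for you; the passages and prompt template are illustrative:

```python
def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank passages by overlap with the query's terms."""
    terms = set(query.lower().split())
    return sorted(
        passages,
        key=lambda p: len(terms & set(p.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble a RAG prompt: retrieved context first, then the question."""
    context = "\n".join(retrieve(query, passages))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

passages = [
    "Incident INC001 was caused by a failed deployment.",
    "The cafeteria menu changes weekly.",
]
prompt = build_prompt("What caused incident INC001?", passages)
```

A production system replaces the lexical scorer with a semantic index and sends the assembled prompt to an LLM; the point of the sketch is the delivery of relevant passages alongside the question.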
Read More
Psychology

Hypervigilance Around Other People’s Emotions and Needs

Psychology Today: The Latest Those with a history of people-pleasing behavior often have shaky boundaries where they ignore or downplay their own needs in order to put others’ needs ahead of their own. Go to Source 14/08/2024 - 07:34 /Annie Tanasugarn Ph.D., CCTSA Twitter: @hoffeldtcom
Read More
Psychology

Guiding Your Teen Through the First Year of High School

Psychology Today: The Latest Are you ready for the rollercoaster that is your teen's first year of high school? Learn the tools you'll need to make this journey smoother for everyone involved. Go to Source 14/08/2024 - 07:34 /Hannah Leib LCSW Twitter: @hoffeldtcom
Read More
Psychology

6 Practices for Our Rootless Lives

Psychology Today: The Latest Many of us feel rootless and disconnected. Imaginative and playful spiritual-ish practices can help change that. Go to Source 14/08/2024 - 07:34 /Keith S. Cox Ph.D. Twitter: @hoffeldtcom
Read More
Psychology

Can Financial Psychology Help Me?

Psychology Today: The Latest Many of us are currently facing financial stress. What can we do about it? Can therapy help? Go to Source 14/08/2024 - 07:34 /Courtney Crisp Psy.D. Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Intelligent healthcare forms analysis with Amazon Bedrock

AWS Machine Learning Blog Generative artificial intelligence (AI) provides an opportunity for improvements in healthcare by combining and analyzing structured and unstructured data across previously disconnected silos. Generative AI can help raise the bar on efficiency and effectiveness across the full scope of healthcare delivery. The healthcare industry generates and collects a significant amount of unstructured textual data, including clinical documentation such as patient information, medical history, and test results, as well as non-clinical documentation like administrative records. This unstructured data can impact the efficiency and productivity of clinical services, because it’s often found in various paper-based forms that can be difficult to manage and process. Streamlining the handling of this information is crucial for healthcare providers to improve patient care and optimize their operations. Handling large volumes of data, extracting unstructured data from multiple paper forms or images, and comparing it with the standard or reference forms can be a long and arduous process, prone to errors and inefficiencies. However, advancements in generative AI solutions have introduced automated approaches that offer a more efficient and reliable solution for comparing multiple documents. Amazon Bedrock is a fully managed service that makes foundation models (FMs) from leading AI startups and Amazon available through an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case. Amazon Bedrock offers a serverless experience, so you can get started quickly, privately customize FMs with your own data, and quickly integrate and deploy them into your applications using the AWS tools without having to manage the infrastructure. In this post, we explore using the Anthropic Claude 3 on Amazon Bedrock large language model (LLM). 
Amazon Bedrock provides access to several LLMs, such as Anthropic Claude 3, which can be used to generate semi-structured...
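A minimal sketch of calling Claude 3 on Bedrock to pull fields out of a form follows. The request body uses the Anthropic messages format that Bedrock documents; the model ID should be checked against your account, and the prompt wording and field names are illustrative assumptions, not the method from the post:

```python
import json

# Assumed model ID; confirm which Claude 3 variants are enabled in your account.
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

def build_claude_request(form_text: str, max_tokens: int = 1024) -> dict:
    """Anthropic messages-format body asking the model to extract form fields."""
    prompt = (
        "Extract the patient name, date of birth, and test results from this "
        f"form as JSON:\n\n{form_text}"
    )
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": prompt}]}
        ],
    }

def extract_fields(form_text: str) -> str:
    """Invoke the model via the Bedrock runtime (requires AWS credentials)."""
    import boto3
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=MODEL_ID, body=json.dumps(build_claude_request(form_text))
    )
    return json.loads(response["body"].read())["content"][0]["text"]

# Example (needs AWS credentials and Bedrock model access):
# extract_fields("Patient: Jane Doe\nDOB: 01/02/1980\nResult: negative")
```

Keeping the payload builder separate from the network call makes the prompt and body shape reviewable without invoking the service.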
Read More

The messages, text, and photos belong to the sender of the RSS feed or to parties related to the sender.
