Blog

We collect key news from free RSS services; the news is updated every 3 hours, 24/7.

Artificial Intelligence

Scale and simplify ML workload monitoring on Amazon EKS with AWS Neuron Monitor container

AWS Machine Learning Blog Amazon Web Services is excited to announce the launch of the AWS Neuron Monitor container, an innovative tool designed to enhance the monitoring capabilities of AWS Inferentia and AWS Trainium chips on Amazon Elastic Kubernetes Service (Amazon EKS). This solution simplifies the integration of advanced monitoring tools such as Prometheus and Grafana, enabling you to set up and manage your machine learning (ML) workflows with AWS AI Chips. With the new Neuron Monitor container, you can visualize and optimize the performance of your ML applications, all within a familiar Kubernetes environment. The Neuron Monitor container can also run on Amazon Elastic Container Service (Amazon ECS), but for the purpose of this post, we primarily discuss Amazon EKS deployment. In addition to the Neuron Monitor container, the release of CloudWatch Container Insights (for Neuron) provides further benefits. This extension provides a robust monitoring solution, offering deeper insights and analytics tailored specifically for Neuron-based applications. With Container Insights, you can now access more granular data and comprehensive analytics, making it effortless for developers to maintain high performance and operational health of their ML workloads. Solution overview The Neuron Monitor container solution provides a comprehensive monitoring framework for ML workloads on Amazon EKS, using the power of Neuron Monitor in conjunction with industry-standard tools like Prometheus, Grafana, and Amazon CloudWatch. By deploying the Neuron Monitor DaemonSet across EKS nodes, developers can collect and analyze performance metrics from ML workload pods. In one flow, metrics gathered by Neuron Monitor are integrated with Prometheus, which is configured using a Helm chart for scalability and ease of management. These metrics are then visualized through Grafana, offering you detailed insights into your applications’ performance for effective troubleshooting and optimization. Alternatively, metrics can also be directed to CloudWatch through the CloudWatch Observability EKS add-on...
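For readers who prefer code to consoles, here is a minimal sketch of deploying a monitoring DaemonSet with the official Kubernetes Python client; the image URI, namespace, and scrape port are placeholders, not the actual Neuron Monitor artifacts.

```python
# Minimal sketch: deploy a monitoring DaemonSet across EKS nodes with the
# official Kubernetes Python client. Image URI, namespace, and port are
# placeholders, not the real Neuron Monitor artifacts.
from kubernetes import client, config

config.load_kube_config()  # e.g., populated by `aws eks update-kubeconfig`

daemon_set = client.V1DaemonSet(
    api_version="apps/v1",
    kind="DaemonSet",
    metadata=client.V1ObjectMeta(name="neuron-monitor", namespace="monitoring"),
    spec=client.V1DaemonSetSpec(
        selector=client.V1LabelSelector(match_labels={"app": "neuron-monitor"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "neuron-monitor"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="neuron-monitor",
                    image="<account>.dkr.ecr.<region>.amazonaws.com/neuron-monitor:latest",  # placeholder
                    ports=[client.V1ContainerPort(container_port=8000)],  # assumed Prometheus scrape port
                )
            ]),
        ),
    ),
)

# One pod per node, so every Inferentia/Trainium host gets a collector.
client.AppsV1Api().create_namespaced_daemon_set(namespace="monitoring", body=daemon_set)
```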
Read More
Artificial Intelligence

Build an automated insight extraction framework for customer feedback analysis with Amazon Bedrock and Amazon QuickSight

AWS Machine Learning Blog Extracting valuable insights from customer feedback presents several significant challenges. Manually analyzing and categorizing large volumes of unstructured data, such as reviews, comments, and emails, is a time-consuming process prone to inconsistencies and subjectivity. Scalability becomes an issue as the amount of feedback grows, hindering the ability to respond promptly and address customer concerns. In addition, capturing granular insights, such as specific aspects mentioned and associated sentiments, is difficult. Inefficient routing and prioritization of customer inquiries or issues can lead to delays and dissatisfaction. These pain points highlight the need to streamline the process of extracting insights from customer feedback, enabling businesses to make data-driven decisions and enhance the overall customer experience. Large language models (LLMs) have transformed the way we engage with and process natural language. These powerful models can understand, generate, and analyze text, unlocking a wide range of possibilities across various domains and industries. From customer service and ecommerce to healthcare and finance, the potential of LLMs is being rapidly recognized and embraced. Businesses can use LLMs to gain valuable insights, streamline processes, and deliver enhanced customer experiences. Unlike traditional natural language processing (NLP) approaches, such as classification methods, LLMs offer greater flexibility in adapting to dynamically changing categories and improved accuracy by using pre-trained knowledge embedded within the model. Amazon Bedrock, a fully managed service designed to facilitate the integration of LLMs into enterprise applications, offers a choice of high-performing LLMs from leading artificial intelligence (AI) companies like Anthropic, Mistral AI, Meta, and Amazon through a single API. It provides a broad set of capabilities like model customization through fine-tuning, knowledge base integration for contextual responses, and agents for running complex multi-step tasks across systems. With Amazon Bedrock, developers can experiment, evaluate, and deploy generative AI applications without worrying about infrastructure management....
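As a rough illustration of the extraction step, the following sketch sends one review to a Claude model on Amazon Bedrock and asks for aspects and sentiments as JSON; the prompt and output schema are assumptions, not the post's actual pipeline.

```python
# Sketch of aspect/sentiment extraction from one review via Amazon Bedrock.
# Prompt wording and output schema are illustrative assumptions.
import boto3
import json

bedrock = boto3.client("bedrock-runtime")

review = "The checkout flow was confusing, but delivery was impressively fast."
prompt = (
    "Extract the aspects mentioned in this customer review and the sentiment "
    'for each, as JSON like {"aspects": [{"aspect": ..., "sentiment": ...}]}.\n\n'
    f"Review: {review}"
)

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": prompt}],
    }),
)
# Claude's messages API returns a list of content blocks; take the text.
print(json.loads(response["body"].read())["content"][0]["text"])
```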
Read More
Artificial Intelligence

Build safe and responsible generative AI applications with guardrails

AWS Machine Learning Blog Large language models (LLMs) enable remarkably human-like conversations, allowing builders to create novel applications. LLMs find use in chatbots for customer service, virtual assistants, content generation, and much more. However, the implementation of LLMs without proper caution can lead to the dissemination of misinformation, manipulation of individuals, and the generation of undesirable outputs such as harmful slurs or biased content. Enabling guardrails plays a crucial role in mitigating these risks by imposing constraints on LLM behaviors within predefined safety parameters. This post aims to explain the concept of guardrails, underscore their importance, and cover best practices and considerations for their effective implementation using Guardrails for Amazon Bedrock or other tools. Introduction to guardrails for LLMs The following figure shows an example of a dialogue between a user and an LLM. As demonstrated in this example, LLMs are capable of facilitating highly natural conversational experiences. However, it’s also clear that LLMs without appropriate guardrail mechanisms can be problematic. Consider the following levels of risk when building or deploying an LLM-powered application: User-level risk – Conversations with an LLM may generate responses that your end-users find offensive or irrelevant. Without appropriate guardrails, your chatbot application may also state incorrect facts in a convincing manner, a phenomenon known as hallucination. Additionally, the chatbot could go as far as providing ill-advised life or financial recommendations when you don’t take measures to restrict the application domain. Business-level risk – Conversations with a chatbot might veer off-topic into open-ended and controversial subjects that are irrelevant to your business needs or even harmful to your company’s brand. An LLM deployed without guardrails might also create a vulnerability risk for you or your organization. Malicious actors might attempt to manipulate your LLM application into exposing confidential or protected information, or into producing harmful outputs. To mitigate...
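For a concrete sense of how Guardrails for Amazon Bedrock attaches at inference time, here is a minimal sketch that applies an existing guardrail to a model invocation; the guardrail ID and version are placeholders for one you have already created.

```python
# Sketch: apply a pre-configured guardrail at inference time. The guardrail
# identifier and version are placeholders for one created beforehand.
import boto3
import json

bedrock = boto3.client("bedrock-runtime")

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    guardrailIdentifier="<your-guardrail-id>",  # placeholder
    guardrailVersion="1",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Should I invest my savings in meme stocks?"}],
    }),
)
# If the guardrail intervenes, the configured blocked message is returned instead.
print(json.loads(response["body"].read()))
```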
Read More
Artificial Intelligence

Improve visibility into Amazon Bedrock usage and performance with Amazon CloudWatch

AWS Machine Learning Blog Amazon Bedrock has enabled customers to build delightful new experiences for their customers using generative artificial intelligence (AI). Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities that you need to build generative AI applications with security, privacy, and responsible AI. With some of the best FMs available at their fingertips within Amazon Bedrock, customers are experimenting and innovating faster than ever before. As customers look to operationalize these new generative AI applications, they also need prescriptive, out-of-the-box ways to monitor the health and performance of these applications. In this blog post, we will share some of the capabilities to help you get quick and easy visibility into Amazon Bedrock workloads in the context of your broader application. We will use the contextual conversational assistant example in the Amazon Bedrock GitHub repository to provide examples of how you can customize these views to further enhance visibility, tailored to your use case. Specifically, we will describe how you can use the new automatic dashboard in Amazon CloudWatch to get a single pane of glass visibility into the usage and performance of Amazon Bedrock models and gain end-to-end visibility by customizing dashboards with widgets that provide visibility and insights into components and operations such as Retrieval Augmented Generation in your application. Announcing Amazon Bedrock automatic dashboard in CloudWatch CloudWatch has automatic dashboards for customers to quickly gain insights into the health and performance of their AWS services. A new automatic dashboard for Amazon Bedrock was added to provide insights into key metrics for Amazon Bedrock models. To access the new automatic dashboard from the AWS Management Console:...
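Alongside the automatic dashboard, the same metrics can be pulled programmatically. A minimal sketch, assuming the AWS/Bedrock CloudWatch namespace and its Invocations metric:

```python
# Sketch: pull hourly Bedrock invocation counts from CloudWatch, as the
# automatic dashboard does visually. Model ID is illustrative.
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Bedrock",
    MetricName="Invocations",
    Dimensions=[{"Name": "ModelId", "Value": "anthropic.claude-3-sonnet-20240229-v1:0"}],
    StartTime=datetime.utcnow() - timedelta(hours=24),
    EndTime=datetime.utcnow(),
    Period=3600,  # hourly buckets
    Statistics=["Sum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], int(point["Sum"]))
```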
Read More
Covid-19

PPE worth £1.4bn from single Covid deal destroyed or written off

Coronavirus | The Guardian UK government deal struck at height of pandemic described as ‘colossal misuse of public funds’. An estimated £1.4bn worth of personal protective equipment (PPE) bought by the government in a single deal has been destroyed or written off, according to new figures described as the worst example of waste in the Covid pandemic. The figures obtained by the BBC under freedom of information laws showed that 1.57bn items from the NHS supplier Full Support Healthcare will never be used. Continue reading... Go to Source 25/06/2024 - 16:21 /Matthew Weaver Twitter: @hoffeldtcom
Read More
Covid-19

Will there be more air travel chaos this summer?

BBC News Air travel is booming, but last year delays were much worse than pre-pandemic. Will 2024 be the same? Go to Source 25/06/2024 - 09:21 / Twitter: @hoffeldtcom
Read More
Business News

Oracle warns that a TikTok ban would hurt business

US Top News and Analysis Oracle provides cloud services to TikTok, and is warning investors that a ban of the app could hurt the company's revenue. Go to Source 25/06/2024 - 00:28 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Implement exact match with Amazon Lex QnAIntent

AWS Machine Learning Blog This post is a continuation of Creating Natural Conversations with Amazon Lex QnAIntent and Amazon Bedrock Knowledge Base. In summary, we explored new capabilities available through Amazon Lex QnAIntent, powered by Amazon Bedrock, that enable you to harness natural language understanding and your own knowledge repositories to provide real-time, conversational experiences. In many cases, Amazon Bedrock is able to generate accurate responses that meet the needs for a wide variety of questions and scenarios, using your knowledge content. However, some enterprise customers have regulatory requirements or more rigid brand guidelines, requiring certain questions to be answered verbatim with pre-approved responses. For these use cases, Amazon Lex QnAIntent provides exact match capabilities with both Amazon Kendra and Amazon OpenSearch Service knowledge bases. In this post, we walk through how to set up and configure an OpenSearch Service cluster as the knowledge base for your Amazon Lex QnAIntent. In addition, exact match works with Amazon Kendra, and you can create an index and add frequently asked questions to your index. As detailed in Part 1 of this series, you can then select Amazon Kendra as your knowledge base under Amazon Lex QnA Configurations, provide your Amazon Kendra index ID, and select the exact match to let your bot return the exact response returned by Amazon Kendra. Solution Overview In the following sections, we walk through the steps to create an OpenSearch Service domain, create an OpenSearch index and populate it with documents, and test the Amazon Lex bot with QnAIntent. Prerequisites Before creating an OpenSearch Service cluster, you need to create an Amazon Lex V2 bot. If you don’t have an Amazon Lex V2 bot available, complete the following steps: On the Amazon Lex console, choose Bots in the navigation pane. Choose Create bot. Select Start with an...
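As a minimal sketch of the OpenSearch side of this setup, the following creates a small FAQ index and adds one question-answer document with the opensearch-py client; the endpoint, credentials, and field names are placeholders.

```python
# Sketch: create an FAQ index and add one document with opensearch-py.
# Endpoint, credentials, and field names are placeholders.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "<your-domain-endpoint>", "port": 443}],  # placeholder
    http_auth=("<master-user>", "<master-password>"),          # placeholder
    use_ssl=True,
)

client.indices.create(index="faq", body={
    "mappings": {"properties": {
        "question": {"type": "text"},
        "answer": {"type": "text"},
    }}
})

client.index(index="faq", body={
    "question": "What is your return policy?",
    "answer": "Items can be returned within 30 days with proof of purchase.",
}, refresh=True)  # refresh so the document is searchable immediately
```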
Read More
Management

How to Keep Employees Engaged When You’re Short-Staffed

15Five Short-staffed organizations don’t have enough people to handle all the work that needs to get done. When that goes on for too long, employees start taking notice and employee engagement suffers. Watching important tasks go undone, having to work through lunch to hit deadlines, and feeling like promotion opportunities have gone the way of the dodo all take their toll on your workforce. No matter how much work you put into your hiring plan or how closely you watch attrition and turnover, your organization will likely be short-staffed at one point or another. When this is the case, you’ll have a larger part to play in maintaining employee engagement on top of an increase in requests and complaints from employees. That’s why HR professionals and managers need to work together to proactively develop employee engagement strategies that can see the organization through these periods. Without a plan in place, employee engagement will continue to steadily decrease. If you’re short-staffed long enough, morale will start to go down, too. Employee engagement software can help with this, but you still need the right plan. Here’s a comprehensive guide on how being short-staffed affects your business and what HR professionals can do to keep engagement from slipping. How does being short-staffed affect your business? Whether it’s due to a wave of layoffs, high turnover, or broader market conditions, short-staffed organizations quickly start to feel the pinch of their situation. After all, your employees might not necessarily know you’re short-staffed with any certainty, but they can quickly recognize the signs, such as: Too much work and not enough people to do it. Increased responsibilities without promises of advancement or promotion. Shelving of once-important projects after reprioritization. Drastic changes in the organization’s overall strategy. Increased monitoring and micromanagement from leaders. Being short-staffed doesn’t just...
Read More
Artificial Intelligence

How Krikey AI harnessed the power of Amazon SageMaker Ground Truth to accelerate generative AI development

AWS Machine Learning Blog This post is co-written with Jhanvi Shriram and Ketaki Shriram from Krikey. Krikey AI is revolutionizing the world of 3D animation with their innovative platform that allows anyone to generate high-quality 3D animations using just text or video inputs, without needing any prior animation experience. At the core of Krikey AI’s offering is their powerful foundation model trained to understand human motion and translate text descriptions into realistic 3D character animations. However, building such a sophisticated artificial intelligence (AI) model requires tremendous amounts of high-quality training data. Krikey AI faced the daunting task of labeling a vast amount of data input containing body motions with descriptive text labels. Manually labeling this dataset in-house was impractical and prohibitively expensive for the startup. But without these rich labels, their customers would be severely limited in the animations they could generate from text inputs. Amazon SageMaker Ground Truth is an AWS managed service that makes it straightforward and cost-effective to get high-quality labeled data for machine learning (ML) models by combining ML and expert human annotation. Krikey AI used SageMaker Ground Truth to expedite the development and implementation of their text-to-animation model. SageMaker Ground Truth provided and managed the labeling workforce, provided advanced data labeling workflows, and automated workflows for human-in-the-loop tasks, enabling Krikey AI to efficiently source precise labels tailored to their needs. SageMaker Ground Truth Implementation As a small startup working to democratize 3D animation through AI, Krikey AI faced the challenge of preparing a large labeled dataset to train their text-to-animation model. Manually labeling each data input with descriptive annotations proved incredibly time-consuming and impractical to do in-house at scale. With customer demand rapidly growing for their AI animation services, Krikey AI needed a way to quickly obtain high-quality labels across diverse and broad categories. Not...
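For orientation, this skeletal sketch shows what starting a text-labeling job looks like through the low-level boto3 API; every name, URI, and ARN is a placeholder, and this is not Krikey AI's actual configuration.

```python
# Skeletal sketch of starting a Ground Truth text-labeling job via boto3.
# Every name, URI, and ARN below is a placeholder; built-in task types also
# need the region-specific pre-annotation and consolidation Lambda ARNs.
import boto3

sm = boto3.client("sagemaker")

sm.create_labeling_job(
    LabelingJobName="animation-motion-labels",  # illustrative name
    LabelAttributeName="motion-description",
    InputConfig={"DataSource": {"S3DataSource": {
        "ManifestS3Uri": "s3://<bucket>/manifests/input.manifest"}}},
    OutputConfig={"S3OutputPath": "s3://<bucket>/labels/"},
    RoleArn="arn:aws:iam::<account>:role/<ground-truth-role>",
    HumanTaskConfig={
        "WorkteamArn": "arn:aws:sagemaker:<region>:<account>:workteam/private-crowd/<team>",
        "UiConfig": {"UiTemplateS3Uri": "s3://<bucket>/templates/task.html"},
        "PreHumanTaskLambdaArn": "arn:aws:lambda:<region>:<account>:function:<pre-annotation>",
        "TaskTitle": "Describe the body motion",
        "TaskDescription": "Write a short text description of the motion clip",
        "NumberOfHumanWorkersPerDataObject": 1,
        "TaskTimeLimitInSeconds": 600,
        "AnnotationConsolidationConfig": {
            "AnnotationConsolidationLambdaArn":
                "arn:aws:lambda:<region>:<account>:function:<consolidation>",
        },
    },
)
```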
Read More
Business News

Target taps Shopify to add sellers to its third-party marketplace

US Top News and Analysis Target is partnering with Shopify as it looks for ways to drive online traffic and get back to sales growth. Go to Source 24/06/2024 - 12:33 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Helping nonexperts build advanced generative AI models

MIT News - Artificial intelligence The impact of artificial intelligence will never be equitable if there’s only one company that builds and controls the models (not to mention the data that go into them). Unfortunately, today’s AI models are made up of billions of parameters that must be trained and tuned to maximize performance for each use case, putting the most powerful AI models out of reach for most people and companies. MosaicML started with a mission to make those models more accessible. The company, which counts Jonathan Frankle PhD ’23 and MIT Associate Professor Michael Carbin as co-founders, developed a platform that let users train, improve, and monitor open-source models using their own data. The company also built its own open-source models using graphical processing units (GPUs) from Nvidia. The approach made deep learning, a nascent field when MosaicML first began, accessible to far more organizations as excitement around generative AI and large language models (LLMs) exploded following the release of ChatGPT. It also made MosaicML a powerful complementary tool for data management companies that were also committed to helping organizations make use of their data without giving it to AI companies. Last year, that reasoning led to the acquisition of MosaicML by Databricks, a global data storage, analytics, and AI company that works with some of the largest organizations in the world. Since the acquisition, the combined companies have released one of the highest performing open-source, general-purpose LLMs yet built. Known as DBRX, this model has set new benchmarks in tasks like reading comprehension, general knowledge questions, and logic puzzles. Since then, DBRX has gained a reputation for being one of the fastest open-source LLMs available and has proven especially useful at large enterprises. More than the model, though, Frankle says DBRX is significant because it was built using Databricks tools, meaning...
Read More
Artificial Intelligence

Manage Amazon SageMaker JumpStart foundation model access with private hubs

AWS Machine Learning Blog Amazon SageMaker JumpStart is a machine learning (ML) hub offering pre-trained models and pre-built solutions. It provides access to hundreds of foundation models (FMs). A private hub is a feature in SageMaker JumpStart that allows an organization to share their models and notebooks so as to centralize model artifacts, facilitate discoverability, and increase reuse within the organization. With new models released daily, many enterprise admins want more control over the FMs that can be discovered and used by users within their organization (for example, only allowing models based on the PyTorch framework to be discovered). Now enterprise admins can effortlessly configure granular access control over the FMs that SageMaker JumpStart provides out of the box so that only allowed models can be accessed by users within their organizations. In this post, we discuss the steps required for an administrator to configure granular access control of models in SageMaker JumpStart using a private hub, as well as the steps for users to access and consume models from the private hub. Solution overview Starting today, with SageMaker JumpStart and its private hub feature, administrators can create repositories for a subset of models tailored to different teams, use cases, or license requirements using the Amazon SageMaker Python SDK. Admins can also set up multiple private hubs with different lists of models discoverable for different groups of users. Users are then only able to discover and use models within the private hubs they have access to through Amazon SageMaker Studio and the SDK. This level of control empowers enterprises to consume the latest in open weight generative artificial intelligence (AI) development while enforcing governance guardrails. Finally, admins can share access to private hubs across multiple AWS accounts, enabling collaborative model management while maintaining centralized control. SageMaker JumpStart uses AWS Resource Access...
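The post walks through the SageMaker Python SDK; as a rough sketch of the same first step, the low-level boto3 call to create a private hub looks like this (names are illustrative):

```python
# Rough sketch via the low-level boto3 API (the post itself uses the
# SageMaker Python SDK); hub name and description are illustrative.
import boto3

sm = boto3.client("sagemaker")

hub = sm.create_hub(
    HubName="ml-team-private-hub",
    HubDescription="Approved foundation models for the ML team",
)
print(hub["HubArn"])  # models are then added as hub content for users to discover
```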
Read More
Management

Subjective vs. Objective Performance Review Feedback: Which Works Better?

15Five Managing employees is more than a quick conversation and vague goal setting. To maximize individual and team performance, you need to learn how to deliver performance review feedback that helps people clearly understand their strengths, weaknesses, areas for improvement, and, most importantly, how they can succeed in their job roles.  There are two types of feedback in the workplace: objective and subjective. Each has a purpose for delivering effective employee feedback during a performance review, but the type of feedback you share will depend on the subject matter and the employee.  Objective feedback helps employees clearly see their performance in relation to metrics, while subjective feedback acknowledges soft skills and unique contributions that can’t be measured by numbers or metrics. For example, a data scientist is more likely to receive objective feedback, whereas a creative copywriter is more likely to hear subjective feedback. Understanding the differences: subjective vs. objective feedback Objective feedback Objective feedback is based on observable and measurable facts. It aims to be unbiased, impartial, and free of personal feelings and opinions while using quantitative statistics and data to form the basis of the feedback. The goal of objective feedback is to provide insights that are: Clear Specific Actionable Fair Measurable By ensuring that feedback is clear and specific, there’s no “gray area,” and employees should understand exactly what they’re doing well and areas that need improvement. Measurable objective feedback also promotes consistency and fairness while helping minimize biases because employees are all measured on the same criteria. However, objective feedback sometimes overlooks performance management’s personal or emotional aspects. It can also come across as too structured by neglecting creativity or interpersonal skills. Relying too heavily on data can also lead to missing situational context, which can impact employee happiness because individuals whose contributions aren’t easily measured...
Read More
Artificial Intelligence

eSentire delivers private and secure generative AI interactions to customers with Amazon SageMaker

AWS Machine Learning Blog eSentire is an industry-leading provider of Managed Detection & Response (MDR) services protecting users, data, and applications of over 2,000 organizations globally across more than 35 industries. These security services help their customers anticipate, withstand, and recover from sophisticated cyber threats, prevent disruption from malicious attacks, and improve their security posture. In 2023, eSentire was looking for ways to deliver differentiated customer experiences by continuing to improve the quality of its security investigations and customer communications. To accomplish this, eSentire built AI Investigator, a natural language query tool for their customers to access security platform data by using AWS generative artificial intelligence (AI) capabilities. In this post, we share how eSentire built AI Investigator using Amazon SageMaker to provide private and secure generative AI interactions to their customers. Benefits of AI Investigator Before AI Investigator, customers would engage eSentire’s Security Operation Center (SOC) analysts to understand and further investigate their asset data and associated threat cases. This involved manual effort for customers and eSentire analysts, forming questions and searching through data across multiple tools to formulate answers. eSentire’s AI Investigator enables users to complete complex queries using natural language by joining multiple sources of data from each customer’s own security telemetry and eSentire’s asset, vulnerability, and threat data mesh. This helps customers quickly and seamlessly explore their security data and accelerate internal investigations. Providing AI Investigator internally to the eSentire SOC workbench has also accelerated eSentire’s investigation process by improving the scale and efficacy of multi-telemetry investigations. The LLM models augment SOC investigations with knowledge from eSentire’s security experts and security data, enabling higher-quality investigation outcomes while also reducing time to investigate. Over 100 SOC analysts are now using AI Investigator models to analyze security data and provide rapid investigation conclusions. Solution overview eSentire customers expect...
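Conceptually, the query path boils down to invoking a SageMaker-hosted LLM endpoint. A minimal sketch, with a hypothetical endpoint name and payload schema:

```python
# Sketch: query an LLM hosted on a SageMaker real-time endpoint, as AI
# Investigator does conceptually. Endpoint name and payload schema are
# hypothetical assumptions.
import boto3
import json

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="ai-investigator-llm",  # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps({"inputs": "Which assets saw failed logins in the last 24 hours?"}),
)
print(json.loads(response["Body"].read()))
```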
Read More
Business News

SingPost appoints financial adviser for strategic review of Australia businesses

The Straits Times Business News It plans to achieve scale in Australia by exploring near-term partnerships that can contribute to growth. Go to Source 21/06/2024 - 03:14 / Twitter: @hoffeldtcom
Read More
Management

What is Continuous Performance Management?

15Five Your annual performance reviews aren’t cutting it anymore. Only 14% of your employees strongly agree that their performance reviews inspire them to improve, according to Gallup data. Traditional performance management has long been a top-down process, where employees would meet with their manager one to four times a year. This mostly involved managers talking “at them” about their strengths, weaknesses, and where they needed to improve before the next review. The result of these reviews? A one-sentence description of the employee’s performance: exceeds expectations, meets expectations, or does not meet expectations. Employees might also get an answer about whether they can expect a raise in the future or not. No wonder they don’t work. By contrast, continuous performance management fills the gaps between your annual or quarterly performance reviews while encouraging two-way conversations instead of monologues. More frequent check-ins—sometimes no more than a month apart—allow managers to guide employees along their growth plan and give feedback they can actually work with. Even better, some organizations using this sort of performance management share feedback as often as daily, giving employees a better understanding of their performance on day-to-day work. Continuous performance management also involves using the appropriate tools to track performance over time, saving the manual work that would usually be involved in this process. So why continuous performance management? And how should your organization implement it? Let’s dive in. 8 benefits of continuous performance management Continuous performance management comes with many benefits, especially when compared to its traditional counterpart. But don’t be fooled; employees aren’t the only ones who benefit. Your organization can see some significant improvements from implementing this process. Here are just a few of them. Better employee engagement: As noted previously, few employees find traditional performance management particularly inspiring. Conversely, getting the daily feedback that’s usually...
Read More
Artificial Intelligence

Eric Evans receives Department of Defense Medal for Distinguished Public Service

MIT News - Artificial intelligence On May 31, the U.S. Department of Defense's chief technology officer, Under Secretary of Defense for Research and Engineering Heidi Shyu, presented Eric Evans with the Department of Defense (DoD) Medal for Distinguished Public Service. This award is the highest honor given by the secretary of defense to private citizens for their significant service to the DoD. Evans was selected for his leadership as director of MIT Lincoln Laboratory and as vice chair and chair of the Defense Science Board (DSB). "I have gotten to know Eric well in the last three years, and I greatly appreciate his leadership, proactiveness, vision, intellect, and humbleness," Shyu stated in her remarks during the May 31 ceremony held at the laboratory. "Eric has a willingness and ability to confront and solve the most difficult problems for national security. His distinguished public service will continue to have invaluable impacts on the department and the nation for decades to come." During his tenure in both roles over more than a decade, Evans has cultivated relationships at the highest levels within the DoD. Since stepping into his role as laboratory director in 2006, he has advised eight defense secretaries and seven deputy defense secretaries. Under his leadership, the laboratory delivered advanced capabilities for national security in a broad range of technology areas, including cybersecurity, space surveillance, biodefense, artificial intelligence, laser communications, and quantum computing. Evans ensured that the laboratory addressed not only existing DoD priorities, but also emerging and future threats. He foresaw the need for and established three new technical divisions covering Cyber Security and Information Sciences, Homeland Protection, and Biotechnology and Human Systems. When the Covid-19 pandemic struck, he quickly pivoted the laboratory to aid the national response. To ensure U.S. competitiveness in an ever-evolving defense landscape, he advocated for the modernization of major...
Read More
Artificial Intelligence

Imperva optimizes SQL generation from natural language using Amazon Bedrock

AWS Machine Learning Blog This is a guest post co-written with Ori Nakar from Imperva. Imperva Cloud WAF protects hundreds of thousands of websites against cyber threats and blocks billions of security events every day. Counters and insights based on security events are calculated daily and used by users from multiple departments. Millions of counters are added daily, together with 20 million insights updated daily to spot threat patterns. Our goal was to improve the user experience of an existing application used to explore the counters and insights data. The data is stored in a data lake and retrieved by SQL using Amazon Athena. As part of our solution, we replaced multiple search fields with a single free text field. We used a large language model (LLM) with query examples to make the search work using the language used by Imperva internal users (business analysts). The following figure shows a search query that was translated to SQL and run. The results were later formatted as a chart by the application. We have many types of insights—global, industry, and customer level insights used by multiple departments such as marketing, support, and research. Data was made available to our users through a simplified user experience powered by an LLM. Figure 1: Insights search by natural language Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading artificial intelligence (AI) companies such as AI21 Labs, Anthropic, Cohere, Meta, Mistral, Stability AI, and Amazon within a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. Amazon Bedrock Studio is a new single sign-on (SSO)-enabled web interface that provides a way for developers across an organization to experiment with LLMs and other FMs, collaborate...
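A condensed sketch of the flow described above: a Bedrock-hosted model drafts SQL from free text, and Amazon Athena runs it. The table schema, database, and bucket are invented for illustration, and production code would validate the generated SQL before executing it.

```python
# Sketch of the text-to-SQL flow: Bedrock drafts a query from free text,
# Athena executes it. Table, database, and bucket names are invented.
import boto3
import json

bedrock = boto3.client("bedrock-runtime")
athena = boto3.client("athena")

question = "Top 5 attacked industries last week"
prompt = (
    "Translate the question into SQL for an Athena table "
    "security_events(event_time timestamp, industry string, event_count bigint). "
    "Return only SQL.\n\nQuestion: " + question
)

sql = json.loads(bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 300,
        "messages": [{"role": "user", "content": prompt}],
    }),
)["body"].read())["content"][0]["text"]

execution = athena.start_query_execution(
    QueryString=sql,
    QueryExecutionContext={"Database": "security_lake"},               # placeholder
    ResultConfiguration={"OutputLocation": "s3://<results-bucket>/"},  # placeholder
)
print(execution["QueryExecutionId"])
```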
Read More
Artificial Intelligence

Create natural conversations with Amazon Lex QnAIntent and Knowledge Bases for Amazon Bedrock

AWS Machine Learning Blog Customer service organizations today face an immense opportunity. As customer expectations grow, brands have a chance to creatively apply new innovations to transform the customer experience. Although meeting rising customer demands poses challenges, the latest breakthroughs in conversational artificial intelligence (AI) empower companies to meet these expectations. Customers today expect timely responses to their questions that are helpful, accurate, and tailored to their needs. The new QnAIntent, powered by Amazon Bedrock, can meet these expectations by understanding questions posed in natural language and responding conversationally in real time using your own authorized knowledge sources. Our Retrieval Augmented Generation (RAG) approach allows Amazon Lex to harness both the breadth of knowledge available in repositories and the fluency of large language models (LLMs). Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. In this post, we show you how to add generative AI question answering capabilities to your bots. This can be done using your own curated knowledge sources, and without writing a single line of code. Read on to discover how QnAIntent can transform your customer experience. Solution overview Implementing the solution consists of the following high-level steps: Create an Amazon Lex bot. Create an Amazon Simple Storage Service (Amazon S3) bucket and upload a PDF file that contains the information used to answer questions. Create a knowledge base that will split your data into chunks and generate embeddings using the Amazon Titan Embeddings model. As part of this process, Knowledge Bases for Amazon Bedrock automatically creates an Amazon OpenSearch...
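QnAIntent wires this up without code, but for intuition, the underlying knowledge base lookup resembles this direct call to the Knowledge Bases for Amazon Bedrock runtime API; the knowledge base ID is a placeholder.

```python
# Sketch: direct knowledge base query via the Bedrock agent runtime, to
# show what QnAIntent does under the hood. The knowledge base ID is a
# placeholder.
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime")

answer = agent_runtime.retrieve_and_generate(
    input={"text": "How do I reset my device?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "<your-kb-id>",  # placeholder
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)
print(answer["output"]["text"])
```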
Read More
Artificial Intelligence

Evaluate the reliability of Retrieval Augmented Generation applications using Amazon Bedrock

AWS Machine Learning Blog Retrieval Augmented Generation (RAG) is a technique that enhances large language models (LLMs) by incorporating external knowledge sources. It allows LLMs to reference authoritative knowledge bases or internal repositories before generating responses, producing output tailored to specific domains or contexts while providing relevance, accuracy, and efficiency. RAG achieves this enhancement without retraining the model, making it a cost-effective solution for improving LLM performance across various applications. The following diagram illustrates the main steps in a RAG system. Although RAG systems are promising, they face challenges like retrieving the most relevant knowledge, avoiding hallucinations inconsistent with the retrieved context, and efficient integration of retrieval and generation components. In addition, RAG architecture can lead to potential issues like retrieval collapse, where the retrieval component learns to retrieve the same documents regardless of the input. A similar problem occurs for some tasks like open-domain question answering—there are often multiple valid answers available in the training data, therefore the LLM could choose to generate an answer from its training data. Another challenge is the need for an effective mechanism to handle cases where no useful information can be retrieved for a given input. Current research aims to improve these aspects for more reliable and capable knowledge-grounded generation. Given these challenges faced by RAG systems, monitoring and evaluating generative artificial intelligence (AI) applications powered by RAG is essential. Moreover, tracking and analyzing the performance of RAG-based applications is crucial, because it helps assess their effectiveness and reliability when deployed in real-world scenarios. By evaluating RAG applications, you can understand how well the models are using and integrating external knowledge into their responses, how accurately they can retrieve relevant information, and how coherent the generated outputs are. Additionally, evaluation can identify potential biases, hallucinations, inconsistencies, or factual errors that may arise from...
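As one minimal illustration of the idea (not the post's actual evaluation harness), this sketch asks a Bedrock model to judge whether a generated answer is faithful to its retrieved context:

```python
# Sketch of an LLM-as-judge faithfulness check, one simple way to evaluate
# RAG output with Bedrock. The rubric is an assumption, not the post's
# actual evaluation harness.
import boto3

bedrock = boto3.client("bedrock-runtime")

context = "Amazon Translate supports 75 languages."
answer = "Amazon Translate supports 200 languages."

prompt = (
    "Reply with a single word, 'faithful' or 'unfaithful', judging whether "
    "the answer is fully supported by the context.\n\n"
    f"Context: {context}\nAnswer: {answer}"
)

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={"maxTokens": 10, "temperature": 0},
)
print(response["output"]["message"]["content"][0]["text"])  # expected: unfaithful
```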
Read More
Artificial Intelligence

Connect to Amazon services using AWS PrivateLink in Amazon SageMaker

AWS Machine Learning Blog AWS customers that implement secure development environments often have to restrict outbound and inbound internet traffic. This becomes increasingly important with artificial intelligence (AI) development because of the data assets that need to be protected. Transmitting data across the internet is not secure enough for highly sensitive data. Therefore, accessing AWS services without leaving the AWS network can be a secure workflow. One of the ways you can secure AI development is by creating Amazon SageMaker instances within a virtual private cloud (VPC) with direct internet access disabled. This isolates the instance from the internet and makes API calls to other AWS services not possible. This presents a challenge for developers that are building architectures for production in which many AWS services need to function together. In this post, we present a solution for configuring SageMaker notebook instances to connect to Amazon Bedrock and other AWS services with the use of AWS PrivateLink and Amazon Elastic Compute Cloud (Amazon EC2) security groups. Solution overview The following example architecture shows a SageMaker instance connecting to various services. The SageMaker instance is isolated from the internet but is still able to access AWS services through PrivateLink. One will notice that the connection to Amazon S3 is through a Gateway VPC endpoint. You can learn more about Gateway VPC endpoints here. In the following sections, we show how to configure this on the AWS Management Console. Create security groups for outbound and inbound endpoint access First, you have to create the security groups that will be attached to the VPC endpoints and the SageMaker instance. You create the security groups before creating a SageMaker instance because after the instance has been created, the security group configuration can’t be changed. You create two groups, one for outbound and another for...
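A minimal sketch of those two steps in code, creating an endpoint security group and an interface endpoint for the Bedrock runtime; all resource IDs and the region are placeholders.

```python
# Sketch: security group plus an interface VPC endpoint for the Bedrock
# runtime. All resource IDs and the region are placeholders.
import boto3

ec2 = boto3.client("ec2")

endpoint_sg = ec2.create_security_group(
    GroupName="bedrock-endpoint-sg",
    Description="Inbound HTTPS from the SageMaker notebook",
    VpcId="vpc-0123456789abcdef0",  # placeholder
)["GroupId"]

ec2.authorize_security_group_ingress(
    GroupId=endpoint_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        # Reference the notebook's security group rather than a CIDR range.
        "UserIdGroupPairs": [{"GroupId": "sg-0notebook000000000"}],  # placeholder
    }],
)

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",  # placeholder
    ServiceName="com.amazonaws.us-east-1.bedrock-runtime",
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder
    SecurityGroupIds=[endpoint_sg],
    PrivateDnsEnabled=True,  # so the default Bedrock hostname resolves privately
)
```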
Read More
Covid-19

Economist suggests storing grain to prepare for next global emergency

Coronavirus | The Guardian Isabella Weber, who linked corporate profits to inflation, shares how to prevent food shortages – and price gouging. Tell us: how has inflation changed the way you grocery shop? Isabella Weber, the economist who ignited controversy with a bold proposal to implement strategic price controls at the peak of inflation and identified corporate profits as a driver of high prices, has proposed a new measure that could prevent food shortages and price gouging in the wake of another disruption of the global supply chains. Weber’s new paper, published on Thursday, looks at how grain prices spiked in 2022 as Covid snagged supply chains and Russia invaded Ukraine. The price hikes helped to drive record profits for corporations while pushing inflation higher and increasing global hunger. In the paper, Weber and colleagues call for the creation of buffer stocks of grain that could be released during shortages or emergencies to ease price pressures. Continue reading... Go to Source 20/06/2024 - 12:13 /Tom Perkins Twitter: @hoffeldtcom
Read More
Business News

New Zealand exits recession but economy remains weak amid steep borrowing costs

The Straits Times Business News Strong population growth has masked how weak its economy has been, said an economist. Go to Source 20/06/2024 - 03:29 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Maximize your Amazon Translate architecture using strategic caching layers

AWS Machine Learning Blog Amazon Translate is a neural machine translation service that delivers fast, high quality, affordable, and customizable language translation. Amazon Translate supports 75 languages and 5,550 language pairs. For the latest list, see the Amazon Translate Developer Guide. A key benefit of Amazon Translate is its speed and scalability. It can translate a large body of content or text passages in batch mode or translate content in real-time through API calls. This helps enterprises get fast and accurate translations across massive volumes of content including product listings, support articles, marketing collateral, and technical documentation. When content sets have phrases or sentences that are often repeated, you can optimize cost by implementing a write-through caching layer. For example, product descriptions for items contain many recurring terms and specifications. This is where implementing a translation cache can significantly reduce costs. The caching layer stores source content and its translated text. Then, when the same source content needs to be translated again, the cached translation is simply reused instead of paying for a brand-new translation. In this post, we explain how setting up a cache for frequently accessed translations can benefit organizations that need scalable, multi-language translation across large volumes of content. You’ll learn how to build a simple caching mechanism for Amazon Translate to accelerate turnaround times. Solution overview The caching solution uses Amazon DynamoDB to store translations from Amazon Translate. DynamoDB functions as the cache layer. When a translation is required, the application code first checks the cache—the DynamoDB table—to see if the translation is already cached. If a cache hit occurs, the stored translation is read from DynamoDB with no need to call Amazon Translate again. If the translation isn’t cached in DynamoDB (a cache miss), then the Amazon Translate API will be called to perform the...
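A minimal write-through cache in the shape the post describes, assuming a DynamoDB table keyed on a hash of the source text and language pair; the table name and key schema are illustrative.

```python
# Minimal write-through translation cache, assuming a DynamoDB table keyed
# on a hash of (source text, language pair). Table name and key schema are
# illustrative.
import boto3
import hashlib

translate = boto3.client("translate")
table = boto3.resource("dynamodb").Table("TranslationCache")  # placeholder table

def cached_translate(text: str, source: str, target: str) -> str:
    key = hashlib.sha256(f"{source}|{target}|{text}".encode()).hexdigest()
    hit = table.get_item(Key={"cache_key": key}).get("Item")
    if hit:  # cache hit: reuse the stored translation, no Translate call
        return hit["translation"]
    result = translate.translate_text(
        Text=text, SourceLanguageCode=source, TargetLanguageCode=target
    )["TranslatedText"]
    table.put_item(Item={"cache_key": key, "translation": result})  # write-through
    return result

print(cached_translate("Hello, world", "en", "es"))
```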
Read More
Covid-19

Covid immune response study could explain why some escape infection

Coronavirus | The Guardian Subjects who kept virus at bay showed rapid response in nasal immune cells and more activity in early-alert gene. Scientists have discovered differences in the immune response that could explain why some people seem to reliably escape Covid infection. The study, in which healthy adults were intentionally given a small nasal dose of Covid virus, suggested that specialised immune cells in the nose could see off the virus at the earliest stage before full infection takes hold. Those who did not succumb to infection also had high levels of activity in a gene that is thought to help flag the presence of viruses to the immune system. Continue reading... Go to Source 19/06/2024 - 18:23 /Hannah Devlin Science correspondent Twitter: @hoffeldtcom
Read More
Covid-19

Washington Post publisher alleged to have advised Boris Johnson to ‘clean up’ phone during Partygate Covid scandal

Coronavirus | The Guardian Sources’ claim suggests advice by Will Lewis, an informal adviser to the then prime minister, contradicted instructions to staff. Will Lewis, the Washington Post publisher, advised Boris Johnson and senior officials at 10 Downing Street to “clean up” their phones in the midst of a Covid-era political scandal, according to claims by three people familiar with the operations inside No 10 at the time. The advice is alleged to have been given in December 2021 and January 2022 as top officials were under scrutiny for potential violations of pandemic restrictions, in what became known as “Partygate”. Continue reading... Go to Source 19/06/2024 - 18:23 /Anna Isaac in London and Stephanie Kirchgaessner in Washington Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Deploy a Slack gateway for Amazon Bedrock

AWS Machine Learning Blog In today’s fast-paced digital world, streamlining workflows and boosting productivity are paramount. That’s why we’re thrilled to share an exciting integration that will take your team’s collaboration to new heights. Get ready to unlock the power of generative artificial intelligence (AI) and bring it directly into your Slack workspace. Imagine the possibilities: Quick and efficient brainstorming sessions, real-time ideation, and even drafting documents or code snippets—all powered by the latest advancements in AI. Say goodbye to context switching and hello to a streamlined, collaborative experience that will supercharge your team’s productivity. Whether you’re leading a dynamic team, working on complex projects, or simply looking to enhance your Slack experience, this integration is a game-changer. In this post, we show you how to unlock new levels of efficiency and creativity by bringing the power of generative AI directly into your Slack workspace using Amazon Bedrock. Solution overview Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. In the following sections, we guide you through the process of setting up a Slack integration for Amazon Bedrock. We show how to create a Slack application, configure the necessary permissions, and deploy the required resources using AWS CloudFormation. The following diagram illustrates the solution architecture. The workflow consists of the following steps: The user communicates with the Slack application. The Slack application sends the event to Amazon API Gateway, which is used in the event subscription. API Gateway forwards the event to an AWS Lambda function. The Lambda function invokes Amazon Bedrock with the request, then responds...
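A compressed sketch of the Lambda function at the center of this architecture: it answers Slack's one-time URL-verification handshake, forwards the message text to Amazon Bedrock, and posts the reply back through Slack's chat.postMessage Web API. The bot token variable and model choice are assumptions.

```python
# Compressed Lambda sketch: handle Slack's URL-verification handshake,
# query Bedrock, post the reply via chat.postMessage. Token env var and
# model choice are assumptions.
import boto3
import json
import os
import urllib.request

bedrock = boto3.client("bedrock-runtime")

def handler(event, context):
    body = json.loads(event["body"])
    if body.get("type") == "url_verification":  # Slack's one-time handshake
        return {"statusCode": 200, "body": body["challenge"]}

    slack_event = body["event"]
    reply = json.loads(bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{"role": "user", "content": slack_event["text"]}],
        }),
    )["body"].read())["content"][0]["text"]

    req = urllib.request.Request(
        "https://slack.com/api/chat.postMessage",
        data=json.dumps({"channel": slack_event["channel"], "text": reply}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['SLACK_BOT_TOKEN']}",  # placeholder
        },
    )
    urllib.request.urlopen(req)
    return {"statusCode": 200, "body": ""}
```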
Read More
Business News

Bullish outlook on economic growth in Cambodia spurs FDI from S’pore companies

The Straits Times Business News Cambodia-Singapore Business Forum told green energy, healthcare and agri-food among promising sectors. Go to Source 19/06/2024 - 15:08 / Twitter: @hoffeldtcom
Read More
Business News

Bank of England set to hold interest rates despite inflation hitting 2% target

US Top News and Analysis Services inflation and wage growth both remain significantly above the level desired by monetary policymakers. Go to Source 19/06/2024 - 12:13 / Twitter: @hoffeldtcom
Read More
Business News

Singapore’s growth momentum is challenged by weak global outlook, says report

The Straits Times Business News The country’s economy is set to expand by 2 per cent in 2024, said Oxford Economics. Go to Source 19/06/2024 - 09:19 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

MIT-Takeda Program wraps up with 16 publications, a patent, and nearly two dozen projects completed

MIT News - Artificial intelligence When the Takeda Pharmaceutical Co. and the MIT School of Engineering launched their collaboration focused on artificial intelligence in health care and drug development in February 2020, society was on the cusp of a globe-altering pandemic and AI was far from the buzzword it is today. As the program concludes, the world looks very different. AI has become a transformative technology across industries including health care and pharmaceuticals, while the pandemic has altered the way many businesses approach health care and changed how they develop and sell medicines. For both MIT and Takeda, the program has been a game-changer. When it launched, the collaborators hoped the program would help solve tangible, real-world problems. By its end, the program has yielded a catalog of new research papers, discoveries, and lessons learned, including a patent for a system that could improve the manufacturing of small-molecule medicines. Ultimately, the program allowed both entities to create a foundation for a world where AI and machine learning play a pivotal role in medicine, leveraging Takeda’s expertise in biopharmaceuticals and the MIT researchers’ deep understanding of AI and machine learning. “The MIT-Takeda Program has been tremendously impactful and is a shining example of what can be accomplished when experts in industry and academia work together to develop solutions,” says Anantha Chandrakasan, MIT’s chief innovation and strategy officer, dean of the School of Engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “In addition to resulting in research that has advanced how we use AI and machine learning in health care, the program has opened up new opportunities for MIT faculty and students through fellowships, funding, and networking.” What made the program unique was that it was centered around several concrete challenges spanning drug development that Takeda needed help addressing. MIT faculty had the opportunity to select the projects based on...
Read More
Artificial Intelligence

Improving air quality with generative AI

AWS Machine Learning Blog As of this writing, Ghana ranks as the 27th most polluted country in the world, facing significant challenges due to air pollution. Recognizing the crucial role of air quality monitoring, many African countries, including Ghana, are adopting low-cost air quality sensors. The Sensor Evaluation and Training Centre for West Africa (Afri-SET) aims to use technology to address these challenges. Afri-SET engages with air quality sensor manufacturers, providing crucial evaluations tailored to the African context. Through evaluations of sensors and informed decision-making support, Afri-SET empowers governments and civil society for effective air quality management. On December 6th-8th 2023, the non-profit organization Tech to the Rescue, in collaboration with AWS, organized the world’s largest Air Quality Hackathon – aimed at tackling one of the world’s most pressing health and environmental challenges, air pollution. More than 170 tech teams used the latest cloud, machine learning and artificial intelligence technologies to build 33 solutions. The solution addressed in this blog solves Afri-SET’s challenge and was ranked among the top 3 winning solutions. This post presents a solution that uses generative artificial intelligence (AI) to standardize air quality data from low-cost sensors in Africa, specifically addressing the air quality data integration problem of low-cost sensors. The solution harnesses the capabilities of generative AI, specifically Large Language Models (LLMs), to address the challenges posed by diverse sensor data and automatically generate Python functions based on various data formats. The fundamental objective is to build a manufacturer-agnostic database, leveraging generative AI’s ability to standardize sensor outputs, synchronize data, and facilitate precise corrections. Current challenges Afri-SET currently merges data from numerous sources, employing a bespoke approach for each of the sensor manufacturers. This manual synchronization process, hindered by disparate data formats, is resource-intensive, limiting the potential for widespread data orchestration. The platform, although...
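A minimal sketch of the core idea: hand the model one raw sensor payload and ask it to write a normalization function targeting a common schema. The payload, schema, and model choice are invented for illustration.

```python
# Sketch: ask a Bedrock model to write a Python function that maps a raw
# sensor payload onto a common schema. Payload and schema are invented.
import boto3

bedrock = boto3.client("bedrock-runtime")

raw_sample = '{"pm_2_5": 38.1, "temp_c": 29, "ts": "2024-06-18T14:00:00Z"}'
prompt = (
    "Write a Python function normalize(payload: dict) -> dict that maps this "
    "sensor payload onto the schema {'pm25': float, 'temperature': float, "
    "'timestamp': str}. Return only code.\n\nSample payload: " + raw_sample
)

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={"maxTokens": 400},
)
print(response["output"]["message"]["content"][0]["text"])  # generated normalize()
```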
Read More
Artificial Intelligence

Use zero-shot large language models on Amazon Bedrock for custom named entity recognition

AWS Machine Learning Blog Named entity recognition (NER) is the process of extracting information of interest, called entities, from structured or unstructured text. Manually identifying all mentions of specific types of information in documents is extremely time-consuming and labor-intensive. Some examples include extracting players and positions in an NFL game summary, products mentioned in an AWS keynote transcript, or key names from an article on a favorite tech company. This process must be repeated for every new document and entity type, making it impractical for processing large volumes of documents at scale. With more access to vast amounts of reports, books, articles, journals, and research papers than ever before, swiftly identifying desired information in large bodies of text is becoming invaluable. Traditional neural network models like RNNs and LSTMs and more modern transformer-based models like BERT for NER require costly fine-tuning on labeled data for every custom entity type. This makes adopting and scaling these approaches burdensome for many applications. However, new capabilities of large language models (LLMs) enable high-accuracy NER across diverse entity types without the need for entity-specific fine-tuning. By using the model’s broad linguistic understanding, you can perform NER on the fly for any specified entity type. This capability is called zero-shot NER and enables the rapid deployment of NER across documents and many other use cases. This ability to extract specified entity mentions without costly tuning unlocks scalable entity extraction and downstream document understanding. In this post, we cover the end-to-end process of using LLMs on Amazon Bedrock for the NER use case. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading artificial intelligence (AI) companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set...
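A minimal zero-shot NER sketch in the spirit of the post: the entity type is supplied at request time, with no fine-tuning. The prompt and output format are illustrative.

```python
# Sketch of zero-shot NER: the entity type is a runtime parameter, with no
# entity-specific fine-tuning. Prompt and output format are illustrative.
import boto3

bedrock = boto3.client("bedrock-runtime")

text = "Patrick Mahomes threw three touchdowns as quarterback for the Chiefs."
entity_type = "NFL player"

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{
        "role": "user",
        "content": [{"text": (
            f"List every {entity_type} mentioned in the text as a JSON array "
            f"of strings. Text: {text}"
        )}],
    }],
    inferenceConfig={"maxTokens": 200, "temperature": 0},
)
print(response["output"]["message"]["content"][0]["text"])  # e.g. ["Patrick Mahomes"]
```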
Read More
Artificial Intelligence

Streamline financial workflows with generative AI for email automation

AWS Machine Learning Blog Many companies across all industries still rely on laborious, error-prone, manual procedures to handle documents, especially those that are sent to them by email. Despite the availability of technology that can digitize and automate document workflows through intelligent automation, businesses still mostly rely on labor-intensive manual document processing. This represents a major opportunity for businesses to optimize this workflow, save time and money, and improve accuracy by modernizing antiquated manual document handling with intelligent document processing (IDP) on AWS. To extract key information from high volumes of documents from emails and various sources, companies need comprehensive automation capable of ingesting emails, file uploads, and system integrations for seamless processing and analysis. Intelligent automation presents a chance to revolutionize document workflows across sectors through digitization and process optimization. This post explains a generative artificial intelligence (AI) technique to extract insights from business emails and attachments. It examines how AI can optimize financial workflow processes by automatically summarizing documents, extracting data, and categorizing information from email attachments. This enables companies to serve more clients, direct employees to higher-value tasks, speed up processes, lower expenses, enhance data accuracy, and increase efficiency. Challenges with manual data extraction The majority of business sectors are currently having difficulties with manual document processing, and are reading emails and their attachments without the use of an automated system. These procedures cost money, take a long time, and are prone to mistakes. Manual procedures struggle to keep up with the number of documents. Finding relevant information that is necessary for business decisions is difficult. Therefore, there is a demand for shorter decision cycles and speedier document processing. The aim of this post is to help companies that process documents manually to speed up the delivery of data derived from those documents for use in business...
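A minimal sketch of the ingestion step: parse an inbound email with Python's standard library, then ask a Bedrock model to pull out the fields a finance workflow needs. The file name and field list are assumptions.

```python
# Sketch of email ingestion plus field extraction. The .eml file name and
# the field list are assumptions for illustration.
import boto3
import email
import json
from email import policy

raw = open("invoice_email.eml", "rb").read()  # e.g., an email exported from S3/SES
msg = email.message_from_bytes(raw, policy=policy.default)
body_text = msg.get_body(preferencelist=("plain",)).get_content()  # assumes a plain-text part

bedrock = boto3.client("bedrock-runtime")
response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": [{"text": (
        "Extract vendor, invoice_number, amount_due, and due_date from this "
        "email as JSON:\n\n" + body_text
    )}]}],
    inferenceConfig={"maxTokens": 300},
)
print(response["output"]["message"]["content"][0]["text"])
```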
Read More
Covid-19

Global failure to prepare for pandemics ‘gambling with children’s future’

Coronavirus | The Guardian Lessons from Ebola and Covid were not learned, say Helen Clark and Ellen Johnson Sirleaf as they launch report calling for urgent action. World leaders are “gambling with their children’s and grandchildren’s health and wellbeing” by failing to prepare for a future pandemic, a new report warns. Amid surging cases of H5N1 bird flu in mammals, and an mpox outbreak in central Africa, two senior stateswomen have said the lack of preparation had left the world vulnerable to “devastation”. Continue reading... Go to Source 18/06/2024 - 15:43 /Kat Lay, Global health correspondent Twitter: @hoffeldtcom
Read More
Business News

Singtel-KKR consortium to invest $1.75 billion in data centre provider ST Telemedia GDC

The Straits Times Business News The company's regional data centre business, Nxera, will also be partnering Malaysia telco TM. Go to Source 18/06/2024 - 13:39 / Twitter: @hoffeldtcom
Read More
Management

In Her Own Words: Tansy McNulty aims to end police violence

Human Resources News - Human Resources News Headlines | Bizjournals.com Career decisions often mark time as well as place — current events as well as personal ones. Tansy McNulty’s experience in supply chain management strengthened her role as an advocate, but the deaths of three men she didn’t know inspired her move from corporate to community. In the summer of 2016, while on a babymoon with my husband, we disconnected from the world. We returned to news of the murders of Alton Sterling, Philando Castile, and Ronnie Shumpert. It shook my world as I was carrying… Go to Source 18/06/2024 - 12:23 /Ellen Sherberg Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Researchers leverage shadows to model 3D scenes, including objects blocked from view

MIT News - Artificial intelligence Imagine driving through a tunnel in an autonomous vehicle, but unbeknownst to you, a crash has stopped traffic up ahead. Normally, you’d need to rely on the car in front of you to know you should start braking. But what if your vehicle could see around the car ahead and apply the brakes even sooner? Researchers from MIT and Meta have developed a computer vision technique that could someday enable an autonomous vehicle to do just that. They have introduced a method that creates physically accurate, 3D models of an entire scene, including areas blocked from view, using images from a single camera position. Their technique uses shadows to determine what lies in obstructed portions of the scene. They call their approach PlatoNeRF, based on Plato’s allegory of the cave, a passage from the Greek philosopher’s “Republic” in which prisoners chained in a cave discern the reality of the outside world based on shadows cast on the cave wall. By combining lidar (light detection and ranging) technology with machine learning, PlatoNeRF can generate more accurate reconstructions of 3D geometry than some existing AI techniques. Additionally, PlatoNeRF is better at smoothly reconstructing scenes where shadows are hard to see, such as those with high ambient light or dark backgrounds. In addition to improving the safety of autonomous vehicles, PlatoNeRF could make AR/VR headsets more efficient by enabling a user to model the geometry of a room without the need to walk around taking measurements. It could also help warehouse robots find items in cluttered environments faster. “Our key idea was taking these two things that have been done in different disciplines before and pulling them together — multibounce lidar and machine learning. It turns out that when you bring these two together, that is when you find a lot of new opportunities to...
Read More
Business News

Low investment blocking UK growth, says think tank

BBC News Both the Conservatives and Labour plan to reduce government investment over the next parliamentary term. Go to Source 18/06/2024 - 06:23 / Twitter: @hoffeldtcom
Read More
Business News

Singapore’s key exports dip 0.1% in May, mildest decline in 20 months

The Straits Times Business News Electronic exports posted the first double-digit growth in 22 months. Go to Source 18/06/2024 - 03:28 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Understanding the visual knowledge of language models

MIT News - Artificial intelligence You’ve likely heard that a picture is worth a thousand words, but can a large language model (LLM) get the picture if it’s never seen images before? As it turns out, language models that are trained purely on text have a solid understanding of the visual world. They can write image-rendering code to generate complex scenes with intriguing objects and compositions — and even when that knowledge is not used properly, LLMs can refine their images. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) observed this when prompting language models to self-correct their code for different images, where the systems improved on their simple clipart drawings with each query. The visual knowledge of these language models is gained from how concepts like shapes and colors are described across the internet, whether in language or code. When given a direction like “draw a parrot in the jungle,” users jog the LLM to consider what it’s read in descriptions before. To assess how much visual knowledge LLMs have, the CSAIL team constructed a “vision checkup” for LLMs: using their “Visual Aptitude Dataset,” they tested the models’ abilities to draw, recognize, and self-correct these concepts. Collecting each final draft of these illustrations, the researchers trained a computer vision system that identifies the content of real photos. “We essentially train a vision system without directly using any visual data,” says Tamar Rott Shaham, co-lead author of the study and an MIT electrical engineering and computer science (EECS) postdoc at CSAIL. “Our team queried language models to write image-rendering codes to generate data for us and then trained the vision system to evaluate natural images. We were inspired by the question of how visual concepts are represented through other mediums, like text. To express their visual knowledge, LLMs can use code as...
Read More
Artificial Intelligence

How Twilio used Amazon SageMaker MLOps pipelines with PrestoDB to enable frequent model retraining and optimized batch transform

AWS Machine Learning Blog This post is co-written with Shamik Ray, Srivyshnav K S, Jagmohan Dhiman and Soumya Kundu from Twilio. Today’s leading companies trust Twilio’s Customer Engagement Platform (CEP) to build direct, personalized relationships with their customers everywhere in the world. Twilio enables companies to use communications and data to add intelligence and security to every step of the customer journey, from sales and marketing to growth and customer service, and many more engagement use cases in a flexible, programmatic way. Across 180 countries, millions of developers and hundreds of thousands of businesses use Twilio to create magical experiences for their customers. As one of the largest AWS customers, Twilio uses data and artificial intelligence and machine learning (AI/ML) services to run its daily workloads. This post outlines the steps AWS and Twilio took to migrate Twilio’s existing machine learning operations (MLOps), including model training and batch inference, to Amazon SageMaker. ML models don’t operate in isolation. They must integrate into existing production systems and infrastructure to deliver value. This necessitates considering the entire ML lifecycle during design and development. With the right processes and tools, MLOps enables organizations to reliably and efficiently adopt ML across their teams for their specific use cases. SageMaker includes a suite of features for MLOps that includes Amazon SageMaker Pipelines and Amazon SageMaker Model Registry. Pipelines allow for straightforward creation and management of ML workflows while also offering storage and reuse capabilities for workflow steps. The model registry simplifies model deployment by centralizing model tracking. This post focuses on how to achieve flexibility in using your data source of choice and integrate it seamlessly with Amazon SageMaker Processing jobs. With SageMaker Processing jobs, you can use a simplified, managed experience to run data preprocessing or postprocessing and model evaluation...
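The excerpt ends before the implementation, so the sketch below only illustrates the integration pattern the post names: a SageMaker Pipelines processing step whose script reads training data directly from PrestoDB. The role, container image, host, table, and preprocess.py are placeholders, not Twilio's actual configuration.

```python
# Sketch of a SageMaker pipeline whose processing step pulls data from PrestoDB.
# All identifiers below (role, image, host, table) are illustrative placeholders.
from sagemaker.processing import ScriptProcessor
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import ProcessingStep

role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder
image_uri = "<account>.dkr.ecr.us-east-1.amazonaws.com/presto-preprocess:latest"

processor = ScriptProcessor(
    role=role,
    image_uri=image_uri,        # a container with prestodb + pandas installed
    command=["python3"],
    instance_type="ml.m5.xlarge",
    instance_count=1,
)

step = ProcessingStep(
    name="FetchFromPrestoDB",
    processor=processor,
    code="preprocess.py",       # the script sketched in the comments below
)

pipeline = Pipeline(name="RetrainAndBatchTransform", steps=[step])

# --- preprocess.py (runs inside the processing container) ---
# import prestodb, pandas as pd
# conn = prestodb.dbapi.connect(host="presto.internal", port=8080,
#                               user="sagemaker", catalog="hive", schema="ml")
# cur = conn.cursor()
# cur.execute("SELECT * FROM training_features")
# df = pd.DataFrame(cur.fetchall(), columns=[d[0] for d in cur.description])
# df.to_csv("/opt/ml/processing/output/train.csv", index=False)
```

Keeping the query inside the processing script is what makes the data source swappable: only preprocess.py changes if the warehouse does.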
Read More
Business News

How immigrants are helping keep job growth hot while inflation cools

US Top News and Analysis Recent spikes in immigration at the southern border and elsewhere in the U.S. have helped to keep the labor pool full, even as job gains kept apace. Go to Source 17/06/2024 - 15:18 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

A smarter way to streamline drug discovery

MIT News - Artificial intelligence The use of AI to streamline drug discovery is exploding. Researchers are deploying machine-learning models to help them identify molecules, among billions of options, that might have the properties they are seeking to develop new medicines. But there are so many variables to consider — from the price of materials to the risk of something going wrong — that even when scientists use AI, weighing the costs of synthesizing the best candidates is no easy task. The myriad challenges involved in identifying the best and most cost-efficient molecules to test are one reason new medicines take so long to develop, as well as a key driver of high prescription drug prices. To help scientists make cost-aware choices, MIT researchers developed an algorithmic framework to automatically identify optimal molecular candidates, which minimizes synthetic cost while maximizing the likelihood that candidates have desired properties. The algorithm also identifies the materials and experimental steps needed to synthesize these molecules. Their quantitative framework, known as Synthesis Planning and Rewards-based Route Optimization Workflow (SPARROW), considers the costs of synthesizing a batch of molecules at once, since multiple candidates can often be derived from some of the same chemical compounds. Moreover, this unified approach captures key information on molecular design, property prediction, and synthesis planning from online repositories and widely used AI tools. Beyond helping pharmaceutical companies discover new drugs more efficiently, SPARROW could be used in applications like the invention of new agrichemicals or the discovery of specialized materials for organic electronics. “The selection of compounds is very much an art at the moment — and at times it is a very successful art. But because we have all these other models and predictive tools that give us information on how molecules might perform and how they might be synthesized, we can and should be using that information...
Read More
Covid-19

Boss of US firm given £4bn in UK Covid contracts accused of squandering millions on jets and properties

Coronavirus | The Guardian Rishi Sunak’s team helped fast-track deal with firm founded by Charles Huang, who says contracts generated $2bn profit. In California, state of sunshine and palm trees, a small group of men are locked in a big legal fight over the money made by a US company selling Covid tests to the British government. The founder of Innova Medical Group says his business collected $2bn (£1.6bn) in profits, one of the largest fortunes banked by any medical supplier during the scramble for lifesaving equipment in the early months of the pandemic. In a storm of claims and counter-claims, Innova’s boss, Charles Huang, is accused by former associates of “squandering” or moving $1bn of those profits, spending lavishly on luxury aircraft, an $18m house in Los Angeles and “homes for his mistresses”. Continue reading... Go to Source 17/06/2024 - 13:09 /David Conn and Russell Scott Twitter: @hoffeldtcom
Read More
Management

Longtime talent manager: The secret to having everything is …

Human Resources News - Human Resources News Headlines | Bizjournals.com If you don’t experience some discomfort, you’re probably not going to drive change, says Sharon Randaccio of Performance Management Partners Inc. Go to Source 17/06/2024 - 12:15 /Lian Bunny Twitter: @hoffeldtcom
Read More
Business News

Weekly Money FM Podcasts: Navigating Reit challenges for Mapletree, Changi Business Park

The Straits Times Business News Check out Money FM's best weekly podcasts. Go to Source 17/06/2024 - 00:04 / Twitter: @hoffeldtcom
Read More
Covid-19

UK attractions try to win back visitors as post-Covid ‘revenge spending’ ends

Coronavirus | The Guardian Alton Towers and Legoland owner alters tactics after period of VAT cuts and people spending cash saved during lockdowns. The period of post-Covid “revenge spending” has ended, leaving businesses having to look at different ways to attract customers, the chief operating officer of Merlin Entertainments has said. The term revenge spending was coined to describe how people looked to splash the cash they had saved up during the Covid pandemic on products or experiences that would help make up for time lost to lockdowns. Continue reading... Go to Source 16/06/2024 - 15:00 /Jack Simpson Twitter: @hoffeldtcom
Read More
Covid-19

Anthony Fauci says he turned down pharma jobs while he was Covid chief

Coronavirus | The Guardian Former infectious disease head says big pharma tried to poach him while he was combating coronavirus. Before he retired from his lengthy run as the US government’s top infectious disease doctor, major pharmaceutical companies tried to lure Anthony Fauci away from his post by offering him seven-figure jobs – but he turned them down because he “cared about … the health of the country” too much, he says in a new interview. Fauci’s comments on his loyalty to the National Institute of Allergy and Infectious Diseases (NIAID) – which he directed for 38 years before retiring in December 2022 – come only a couple of weeks after he testified to Congress about receiving “credible death threats” from far-right extremists over his efforts to slow the spread of Covid-19 at the beginning of the pandemic. Continue reading... Go to Source 15/06/2024 - 18:08 /Ramon Antonio Vargas Twitter: @hoffeldtcom
Read More
Covid-19

COVID-19, Ebola, bird flu: What to know about zoonotic diseases

COVID-19 and H5N1 bird flu are both zoonotic, meaning they jumped from animals to humans. How did that happen and how can they infect humans? Go to Source 15/06/2024 - 15:14 /Nathaniel Dove Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Technique improves the reasoning capabilities of large language models

MIT News - Artificial intelligence Large language models like those that power ChatGPT have shown impressive performance on tasks like drafting legal briefs, analyzing the sentiment of customer reviews, or translating documents into different languages. These machine-learning models typically use only natural language to process information and answer queries, which can make it difficult for them to perform tasks that require numerical or symbolic reasoning. For instance, a large language model might be able to memorize and recite a list of recent U.S. presidents and their birthdays, but that same model could fail if asked the question “Which U.S. presidents elected after 1950 were born on a Wednesday?” (The answer is Jimmy Carter.) Researchers from MIT and elsewhere have proposed a new technique that enables large language models to solve natural language, math and data analysis, and symbolic reasoning tasks by generating programs. Their approach, called natural language embedded programs (NLEPs), involves prompting a language model to create and execute a Python program to solve a user’s query, and then output the solution as natural language. They found that NLEPs enabled large language models to achieve higher accuracy on a wide range of reasoning tasks. The approach is also generalizable, which means one NLEP prompt can be reused for multiple tasks. NLEPs also improve transparency, since a user could check the program to see exactly how the model reasoned about the query and fix the program if the model gave a wrong answer. “We want AI to perform complex reasoning in a way that is transparent and trustworthy. There is still a long way to go, but we have shown that combining the capabilities of programming and natural language in large language models is a very good potential first step toward a future where people can fully understand and trust what is going on inside their AI...
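The excerpt describes the NLEP recipe at a high level; the toy sketch below shows the shape of the idea: ask a model for a complete Python program, run it, and return its printed output as the answer. Here `ask_llm` is a hypothetical stand-in for any chat-completion API, and the prompt wording is an assumption, not the paper's exact template.

```python
# Toy NLEP-style loop: the model writes a runnable Python program, we execute
# it, and its printed output becomes the answer. `ask_llm` is a hypothetical
# stand-in for any chat-completion API; run generated code only in a sandbox.
import subprocess
import sys
import tempfile

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("call your preferred LLM API here")

def answer_with_nlep(question: str) -> str:
    prompt = (
        "Write a complete Python program that answers the question below. "
        "Import what you need, define the relevant data as structured "
        "variables, compute the answer, and print it as a sentence.\n\n"
        f"Question: {question}"
    )
    program = ask_llm(prompt)
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
    result = subprocess.run([sys.executable, f.name],
                            capture_output=True, text=True, timeout=30)
    return result.stdout.strip()
```

The transparency benefit mentioned above falls out naturally: the generated program is an artifact a user can read, check, and correct.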
Read More
Artificial Intelligence

A creation story told through immersive technology

MIT News - Artificial intelligence In the beginning, as one version of the Haudenosaunee creation story has it, there was only water and sky. According to oral tradition, when the Sky Woman became pregnant, she dropped through a hole in the clouds. While many animals guided her descent as she fell, she eventually found a place on the turtle’s back. They worked together, with the aid of other water creatures, to lift the land from the depths of these primordial waters to create what we now know as our earth. The new immersive experience, “Ne:Kahwistará:ken Kanónhsa’kówa í:se Onkwehonwe,” is a vivid retelling of this creation story by multimedia artist Jackson 2bears, also known as Tékeniyáhsen Ohkwá:ri (Kanien’kehà:ka), the 2022–24 Ida Ely Rubin Artist in Residence at the MIT Center for Art, Science and Technology. “A lot of what drives my work is finding new ways to keep Haudenosaunee teachings and stories alive in our communities, finding new ways to tell them, but also helping with the transmission and transformation of those stories as they are for us, a living part of our cultural practice,” he says. A virtual recreation of the traditional longhouse: 2bears was first inspired to create a virtual reality version of a longhouse, a traditional Haudenosaunee structure, in collaboration with Thru the RedDoor, an Indigenous-owned media company in Six Nations at the Grand River that 2bears calls home. The longhouse is not only a “functional dwelling,” says 2bears, but an important spiritual and cultural center where creation myths are shared. “While we were developing the project, we were told by one of our knowledge keepers in the community that longhouses aren’t structures, they’re not the materials they’re made out of,” 2bears recalls, “They’re about the people, the Haudenosaunee people. And it’s about our creative cultural practices in that space that make it a sacred place.” The virtual...
Read More
Business News

Starmer banks on economic growth to ‘rebuild Britain’

BBC News Sir Keir Starmer says wealth creation is the top priority of his party's blueprint for government, as he unveils the Labour manifesto. Go to Source 14/06/2024 - 00:59 / Twitter: @hoffeldtcom
Read More
Covid-19

Immunisation rates fall among Australia’s vulnerable as experts blame pandemic misinformation and practical barriers

Coronavirus | The Guardian Below-target levels come after record highs in 2020, with some areas in NSW, Queensland and WA now showing consistently lower vaccination rates. Immunisation rates are lagging in Australia’s most vulnerable populations – the very young and old – with experts blaming practical barriers as well as the misinformation and vaccine hesitancy that took off during the Covid-19 pandemic. In 2020 Australia achieved a record high rate of 95.09% of five-year-olds fully immunised against infectious diseases, even surpassing the government’s target of 95%, which provides “herd immunity”. Continue reading... Go to Source 13/06/2024 - 18:59 /Natasha May Twitter: @hoffeldtcom
Read More
Business News

Shell is front runner for LNG assets of Temasek-owned Pavilion Energy

The Straits Times Business News Temasek was said to be seeking more than US$2 billion (S$2.69 billion) for the business. Go to Source 13/06/2024 - 03:18 / Twitter: @hoffeldtcom
Read More
Management

Former Express Scripts exec can’t take CVS job, appeals court rules

Human Resources News - Human Resources News Headlines | Bizjournals.com A former president of Express Scripts, the St. Louis-based pharmacy benefits management arm of a Cigna subsidiary, is still barred from taking a job with CVS Health that she was named to over a year ago, according to an appeals court ruling. Go to Source 13/06/2024 - 00:03 /Diana Barr Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Symposium highlights scale of mental health crisis and novel methods of diagnosis and treatment

MIT News - Artificial intelligence Digital technologies, such as smartphones and machine learning, have revolutionized education. At the McGovern Institute for Brain Research’s 2024 Spring Symposium, “Transformational Strategies in Mental Health,” experts from across the sciences — including psychiatry, psychology, neuroscience, computer science, and others — agreed that these technologies could also play a significant role in advancing the diagnosis and treatment of mental health disorders and neurological conditions. Co-hosted by the McGovern Institute, MIT Open Learning, McLean Hospital, the Poitras Center for Psychiatric Disorders Research at MIT, and the Wellcome Trust, the symposium raised the alarm about the rise in mental health challenges and showcased the potential for novel diagnostic and treatment methods. John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology at MIT, kicked off the symposium with a call for an effort on par with the Manhattan Project, which in the 1940s saw leading scientists collaborate to do what seemed impossible. While the challenge of mental health is quite different, Gabrieli stressed, the complexity and urgency of the issue are similar. In his later talk, “How can science serve psychiatry to enhance mental health?,” he noted a 35 percent rise in teen suicide deaths between 1999 and 2020 and, between 2007 and 2015, a 100 percent increase in emergency room visits for youths ages 5 to 18 who experienced a suicide attempt or suicidal ideation. “We have no moral ambiguity, but all of us speaking today are having this meeting in part because we feel this urgency,” said Gabrieli, who is also a professor of brain and cognitive sciences, the director of the Integrated Learning Initiative (MITili) at MIT Open Learning, and a member of the McGovern Institute. “We have to do something together as a community of scientists and partners of all kinds to make a difference.” An...
Read More
Artificial Intelligence

Build a custom UI for Amazon Q Business

AWS Machine Learning Blog Amazon Q is a new generative artificial intelligence (AI)-powered assistant designed for work that can be tailored to your business. Amazon Q can help you get fast, relevant answers to pressing questions, solve problems, generate content, and take actions using the data and expertise found in your company’s information repositories and enterprise systems. When you chat with Amazon Q, it provides immediate, relevant information and advice to help streamline tasks, speed up decision-making, and spark creativity and innovation at work. For more information, see Amazon Q Business, now generally available, helps boost workforce productivity with generative AI. This post demonstrates how to build a custom UI for Amazon Q Business. The customized UI allows you to implement special features like handling feedback, using company brand colors and templates, and using a custom login. It also enables conversing with Amazon Q through an interface personalized to your use case. Solution overview: In this solution, we deploy a custom web experience for Amazon Q to deliver quick, accurate, and relevant answers to your business questions on top of an enterprise knowledge base. The following diagram illustrates the solution architecture. The workflow includes the following steps: The user accesses the chatbot application, which is hosted behind an Application Load Balancer. After the user logs in, they’re redirected to the Amazon Cognito login page for authentication. This solution uses an Amazon Cognito user pool as an OAuth-compatible identity provider (IdP), which is required in order to exchange a token with AWS IAM Identity Center and later on interact with the Amazon Q Business APIs. For more information about trusted token issuers and how token exchanges are performed, see Using applications with a trusted token issuer. If you already have an OAuth-compatible IdP, you can use it instead of setting an...
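Behind whatever branding the custom UI adds, the backend ultimately calls the Amazon Q Business ChatSync API. The sketch below shows that call via boto3; the application ID is a placeholder, and the Cognito/IAM Identity Center token exchange described above is omitted for brevity.

```python
# Sketch of the core backend call a custom Amazon Q Business UI makes once
# authentication is wired up: the ChatSync API via boto3.
import boto3

qbusiness = boto3.client("qbusiness", region_name="us-east-1")

response = qbusiness.chat_sync(
    applicationId="11111111-2222-3333-4444-555555555555",  # placeholder app ID
    userMessage="What is our parental leave policy?",
)
print(response["systemMessage"])  # the assistant's answer
for attribution in response.get("sourceAttributions", []):
    print("source:", attribution.get("title"))
```

A custom UI then renders `systemMessage` with the company's templates and attaches feedback controls around it.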
Read More
Artificial Intelligence

Scalable intelligent document processing using Amazon Bedrock

AWS Machine Learning Blog In today’s data-driven business landscape, the ability to efficiently extract and process information from a wide range of documents is crucial for informed decision-making and maintaining a competitive edge. However, traditional document processing workflows often involve complex and time-consuming manual tasks, hindering productivity and scalability. In this post, we discuss an approach that uses the Anthropic Claude 3 Haiku model on Amazon Bedrock to enhance document processing capabilities. Amazon Bedrock is a fully managed service that makes foundation models (FMs) from leading artificial intelligence (AI) startups and Amazon available through an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case. With the Amazon Bedrock serverless experience, you can get started quickly, privately customize FMs with your own data, and integrate and deploy them into your applications using the AWS tools without having to manage any infrastructure. At the heart of this solution lies the Anthropic Claude 3 Haiku model, the fastest and most affordable model in its intelligence class. With state-of-the-art vision capabilities and strong performance on industry benchmarks, Anthropic Claude 3 Haiku is a versatile solution for a wide range of enterprise applications. By using the advanced natural language processing (NLP) capabilities of Anthropic Claude 3 Haiku, our intelligent document processing (IDP) solution can extract valuable data directly from images, eliminating the need for complex postprocessing. Scalable and efficient data extraction: Our solution overcomes the traditional limitations of document processing by addressing the following key challenges: Simple prompt-based extraction – This solution allows you to define the specific data you need to extract from the documents through intuitive prompts. The Anthropic Claude 3 Haiku model then processes the documents and returns the desired information, streamlining the entire workflow. Handling larger file...
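The prompt-based extraction described above can be sketched with the Bedrock Converse API, which accepts an image and a text prompt in one message. The field names in the prompt and the file name are illustrative assumptions, not the post's exact code.

```python
# Sketch of prompt-based extraction from a document image with Anthropic
# Claude 3 Haiku on Amazon Bedrock. Prompt fields and file name are examples;
# a production version would add error handling and batching.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

with open("invoice.png", "rb") as f:
    image_bytes = f.read()

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{
        "role": "user",
        "content": [
            {"image": {"format": "png", "source": {"bytes": image_bytes}}},
            {"text": "Extract the invoice number, total amount, and due date "
                     "from this document as JSON."},
        ],
    }],
)
print(response["output"]["message"]["content"][0]["text"])
```

Because the model reads the image directly, there is no separate OCR step to maintain; changing what gets extracted is a prompt edit, not a pipeline change.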
Read More
Artificial Intelligence

Use weather data to improve forecasts with Amazon SageMaker Canvas

AWS Machine Learning Blog Time series forecasting is a specific machine learning (ML) discipline that enables organizations to make informed planning decisions. The main idea is to supply historic data to an ML algorithm that can identify patterns from the past and then use those patterns to estimate likely values about unseen periods in the future. Amazon has a long heritage of using time series forecasting, dating back to the early days of having to meet mail-order book demand. Fast forward more than a quarter century and advanced forecasting using modern ML algorithms is offered to customers through Amazon SageMaker Canvas, a no-code workspace for all phases of ML. SageMaker Canvas enables you to prepare data using natural language, build and train highly accurate models, generate predictions, and deploy models to production—all without writing a single line of code. In this post, we describe how to use weather data to build and implement a forecasting cycle that you can use to elevate your business’ planning capabilities. Business use cases for time series forecasting: Today, companies of every size and industry who invest in forecasting capabilities can improve outcomes—whether measured financially or in customer satisfaction—compared to using intuition-based estimation. Regardless of industry, every customer desires highly accurate models that can maximize their outcome. Here, accuracy means that future estimates produced by the ML model end up being as close as possible to the actual future. If the ML model estimates either too high or too low, it can reduce the effectiveness the business was hoping to achieve. To maximize accuracy, ML models benefit from rich, quality data that reflects demand patterns, including cycles of highs and lows, and periods of stability. The shape of these historic patterns may be driven by several factors. Examples include...
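The modeling itself is no-code inside Canvas, but the enrichment idea can be sketched outside the tool: join the historic demand series with weather covariates before importing the merged file. File and column names below are illustrative assumptions.

```python
# Sketch of enriching a demand series with weather covariates before import
# into SageMaker Canvas. Files and columns are illustrative placeholders.
import pandas as pd

demand = pd.read_csv("daily_demand.csv", parse_dates=["date"])    # item_id, date, units
weather = pd.read_csv("daily_weather.csv", parse_dates=["date"])  # date, temp_c, precip_mm

merged = demand.merge(weather, on="date", how="left")
merged.to_csv("demand_with_weather.csv", index=False)  # upload this file to Canvas
```

The weather columns become additional signals the forecasting model can use to explain the highs and lows in the demand history.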
Read More
Artificial Intelligence

Researchers use large language models to help robots navigate

MIT News - Artificial intelligence Someday, you may want your home robot to carry a load of dirty clothes downstairs and deposit them in the washing machine in the far-left corner of the basement. The robot will need to combine your instructions with its visual observations to determine the steps it should take to complete this task. For an AI agent, this is easier said than done. Current approaches often utilize multiple hand-crafted machine-learning models to tackle different parts of the task, which require a great deal of human effort and expertise to build. These methods, which use visual representations to directly make navigation decisions, demand massive amounts of visual data for training, which are often hard to come by. To overcome these challenges, researchers from MIT and the MIT-IBM Watson AI Lab devised a navigation method that converts visual representations into pieces of language, which are then fed into one large language model that achieves all parts of the multistep navigation task. Rather than encoding visual features from images of a robot’s surroundings as visual representations, which is computationally intensive, their method creates text captions that describe the robot’s point-of-view. A large language model uses the captions to predict the actions a robot should take to fulfill a user’s language-based instructions. Because their method utilizes purely language-based representations, they can use a large language model to efficiently generate a huge amount of synthetic training data. While this approach does not outperform techniques that use visual features, it performs well in situations that lack enough visual data for training. The researchers found that combining their language-based inputs with visual signals leads to better navigation performance. “By purely using language as the perceptual representation, ours is a more straightforward approach. Since all the inputs can be encoded as language, we can generate a human-understandable trajectory,” says Bowen...
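As a rough illustration of the captions-as-perception idea described above (not the researchers' code), the skeleton below turns the robot's view into text, combines it with the instruction, and asks a language model to pick the next action. Both `caption_image` and `ask_llm` are hypothetical stand-ins for a captioning model and an LLM API.

```python
# Toy sketch of language-based navigation: caption the view, then let an LLM
# choose the next action. The two stub functions are hypothetical stand-ins.
ACTIONS = ["move forward", "turn left", "turn right", "stop"]

def caption_image(image) -> str:
    raise NotImplementedError("plug in an image-captioning model")

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a large language model")

def next_action(instruction: str, image) -> str:
    view = caption_image(image)
    prompt = (
        f"Instruction: {instruction}\n"
        f"Current view: {view}\n"
        f"Choose the next action from {ACTIONS} and reply with the action only."
    )
    return ask_llm(prompt)
```

Because every input is text, the same loop can be run on synthetic captions, which is how the method generates cheap training data.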
Read More
Artificial Intelligence

Making climate models relevant for local decision-makers

MIT News - Artificial intelligence Climate models are a key technology in predicting the impacts of climate change. By running simulations of the Earth’s climate, scientists and policymakers can estimate conditions like sea level rise, flooding, and rising temperatures, and make decisions about how to appropriately respond. But current climate models struggle to provide this information quickly or affordably enough to be useful on smaller scales, such as the size of a city. Now, authors of a new open-access paper published in the Journal of Advances in Modeling Earth Systems have found a method to leverage machine learning to utilize the benefits of current climate models, while reducing the computational costs needed to run them. “It turns the traditional wisdom on its head,” says Sai Ravela, a principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) who wrote the paper with EAPS postdoc Anamitra Saha. Traditional wisdom: In climate modeling, downscaling is the process of using a global climate model with coarse resolution to generate finer details over smaller regions. Imagine a digital picture: A global model is a large picture of the world with a low number of pixels. To downscale, you zoom in on just the section of the photo you want to look at — for example, Boston. But because the original picture was low resolution, the new version is blurry; it doesn’t give enough detail to be particularly useful. “If you go from coarse resolution to fine resolution, you have to add information somehow,” explains Saha. Downscaling attempts to add that information back in by filling in the missing pixels. “That addition of information can happen two ways: Either it can come from theory, or it can come from data.” Conventional downscaling often involves using models built on physics (such as the process of air rising, cooling, and condensing, or the landscape of the area), and supplementing it with statistical data taken from historical observations. But this method is computationally taxing: It takes a lot of time and computing...
Read More
Artificial Intelligence

New algorithm discovers language just by watching videos

MIT News - Artificial intelligence Mark Hamilton, an MIT PhD student in electrical engineering and computer science and affiliate of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), wants to use machines to understand how animals communicate. To do that, he set out first to create a system that can learn human language “from scratch.” “Funny enough, the key moment of inspiration came from the movie ‘March of the Penguins.’ There’s a scene where a penguin falls while crossing the ice, and lets out a little belabored groan while getting up. When you watch it, it’s almost obvious that this groan is standing in for a four-letter word. This was the moment where we thought, maybe we need to use audio and video to learn language,” says Hamilton. “Is there a way we could let an algorithm watch TV all day and from this figure out what we’re talking about?” “Our model, ‘DenseAV,’ aims to learn language by predicting what it’s seeing from what it’s hearing, and vice-versa. For example, if you hear the sound of someone saying ‘bake the cake at 350’ chances are you might be seeing a cake or an oven. To succeed at this audio-video matching game across millions of videos, the model has to learn what people are talking about,” says Hamilton. Once they trained DenseAV on this matching game, Hamilton and his colleagues looked at which pixels the model looked for when it heard a sound. For example, when someone says “dog,” the algorithm immediately starts looking for dogs in the video stream. By seeing which pixels are selected by the algorithm, one can discover what the algorithm thinks a word means. Interestingly, a similar search process happens when DenseAV listens to a dog barking: It searches for a dog in the video stream. “This piqued our interest....
Read More
Artificial Intelligence

Reimagining software development with the Amazon Q Developer Agent

AWS Machine Learning Blog Amazon Q Developer is an AI-powered assistant for software development that reimagines the experience across the entire software development lifecycle, making it faster to build, secure, manage, and optimize applications on or off of AWS. The Amazon Q Developer Agent includes an agent for feature development that automatically implements multi-file features, bug fixes, and unit tests in your integrated development environment (IDE) workspace using natural language input. After you enter your query, the software development agent analyzes your code base and formulates a plan to fulfill the request. You can accept the plan or ask the agent to iterate on it. After the plan is validated, the agent generates the code changes needed to implement the feature you requested. You can then review and accept the code changes or request a revision. Amazon Q Developer uses generative artificial intelligence (AI) to deliver state-of-the-art accuracy for all developers, taking first place on the leaderboard for SWE-bench, a dataset that tests a system’s ability to automatically resolve GitHub issues. This post describes how to get started with the Amazon Q Developer Agent, gives an overview of the underlying mechanisms that make it a state-of-the-art feature development agent, and discusses its performance on public benchmarks. Getting started: To get started, you need to have an AWS Builder ID or be part of an organization with an AWS IAM Identity Center instance set up that allows you to use Amazon Q. To use the Amazon Q Developer Agent for feature development in Visual Studio Code, start by installing the Amazon Q extension. The extension is also available for JetBrains, Visual Studio (in preview), and in the Command Line...
Read More
Talent Management

Psychology in construction: A psychologist shares her insights

Everyone's Blog Posts - RecruitingBlogs Celebrity psychologist and international speaker Charissa Bloomberg has a history of applying her skills in the engineering, mining, and construction industries. Here, she shares her approach, from initial needs analysis to the human element that should never be underestimated. Bloomberg, known for her guest appearances across radio and TV stations, has a passion for integrity and mental health awareness, which she has applied for over a decade in the engineering, mining, and construction industries. Fondly known as the “site shrink”, Bloomberg believes that companies in this niche often forget that it’s people who build our projects and infrastructure. As a site psychologist, she works in close collaboration with the managing director and his team. At times, she is also called upon to advise at EXCO level. “The best time to be roped in is at the start of a project. Later on, it can be tricky to iron out problems when an incompatible team is facing issues due to different management styles. Key aspects to remember include motivation, morale, personality traits, poor leadership, low EQ, integrity, corruption, communication issues, and the importance of adhering to health and safety protocols,” she enthuses. Bloomberg is not afraid to get dirty on site or to counter a foreman who questions why they have been booked for a two-hour strengthening session when they are on a tight schedule with a billion-dollar project. “I just get on with the training, only to find that the engineers and other industry specialists enjoy the sessions and go back to work refreshed; they also claim they are able to take the knowledge with them for application both in their work environment and home lives.” On-site training should be fun, she says, explaining that incorporating role playing, sharing opinions and stories, brainstorming, and even...
Read More
Artificial Intelligence

Get started quickly with AWS Trainium and AWS Inferentia using AWS Neuron DLAMI and AWS Neuron DLC

AWS Machine Learning Blog Starting with the AWS Neuron 2.18 release, you can now launch Neuron DLAMIs (AWS Deep Learning AMIs) and Neuron DLCs (AWS Deep Learning Containers) with the latest released Neuron packages on the same day as the Neuron SDK release. When a Neuron SDK is released, you’ll now be notified of the support for Neuron DLAMIs and Neuron DLCs in the Neuron SDK release notes, with a link to the AWS documentation containing the DLAMI and DLC release notes. In addition, this release introduces a number of features that help improve user experience for Neuron DLAMIs and DLCs. In this post, we walk through some of the support highlights with Neuron 2.18. Neuron DLC and DLAMI overview and announcements: The DLAMI is a pre-configured AMI that comes with popular deep learning frameworks like TensorFlow, PyTorch, Apache MXNet, and others pre-installed. This allows machine learning (ML) practitioners to rapidly launch an Amazon Elastic Compute Cloud (Amazon EC2) instance with a ready-to-use deep learning environment, without having to spend time manually installing and configuring the required packages. The DLAMI supports various instance types, including AWS Trainium and AWS Inferentia powered instances, for accelerated training and inference. AWS DLCs provide a set of Docker images that are pre-installed with deep learning frameworks. The containers are optimized for performance and available in Amazon Elastic Container Registry (Amazon ECR). DLCs make it straightforward to deploy custom ML environments in a containerized manner, while taking advantage of the portability and reproducibility benefits of containers. Multi-Framework DLAMIs: The Neuron Multi-Framework DLAMI for Ubuntu 22 provides separate virtual environments for multiple ML frameworks: PyTorch 2.1, PyTorch 1.13, Transformers NeuronX, and TensorFlow 2.10. This DLAMI offers you the convenience of having all these popular frameworks readily available in a single AMI, simplifying their setup and reducing the need...
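As a sketch of locating and launching the latest Neuron multi-framework DLAMI programmatically, the snippet below filters Amazon-owned AMIs by name. The name pattern and instance type are assumptions to check against the current DLAMI release notes; the Neuron documentation also publishes SSM parameters for this lookup.

```python
# Sketch: find the newest Neuron DLAMI by name filter and launch an instance.
# The name pattern and instance type are assumptions, not an official recipe.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

images = ec2.describe_images(
    Owners=["amazon"],
    Filters=[{"Name": "name", "Values": ["Deep Learning AMI Neuron*Ubuntu 22.04*"]}],
)["Images"]
latest = max(images, key=lambda img: img["CreationDate"])

ec2.run_instances(
    ImageId=latest["ImageId"],
    InstanceType="trn1.2xlarge",  # a Trainium instance; Inferentia (inf2) also works
    MinCount=1,
    MaxCount=1,
)
```

Once the instance is up, each framework ships in its own pre-built virtual environment, so no package installation is needed before training or serving.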
Read More
Artificial Intelligence

Sprinklr improves performance by 20% and reduces cost by 25% for machine learning inference on AWS Graviton3

AWS Machine Learning Blog This is a guest post co-written with Ratnesh Jamidar and Vinayak Trivedi from Sprinklr. Sprinklr’s mission is to unify silos, technology, and teams across large, complex companies. To achieve this, we provide four product suites, Sprinklr Service, Sprinklr Insights, Sprinklr Marketing, and Sprinklr Social, as well as several self-serve offerings. Each of these products is infused with artificial intelligence (AI) capabilities to deliver exceptional customer experience. Sprinklr’s specialized AI models streamline data processing, gather valuable insights, and enable workflows and analytics at scale to drive better decision-making and productivity. In this post, we describe the scale of our AI offerings, the challenges with diverse AI workloads, and how we optimized mixed AI workload inference performance with AWS Graviton3 based c7g instances and achieved 20% throughput improvement, 30% latency reduction, and reduced our cost by 25–30%. Sprinklr’s AI scale and challenges with diverse AI workloads: Our purpose-built AI processes unstructured customer experience data from millions of sources, providing actionable insights and improving productivity for customer-facing teams to deliver exceptional experiences at scale. To understand our scaling and cost challenges, let’s look at some representative numbers. Sprinklr’s platform uses thousands of servers that fine-tune and serve over 750 pre-built AI models across over 60 verticals, and run more than 10 billion predictions per day. To deliver a tailored user experience across these verticals, we deploy patented AI models fine-tuned for specific business applications and use nine layers of machine learning (ML) to extract meaning from data across formats: automatic speech recognition, natural language processing, computer vision, network graph analysis, anomaly detection, trends, predictive analysis, natural language generation, and similarity engine. The diverse and rich database of models brings unique challenges for choosing the most efficient deployment infrastructure that gives the best latency and performance. For example, for mixed...
Read More
Artificial Intelligence

New computer vision method helps speed up screening of electronic materials

MIT News - Artificial intelligence Boosting the performance of solar cells, transistors, LEDs, and batteries will require better electronic materials, made from novel compositions that have yet to be discovered.To speed up the search for advanced functional materials, scientists are using AI tools to identify promising materials from hundreds of millions of chemical formulations. In tandem, engineers are building machines that can print hundreds of material samples at a time based on chemical compositions tagged by AI search algorithms.But to date, there’s been no similarly speedy way to confirm that these printed materials actually perform as expected. This last step of material characterization has been a major bottleneck in the pipeline of advanced materials screening.Now, a new computer vision technique developed by MIT engineers significantly speeds up the characterization of newly synthesized electronic materials. The technique automatically analyzes images of printed semiconducting samples and quickly estimates two key electronic properties for each sample: band gap (a measure of electron activation energy) and stability (a measure of longevity).The new technique accurately characterizes electronic materials 85 times faster compared to the standard benchmark approach.The researchers intend to use the technique to speed up the search for promising solar cell materials. They also plan to incorporate the technique into a fully automated materials screening system.“Ultimately, we envision fitting this technique into an autonomous lab of the future,” says MIT graduate student Eunice Aissi. “The whole system would allow us to give a computer a materials problem, have it predict potential compounds, and then run 24-7 making and characterizing those predicted materials until it arrives at the desired solution.”“The application space for these techniques ranges from improving solar energy to transparent electronics and transistors,” adds MIT graduate student Alexander (Aleks) Siemenn. “It really spans the full gamut of where semiconductor materials can benefit society.”Aissi...
Read More
Artificial Intelligence

Code generation using Code Llama 70B and Mixtral 8x7B on Amazon SageMaker

AWS Machine Learning Blog In the ever-evolving landscape of machine learning and artificial intelligence (AI), large language models (LLMs) have emerged as powerful tools for a wide range of natural language processing (NLP) tasks, including code generation. Among these cutting-edge models, Code Llama 70B stands out as a true heavyweight, boasting an impressive 70 billion parameters. Developed by Meta and now available on Amazon SageMaker, this state-of-the-art LLM promises to revolutionize the way developers and data scientists approach coding tasks. What are Code Llama 70B and Mixtral 8x7B? Code Llama 70B is a variant of the Code Llama foundation model (FM), a fine-tuned version of Meta’s renowned Llama 2 model. This massive language model is specifically designed for code generation and understanding, capable of generating code from natural language prompts or existing code snippets. With its 70 billion parameters, Code Llama 70B offers unparalleled performance and versatility, making it a game-changer in the world of AI-assisted coding. Mixtral 8x7B is a state-of-the-art sparse mixture of experts (MoE) foundation model released by Mistral AI. It supports multiple use cases such as text summarization, classification, text generation, and code generation. It is an 8x model, which means it contains eight distinct groups of parameters. The model has about 45 billion total parameters and supports a context length of 32,000 tokens. MoE is a type of neural network architecture that consists of multiple “experts,” where each expert is a neural network. In the context of transformer models, MoE replaces some feed-forward layers with sparse MoE layers. These layers have a certain number of experts, and a router network selects which experts process each token at each layer. MoE models enable more compute-efficient and faster inference compared to dense models. Key features and capabilities of Code Llama 70B and Mixtral 8x7B include: Code generation:...
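The excerpt cuts off before the deployment steps; the sketch below shows the typical SageMaker JumpStart flow for a model like Code Llama 70B. The model ID, instance type, and payload schema are assumptions to verify against the JumpStart catalog rather than the post's exact code.

```python
# Sketch of deploying Code Llama 70B from SageMaker JumpStart and invoking it.
# Model ID, instance type, and payload schema are assumptions to verify.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="meta-textgeneration-llama-codellama-70b")  # assumed ID
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.48xlarge",  # a 70B model needs a large multi-GPU instance
    accept_eula=True,                # Meta models require accepting the EULA
)

payload = {
    "inputs": "Write a Python function that checks whether a string is a palindrome.",
    "parameters": {"max_new_tokens": 256, "temperature": 0.2},
}
print(predictor.predict(payload))

predictor.delete_endpoint()  # avoid idle-endpoint charges when finished
```

The same deploy-and-predict pattern applies to Mixtral 8x7B under its own JumpStart model ID; only the payload tuning typically differs.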
Read More
Covid-19

COVID-flu shot offers strong immune response in late-stage trial, Moderna says

Moderna says its combination vaccine to protect against both COVID-19 and influenza generated a stronger immune response in adults 50 and over when compared to separate shots. Go to Source 10/06/2024 - 15:33 / Twitter: @hoffeldtcom
Read More
Covid-19

Moderna combi flu and Covid jab gives better protection, study finds

Coronavirus | The Guardian Clinical trials show Spikevax may bring about higher immune responses than separate inoculations. A combined flu and coronavirus vaccine brings about a higher immune response to both diseases than when the vaccines are administered separately, a clinical trial has shown. Moderna, the biotech firm behind the Spikevax vaccine used in NHS booster programmes, is trialling a two-in-one jab that can also protect from the flu. Initial results have shown it may be better at protecting against both than the vaccines now in use. Continue reading... Go to Source 10/06/2024 - 15:21 /Tobi Thomas Health and Inequalities Correspondent Twitter: @hoffeldtcom
Read More
Business News

Gold is getting harder to find as miners struggle to excavate more, World Gold Council says

US Top News and Analysis The gold mining industry is struggling to sustain production growth as deposits of the yellow metal become harder to find, said the World Gold Council. Go to Source 10/06/2024 - 03:31 / Twitter: @hoffeldtcom
Read More
Covid-19

Retiring head of Barrie food bank reflects on challenges of pandemic, jump in demand

After seeing the agency through a global pandemic and an unprecedented jump in demand, the head of Barrie’s Food Bank is retiring. Go to Source 08/06/2024 - 09:13 /Sawyer Bogdan Twitter: @hoffeldtcom
Read More
Business News

Here’s where the jobs are for May 2024 — in one chart

US Top News and Analysis Job growth in May came out surprisingly strong, pushing back on lingering fears of a broader economic slowdown. Go to Source 07/06/2024 - 15:38 / Twitter: @hoffeldtcom
Read More
Psychology

Webinar: NIH’s Definition of a Clinical Trial

NIMH News Feed Experts from the National Institute of Mental Health (NIMH) will provide an overview of NIH clinical trial classifications, with a particular focus on global mental health research. Go to Source 07/06/2024 - 06:13 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Business News

How to do business better by reading ‘non-business’ books

The Straits Times Business News The trick is to know that the best business books tend not to be written for reasons of business. Go to Source 07/06/2024 - 00:24 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

A data-driven approach to making better choices

MIT News - Artificial intelligence Imagine a world in which some important decision — a judge’s sentencing recommendation, a child’s treatment protocol, which person or business should receive a loan — was made more reliable because a well-designed algorithm helped a key decision-maker arrive at a better choice. A new MIT economics course is investigating these interesting possibilities. Class 14.163 (Algorithms and Behavioral Science) is a new cross-disciplinary course focused on behavioral economics, which studies the cognitive capacities and limitations of human beings. The course was co-taught this past spring by assistant professor of economics Ashesh Rambachan and visiting lecturer Sendhil Mullainathan. Rambachan studies the economic applications of machine learning, focusing on algorithmic tools that drive decision-making in the criminal justice system and consumer lending markets. He also develops methods for determining causation using cross-sectional and dynamic data. Mullainathan will soon join the MIT departments of Electrical Engineering and Computer Science and Economics as a professor. His research uses machine learning to understand complex problems in human behavior, social policy, and medicine. Mullainathan co-founded the Abdul Latif Jameel Poverty Action Lab (J-PAL) in 2003. The new course’s goals are both scientific (to understand people) and policy-driven (to improve society by improving decisions). Rambachan believes that machine-learning algorithms provide new tools for both the scientific and applied goals of behavioral economics. “The course investigates the deployment of computer science, artificial intelligence (AI), economics, and machine learning in service of improved outcomes and reduced instances of bias in decision-making,” Rambachan says. There are opportunities, Rambachan believes, for constantly evolving digital tools like AI, machine learning, and large language models (LLMs) to help reshape everything from discriminatory practices in criminal sentencing to health-care outcomes among underserved populations. Students learn how to use machine learning tools with three main objectives: to understand what they do and how they do it, to formalize behavioral economics insights...
Read More
Covid-19

Less grooming and more chores: How life changes when you work from home

Working from home spiked during the pandemic, and changed the way many people work and live. New data from Statistics Canada sheds light on its impacts. Go to Source 06/06/2024 - 19:03 /Uday Rana Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Build RAG applications using Jina Embeddings v2 on Amazon SageMaker JumpStart

AWS Machine Learning Blog Today, we are excited to announce that the Jina Embeddings v2 model, developed by Jina AI, is available for customers through Amazon SageMaker JumpStart to deploy with one click for running model inference. This state-of-the-art model supports an impressive 8,192-token context length. You can deploy this model with SageMaker JumpStart, a machine learning (ML) hub with foundation models, built-in algorithms, and pre-built ML solutions that you can deploy with just a few clicks. Text embedding refers to the process of transforming text into numerical representations that reside in a high-dimensional vector space. Text embeddings have a broad range of applications in enterprise artificial intelligence (AI), including multimodal search for ecommerce, content personalization, recommender systems, and data analytics. Jina Embeddings v2 is a state-of-the-art collection of text embedding models, trained by Berlin-based Jina AI, that boast high performance on several public benchmarks. In this post, we walk through how to discover and deploy the jina-embeddings-v2 model as part of a Retrieval Augmented Generation (RAG)-based question answering system in SageMaker JumpStart. You can use this tutorial as a starting point for a variety of chatbot-based solutions for customer service, internal support, and question answering systems based on internal and private documents. What is RAG? RAG is the process of optimizing the output of a large language model (LLM) so it references an authoritative knowledge base outside of its training data sources before generating a response. LLMs are trained on vast volumes of data and use billions of parameters to generate original output for tasks like answering questions, translating languages, and completing sentences. RAG extends the already powerful capabilities of LLMs to specific domains or an organization’s internal knowledge base, all without the need to retrain the model. It’s a cost-effective approach to improving LLM output so it...
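As a rough sketch of the JumpStart flow the post walks through, the snippet below deploys an embedding endpoint and embeds two passages for a RAG index. The model ID, instance type, and request/response shapes are placeholders to confirm in the JumpStart catalog.

```python
# Sketch of deploying jina-embeddings-v2 from SageMaker JumpStart and embedding
# passages for a RAG index. Model ID and payload shape are placeholders.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="jina-embeddings-v2-base-en")  # placeholder; look up the exact ID
predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.2xlarge")

passages = [
    "Our refund policy lasts 30 days.",
    "Support is available 24/7 via chat.",
]
response = predictor.predict({"data": [{"text": p} for p in passages]})
print(response)  # one embedding vector per passage, ready to store in a vector index

predictor.delete_endpoint()  # clean up when finished
```

In the RAG system, these vectors go into a vector store; at query time the question is embedded the same way and the nearest passages are passed to the LLM as context.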
Read More
Business News

SpaceX’s Starship rocket completes test flight for the first time, successfully splashes down

US Top News and Analysis The fourth Starship test flight completed new milestones as SpaceX continues to advance development of the mammoth vehicle. Go to Source 06/06/2024 - 15:29 / Twitter: @hoffeldtcom
Read More
Covid-19

Australia hit by ‘big wave’ of Covid at same time as increase in flu

Coronavirus | The Guardian Experts say both are at ‘critical point’ of escalation and that people should ensure they are up to date with vaccinations. Australia is experiencing a “big wave” of Covid-19 infections that is coinciding with a rise in influenza and other winter illnesses, health authorities and experts are warning. Deakin University’s epidemiology chair, Prof Catherine Bennett, said there was a direct alignment in the rise of Covid-19 and flu across the nation, which were “both at that critical point of takeoff where you see a rapid escalation.” Continue reading... Go to Source 06/06/2024 - 12:33 /Natasha May Twitter: @hoffeldtcom
Read More
Business News

UG Healthcare makes acquisitions in Spain, Germany

The Straits Times Business News The moves aim to drive growth in its downstream distribution business in Europe. Go to Source 06/06/2024 - 06:14 / Twitter: @hoffeldtcom
Read More
Business News

Ant’s Singapore digital bank Anext eyes growing demand from foreign firms

The Straits Times Business News More than 30 per cent of the bank’s customers were foreign business owners, spanning 78 nationalities, as at the end of May. Go to Source 06/06/2024 - 03:43 / Twitter: @hoffeldtcom
Read More
Business News

US services sector activity rebounds while private payrolls growth slows

The Straits Times Business News The reports paint a mixed picture of an economy that continues to withstand hefty rate increases. Go to Source 06/06/2024 - 03:13 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Mouth-based touchpad enables people living with paralysis to interact with computers

MIT News - Artificial intelligence When Tomás Vega SM ’19 was 5 years old, he began to stutter. The experience gave him an appreciation for the adversity that can come with a disability. It also showed him the power of technology. “A keyboard and a mouse were outlets,” Vega says. “They allowed me to be fluent in the things I did. I was able to transcend my limitations in a way, so I became obsessed with human augmentation and with the concept of cyborgs. I also gained empathy. I think we all have empathy, but we apply it according to our own experiences.” Vega has been using technology to augment human capabilities ever since. He began programming when he was 12. In high school, he helped people manage disabilities including hand impairments and multiple sclerosis. In college, first at the University of California at Berkeley and then at MIT, Vega built technologies that helped people with disabilities live more independently. Today Vega is the co-founder and CEO of Augmental, a startup deploying technology that lets people with movement impairments seamlessly interact with their personal computational devices. Augmental’s first product is the MouthPad, which allows users to control their computer, smartphone, or tablet through tongue and head movements. The MouthPad’s pressure-sensitive touch pad sits on the roof of the mouth, and, working with a pair of motion sensors, translates tongue and head gestures into cursor scrolling and clicks in real time via Bluetooth. “We have a big chunk of the brain that is devoted to controlling the position of the tongue,” Vega explains. “The tongue comprises eight muscles, and most of the muscle fibers are slow-twitch, which means they don’t fatigue as quickly. So, I thought why don’t we leverage all of that?” People with spinal cord injuries are already using the MouthPad every day to interact with...
Read More
Artificial Intelligence

Detect email phishing attempts using Amazon Comprehend

AWS Machine Learning Blog Phishing is the process of attempting to acquire sensitive information such as usernames, passwords, and credit card details by masquerading as a trustworthy entity using email, telephone, or text messages. There are many types of phishing, depending on the mode of communication and the targeted victims. In an email phishing attempt, an email is sent to a group of people as the mode of communication. There are traditional rule-based approaches to detect email phishing. However, new trends are emerging that are hard to handle with a rule-based approach, so there is a need to use machine learning (ML) techniques to augment rule-based approaches for email phishing detection. In this post, we show how to use Amazon Comprehend Custom to train and host an ML model that classifies whether an input email is a phishing attempt. Amazon Comprehend is a natural-language processing (NLP) service that uses ML to uncover valuable insights and connections in text. You can use Amazon Comprehend to identify the language of the text; extract key phrases, places, people, brands, or events; understand sentiment about products or services; and identify the main topics from a library of documents. You can customize Amazon Comprehend for your specific requirements without the skillset required to build ML-based NLP solutions. Comprehend Custom builds customized NLP models on your behalf, using training data that you provide. Comprehend Custom supports custom classification and custom entity recognition. Solution overview This post explains how you can use Amazon Comprehend to easily train and host an ML-based model to detect phishing attempts. The following diagram shows how the phishing detection works. You can use this solution with your email servers, in which emails are passed through this phishing detector. When an email is flagged as a phishing attempt, the email recipient still gets the...
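For readers who want to try this pattern, here is a minimal, hedged sketch of the inference side: calling an already trained Comprehend custom classifier endpoint from Python with boto3. The endpoint ARN, region, and the PHISHING label name are illustrative assumptions, not values from the post.

```python
# Hedged sketch: classify an email with a trained Amazon Comprehend custom
# classifier endpoint. The ARN and label names below are placeholders.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

def is_phishing(email_text: str, endpoint_arn: str) -> bool:
    """Return True if the classifier's highest-scoring label is PHISHING."""
    response = comprehend.classify_document(
        Text=email_text[:9000],  # stay under Comprehend's synchronous size limit
        EndpointArn=endpoint_arn,
    )
    top = max(response["Classes"], key=lambda c: c["Score"])
    return top["Name"] == "PHISHING"

# Placeholder usage:
# arn = "arn:aws:comprehend:us-east-1:123456789012:document-classifier-endpoint/phishing"
# print(is_phishing("Your account is locked. Click here to verify...", arn))
```

The training side works analogously: create_document_classifier points Comprehend Custom at labeled examples in Amazon S3, and an endpoint is then created from the resulting model for real-time classification.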
Read More
Business News

Nvidia passes Apple in market cap as second-most valuable public U.S. company

US Top News and Analysis Investors are becoming more comfortable that Nvidia's huge growth in sales to a handful of cloud companies can persist. Go to Source 05/06/2024 - 21:33 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

How Skyflow creates technical content in days using Amazon Bedrock

AWS Machine Learning Blog This guest post is co-written with Manny Silva, Head of Documentation at Skyflow, Inc. Startups move quickly, and engineering is often prioritized over documentation. Unfortunately, this prioritization leads to mismatched release cycles, where features ship but documentation lags behind. This leads to increased support calls and unhappy customers. Skyflow is a data privacy vault provider that makes it effortless to secure sensitive data and enforce privacy policies. Skyflow experienced this growth and documentation challenge in early 2023 as it expanded globally from 8 to 22 AWS Regions, including China and other areas of the world such as Saudi Arabia, Uzbekistan, and Kazakhstan. The documentation team, consisting of only two people, found itself overwhelmed as the engineering team, with over 60 people, updated the product to support the scale and rapid feature release cycles. Given the critical nature of Skyflow’s role as a data privacy company, the stakes were particularly high. Customers entrust Skyflow with their data and expect Skyflow to manage it both securely and accurately. The accuracy of Skyflow’s technical content is paramount to earning and keeping customer trust. Although new features were released every other week, documentation for the features took an average of 3 weeks to complete, including drafting, review, and publication. The following diagram illustrates their content creation workflow. Looking at our documentation workflows, we at Skyflow discovered areas where generative artificial intelligence (AI) could improve our efficiency. Specifically, creating the first draft—often referred to as overcoming the “blank page problem”—is typically the most time-consuming step. The review process could also be long, depending on the number of inaccuracies found, leading to additional revisions, reviews, and delays. Both drafting and reviewing needed to be shorter to make documentation timelines match those of engineering. To do this, Skyflow...
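As a rough illustration of the drafting step described above, the following hedged boto3 sketch asks an Anthropic Claude model on Amazon Bedrock for a first documentation draft. The model ID, prompt, and helper name are assumptions for illustration; the excerpt does not show Skyflow’s actual pipeline.

```python
# Illustrative sketch only (not Skyflow's pipeline): generate a first-draft
# doc page from engineering notes using Claude on Amazon Bedrock.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def draft_doc(feature_notes: str,
              model_id: str = "anthropic.claude-3-sonnet-20240229-v1:0") -> str:
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "messages": [{
            "role": "user",
            "content": ("Draft a concise how-to documentation page from "
                        "these engineering notes:\n\n" + feature_notes),
        }],
    }
    response = bedrock.invoke_model(modelId=model_id, body=json.dumps(body))
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]

# Placeholder usage:
# print(draft_doc("New endpoint POST /v1/vaults/{id}/rotate-keys rotates vault keys."))
```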
Read More
Business News

Private payrolls growth slows to 152,000 in May, much less than expected, ADP says

US Top News and Analysis Private job creation slowed more than expected in May, signaling further slowing in the labor market. Go to Source 05/06/2024 - 15:53 / Twitter: @hoffeldtcom
Read More
Business News

Australia economy slows to a crawl in Q1 as households feel inflation squeeze

The Straits Times Business News Annual growth dropped to 1.1 per cent, the slowest pace in three decades. Go to Source 05/06/2024 - 06:09 / Twitter: @hoffeldtcom
Read More
Business News

Yoma says land business unit not involved in sale of Thai properties after shares soar

The Straits Times Business News SINGAPORE - Yoma Strategic clarified on June 5 that its land unit, Yoma Land, is not involved in the business of selling properties in Thailand. Go to Source 05/06/2024 - 03:39 / Twitter: @hoffeldtcom
Read More
Management

Aimbridge Hospitality picks up CFO from Velvet Taco

Human Resources News - Human Resources News Headlines | Bizjournals.com The CFO will join Aimbridge Hospitality on July 8. The Plano-based hospitality management company oversees properties including The Statler in downtown Dallas and the Sheraton Fort Worth Downtown Hotel. Go to Source 05/06/2024 - 00:04 /Alexa Reed Twitter: @hoffeldtcom
Read More
Management

Introducing 15Five’s Evolution as a Strategic Command Center for Performance Management

15Five Our newest enhancements include executive insights, strategic action planning, and AI-guided manager support, unlocking the power of existing people data to drive higher performance, engagement, and retention. Only 2% of CHROs think conventional performance management practices are actually working. Ouch. The conventional approach has been stagnant for decades, but the last thing HR teams need right now is more needless tactics added to their already overburdened plates. That’s why we’re so excited to announce a major platform evolution for 15Five, giving HR teams a powerful new way to understand the intersection of employee performance, engagement, and retention data, implement strategic action plans, and track measurable impact. Every performance review, employee engagement survey, and other HR program yields a wealth of untapped insights. 15Five is giving HR teams the power of their data, helping them see what matters most, broker action through managers, and track the impact at every step. A strategic command center for performance management: 15Five’s HR Outcomes Dashboard is further evolving into a strategic command center for performance management programs, empowering HR leaders to easily explore their own data and develop strategic action plans with leaders and managers. Our newest capabilities include: Trending insights and data visualizations: Historical trend lines for employee performance, engagement, and retention are automatically generated from existing people data. This creates a shared understanding across the entire organization, clarifying what’s working and what’s not. Demographic and performance filters: New filters give HR teams total control over analyzing how HR outcomes vary across demographic attributes such as age, gender, and department, as well as by performance designations and engagement levels. These filters provide deeper insights into specific groups, enabling more targeted and effective HR strategies. Executive dashboards: HR teams can customize and share executive-level dashboards with the rest of their leadership...
Read More
Artificial Intelligence

Streamline custom model creation and deployment for Amazon Bedrock with Provisioned Throughput using Terraform

AWS Machine Learning Blog As customers seek to incorporate their corpus of knowledge into their generative artificial intelligence (AI) applications, or to build domain-specific models, their data science teams often want to conduct A/B testing and have repeatable experiments. In this post, we discuss a solution that uses infrastructure as code (IaC) to define the process of retrieving and formatting data for model customization and initiating the model customization. This enables you to version and iterate as needed. With Amazon Bedrock, you can privately and securely customize foundation models (FMs) with your own data to build applications that are specific to your domain, organization, and use case. With custom models, you can create unique user experiences that reflect your company’s style, voice, and services. Amazon Bedrock supports two methods of model customization: fine-tuning allows you to increase model accuracy by providing your own task-specific labeled training dataset, further specializing your FMs; continued pre-training allows you to train models using your own unlabeled data in a secure and managed environment, and supports customer-managed keys. Continued pre-training helps models become more domain-specific by accumulating more robust knowledge and adaptability beyond their original training. In this post, we provide guidance on how to create an Amazon Bedrock custom model using HashiCorp Terraform, which allows you to automate the process, including preparing the datasets used for customization. Terraform is an IaC tool that allows you to manage AWS resources, software as a service (SaaS) resources, datasets, and more, using declarative configuration. Terraform provides the benefits of automation, versioning, and repeatability. Solution overview We use Terraform to download a public dataset from the Hugging Face Hub, convert it to JSONL format, and upload it to an Amazon Simple Storage Service (Amazon S3) bucket with a versioned prefix. We then create an Amazon Bedrock custom model using...
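The post drives this workflow with Terraform; as a hedged, language-agnostic illustration of what such configuration ultimately provisions, the following Python sketch calls the underlying Amazon Bedrock API to start a customization job. The job names, role ARN, base model, hyperparameters, and S3 paths are all placeholders.

```python
# Hedged sketch of the API call a Terraform resource would provision here:
# starting an Amazon Bedrock model customization (fine-tuning) job.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

job = bedrock.create_model_customization_job(
    jobName="example-finetune-job",                      # placeholder
    customModelName="example-custom-model",              # placeholder
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",  # assumed base model
    customizationType="FINE_TUNING",                     # or "CONTINUED_PRE_TRAINING"
    trainingDataConfig={"s3Uri": "s3://example-bucket/train/data.jsonl"},
    outputDataConfig={"s3Uri": "s3://example-bucket/output/"},
    hyperParameters={"epochCount": "2", "batchSize": "1", "learningRate": "0.00001"},
)
print(job["jobArn"])  # track this job until it completes, then deploy the model
```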
Read More
Artificial Intelligence

Boost productivity with video conferencing transcripts and summaries with the Amazon Chime SDK Meeting Summarizer solution

AWS Machine Learning Blog Businesses today rely heavily on video conferencing platforms for effective communication, collaboration, and decision-making. However, despite the convenience these platforms offer, there are persistent challenges in seamlessly integrating them into existing workflows. One of the major pain points is the lack of comprehensive tools to automate the process of joining meetings, recording discussions, and extracting actionable insights from them. This gap results in inefficiencies, missed opportunities, and limited productivity, hindering the seamless flow of information and decision-making processes within organizations. To address this challenge, we’ve developed the Amazon Chime SDK Meeting Summarizer application, deployed with the AWS Cloud Development Kit (AWS CDK). This application uses an Amazon Chime SDK SIP media application, Amazon Transcribe, and Amazon Bedrock to seamlessly join meetings, record meeting audio, and process recordings for transcription and summarization. By integrating these services programmatically through the AWS CDK, we aim to streamline the meeting workflow, empower users with actionable insights, and drive better decision-making outcomes. Our solution currently integrates with popular platforms such as Amazon Chime, Zoom, Cisco Webex, Microsoft Teams, and Google Meet. In addition to deploying the solution, we also teach you the intricacies of prompt engineering in this post. We guide you through addressing parsing and information extraction challenges, including speaker diarization, call scheduling, summarization, and transcript cleaning. Through detailed instructions and structured approaches tailored to each use case, we illustrate the effectiveness of Amazon Bedrock, powered by Anthropic Claude models. Solution overview The following infrastructure diagram provides an overview of the AWS services that are used to create this meeting summarization bot. The core services used in this solution are: an Amazon Chime SDK SIP media application, used to dial into the meeting and record meeting audio; Amazon Transcribe, used to perform speech-to-text processing of the recorded audio,...
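As a hedged sketch of one step in this pipeline, the following Python snippet starts an Amazon Transcribe job with speaker diarization enabled on recorded meeting audio. The bucket, key, job name, and speaker count are placeholders rather than values from the actual solution.

```python
# Hedged sketch: transcribe recorded meeting audio with speaker labels,
# the diarization step this solution relies on before summarization.
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

transcribe.start_transcription_job(
    TranscriptionJobName="meeting-summarizer-demo",               # placeholder
    Media={"MediaFileUri": "s3://example-bucket/recordings/meeting.wav"},
    MediaFormat="wav",
    LanguageCode="en-US",
    Settings={
        "ShowSpeakerLabels": True,  # label each utterance with a speaker
        "MaxSpeakerLabels": 10,     # assumption: up to 10 participants
    },
    OutputBucketName="example-bucket",  # transcript JSON lands here
)
```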
Read More
Covid-19

No value for money in N.B. use of travel nurses, says auditor general

The province's auditor general said the roughly $173 million the province spent on travel nurses was not justified and didn't correlate to COVID-19-related staff vacancies. Go to Source 04/06/2024 - 15:48 / Twitter: @hoffeldtcom
Read More
Covid-19

Covid charity scam trial juror says she was given bag with $120,000 cash to acquit defendants

Coronavirus | The Guardian Juror reported she was offered money to acquit seven charged with stealing more than $40m from a program meant to feed children. A federal juror was dismissed from duty on Monday after reporting that a woman dropped a bag of $120,000 in cash at her home – and offered her more money if she would vote to acquit seven people charged with stealing more than $40m from a program meant to feed children during the pandemic. “This is completely beyond the pale,” said Joseph Thompson, assistant US attorney, in court on Monday. “This is outrageous behavior. This is stuff that happens in mob movies.” Continue reading... Go to Source 04/06/2024 - 15:48 /Associated Press Twitter: @hoffeldtcom
Read More
Covid-19

Fauci describes ‘credible death threats’ for overseeing US Covid-19 response

Coronavirus | The Guardian Doctor, who was head of infectious diseases unit during height of the pandemic, tells Congress he and his family still get harassed. Anthony Fauci, the former head of the US infectious diseases unit, has received “credible death threats” stemming from his time overseeing the nation’s fight against Covid-19, he has told Congress. Fauci, who was director of the National Institute of Allergy and Infectious Diseases during the height of attempts to halt the spread of the virus, told a hearing on Capitol Hill that the threats had continued until the present day, even though he retired in 2022. Continue reading... Go to Source 04/06/2024 - 00:04 /Robert Tait in Washington Twitter: @hoffeldtcom
Read More
Business News

Why this entrepreneur chose to spend up to $120k a year on a community initiative

The Straits Times Business News The founder of Repair Kopitiam, which gives broken items a second life, shares why he’s against “maximising profits”. Go to Source 04/06/2024 - 00:03 / Twitter: @hoffeldtcom
Read More

The messages, text, and photos belong to the party that sends out the RSS feed, or to parties related to the sender.
